The Data Daily

How to make sense of the I in AI? - Book introduction

Contributions on the capabilities and projected capabilities of Artificial Intelligence (AI) abound, many of them written by researchers and experts trained outside AI research or computer science. My book, “The quest for a universal theory of intelligence: The mind, the machine, and singularity hypotheses”, to be published by De Gruyter on April 18, 2022, is one of them. At the same time, misperceptions of AI and the fears associated with them abound as well, nurtured by the absence of refutable, rigorous, and bold perspectives on what has actually happened in the scientific and technical development of AI, which leaves much to the imagination. My book is not one of those. Unlike some red herrings whose authors lack pertinent knowledge or promote ill-guided, spurious scenarios, my work attempts to avoid that trap by remaining a strictly philosophical endeavor (philosophy being the field in which its author has been trained).

There is an art of which every human being should be a master: the art of reflection. Very pertinent is the question posed by the English poet Samuel Taylor Coleridge: “If you are not a thinking man, to what purpose are you a man at all?” Daniel Kahneman differentiates between “Thinking, Fast and Slow”. Yet both instinctive, emotional thinking (what he calls fast thinking) and more deliberative, logical or analytical thinking (what he calls slow thinking) can be superficial and hasty, premature and insufficient. In our time, where quick analytical thinking is often praised, rewarded, and overrated, such thinking can go astray, or it may appear indispensable only because people have begun too late to think about an issue. Deep thinking, which takes time, therefore ought to be honored.

As deep thinkers, we stumble upon a conundrum of our time in the so-called second machine age: many people in AI have not thought deeply about the key term in “AI”, namely “intelligence”. Max Tegmark, for example, shares an anecdote from a symposium on AI, organized by the Swedish Nobel Foundation, which he attended: “When a panel of leading AI researchers were asked to define intelligence, they argued at length without reaching consensus. We found this quite funny: there’s no agreement on what intelligence is even among intelligent intelligence researchers!” Most researchers in AI do not even touch upon the concept, whether because they putatively have more important things to say (which frequently leads to technical disputes or wild speculations about singularity or superintelligence), or because doing so would be beyond the parochial scope of a ten-to-twenty-page research article. Sadly, other disciplines studying intelligence, including philosophy, are not doing much better in this regard. Psychology, perhaps most notably, has been dealing with the nature and role of intelligence for more than 100 years, yet it has been biased towards human intelligence and measurable traits such as spatial or linguistic abilities.

Figure 1: One of the most famous studies of experts’ conceptions of human intelligence was done by the editors of the Journal of Educational Psychology (“Intelligence and its measurement”, 1921). The blatant ambiguity, variability, and heterogeneity of the answers in that issue illustrate the cacophony in intelligence research. Source: https://www.amazon.com/Journal-Educational-Psychology-Classic-Reprint/dp/1330313801.

At a time when the intelligent behaviors of smart animals such as crows and octopuses, as well as of artificial animals, from social robots to cognitive assistants, electrify public attention, new answers that allow meaningful comparison between different kinds of intelligence are urgently needed. We thus wonder: how can different intelligent systems, from human to non-human biological to artificial, be catalogued, evaluated, and contrasted, with representations and projections that offer meaningful insights?

In my book, I identify a gap: although much has been explored in the philosophy of AI, the key term “intelligence” itself remains opaque in this connection. The ambitious and ambiguous word “intelligence”, and only derivatively “artificial intelligence”, is the target of philosophical explanation. The objective of my book is not to introduce a new definition, which would merely stipulate a meaning of the word “intelligence” while violating how people use it in real life. Rather, the objective is to lay the cornerstone of a universal theory of intelligence that diminishes our uncertainty about the objects to which we apply the concept. Such a theory not only describes what intelligence is but comes with genuine explanatory power, yielding orientation as well as clear and reliable predictions, and elucidating why things are the way they are: Why, for instance, is a toaster less intelligent than my dog? Does it make sense at all to call a toaster intelligent? Would Google’s program AlphaGo outperform a professor at chess?

As a preview of what is to come, the treatise is composed of four distinct parts. Part I positions the quest for what intelligence is within the question of where it can be found, i.e., in what kinds of creatures, which results in a tour de force through human, animal, and artificial intelligence. Part II turns to the gist of the matter by seeking to understand why we ascribe intelligence to some but not to others, and what we mean by that, thereby erecting a causal theory of intelligence. With that, the main work is done. Parts III and IV, both significantly shorter, underpin the theory-development contribution by testing its application: the former applies the theory to present-day and past AI systems, whilst the latter elaborates on hypotheses about possible Strong AI. The book is available here.

Dr. Christian Hugo Hoffmann is a serial entrepreneur who conducted in-depth research into the nature of intelligence at the Sorbonne in Paris and the KIT in Karlsruhe.
