The Data Daily

The Problems With Artificial General Intelligence

We can’t describe what we don’t know.
Oct 28, 2020 · 12 min read
Preface
There are plenty of articles describing Artificial General Intelligence (AGI) out there; the Wikipedia entry on AGI is a great place to start. Most definitions fall somewhere between emulating human cognitive abilities and defining intelligence by some well-thought-out premise: it can adapt to new circumstances, it can derive meaning and reason, it is logical, and so on.
Regardless of which flavor of AGI definition you prefer, what we still don't have is an AGI, because we simply can't build one yet. How can this be? What is preventing us from building one? This article explores these questions and should help if you are understandably lost among the current uses of the terms AGI and AI.
What do we currently have?
I'll sidestep the history of AI and fast-forward to today (late 2020). I also use the term we because the AIs and tools we have right now are both incredibly powerful and accessible to anybody willing to invest a modest amount of time in learning to code with them.
From a nuts-and-bolts perspective, the current advances are built on at least three major fields: Artificial Neural Networks (ANNs), data (big data, data science), and Machine Learning. The lines between them blur often enough to understand the frustration with the terminology; it also doesn't help that what most people picture when they hear of AI is something closer to science fiction.
Of the elements I just mentioned, only a few are inspired by how the brain works. ANNs, for instance, do use neurons, connections between neurons, and even synapses (mathematically represented as tensors, matrices, and vectors), and they employ Machine Learning concepts that can be traced to biological memory research. But they use only a fraction of current knowledge in neuroscience and biology, quickly deviate from it, and those sciences are far from done discovering all there is to discover. A first big opportunity and challenge for the willing.
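To make the "neurons and synapses as tensors" point concrete, here is a minimal sketch of a two-layer network's forward pass in plain Python. Everything here (layer sizes, weights, the `relu` and `layer` helpers) is illustrative, not any particular library's API: the "synapses" are just the numbers in the weight matrices.

```python
import random

random.seed(0)

def relu(v):
    # a neuron "fires" only for positive input
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # each output neuron sums its weighted inputs plus a bias
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

# 4 inputs -> 3 hidden "neurons" -> 2 outputs; the weights are the "synapses"
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b2 = [0.0] * 2

x = [0.5, -0.2, 0.1, 0.9]
hidden = relu(layer(x, w1, b1))
output = layer(hidden, w2, b2)
print(output)
```

That is essentially all the biology that survives the translation: weighted sums and thresholds, a long way from real neurons.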
Tests & Tasks
The preferred method for presumably knowing when we get to an AGI is a test, starting with the famous Turing test, which pits humans against machines masquerading as humans…
The Turing test: in its simplest form, a text-based conversational AI emulates a human; if the evaluator cannot reliably tell the machine from a human, the AI is said to have passed the test.
One point of contention with the Turing test relevant to AGI is that of purpose: since part of its purpose is to fool us, it is unfortunately too narrow to fully describe an AGI. In other words, an artificial agent might be capable of fooling you into thinking it is human, but it is not clear that that is all there is to intelligence.
More broadly, the Turing test and other tests task a machine with exhibiting intelligent, human-like behavior. This ends up being a bit vague, because intelligent behavior is a fuzzy term and there is no consideration of how we arrive at this behavior or how the AI performs the task:
Anthropomorphism aside, there are conditionals and sets of subroutines in conversational AIs that help the AI achieve its task…
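Those "conditionals and subroutines" can be as plain as this toy sketch (the `reply` function and its rules are invented here for illustration): hand-written pattern checks plus a fallback deflection, a trick as old as ELIZA. Nothing in it understands anything, which is the point about weak AI.

```python
def reply(message):
    # hand-written rules, not understanding: the hallmark of a weak AI
    text = message.lower()
    words = text.split()
    if "hello" in words or "hi" in words:
        return "Hello! How are you today?"
    if text.endswith("?"):
        return "That's a good question. What do you think?"
    if "weather" in text:
        return "I hear it's lovely outside."
    # fallback: deflect back to the user, ELIZA-style
    return "Tell me more about that."

print(reply("Hi there"))
print(reply("Is this thing on?"))
```

A large enough pile of such rules can fool an evaluator for a while, yet no one would call the pile a mind.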
The terms Strong AI and Weak AI relate to how the AI achieves its task: a Strong AI can think and has a mind, whereas a weak one merely acts as if it does. To date, all our AIs are weak.
Intelligent behavior.
So maybe it is because we haven't properly defined intelligent behavior. This is an interesting train of thought: maybe we sit on the lower end of universal intelligence and trying to define a higher form is difficult; maybe we are just getting started. Regardless, we can at least name a number of intelligent elements an AGI might need:
Some, but not all, intelligent elements: reason; memory (learn and recall); solve problems, and solve them fast; use logic; be capable of language and communication; create new things; express itself in the real world or environment; perceive the environment; be conscious; be aware of itself and others; understand and have emotions; have common sense…
Knowing these elements gets us closer to describing an AGI, but we still don't know whether simply adding them together will produce one. On the other hand, if recent experience is any guide, advancing each one and/or combining them will generate new technology we will certainly use.
A psychologist AI?
The real problem in making an AGI lies here. Taken in isolation, each of these elements can be solved or at least partially described, yet generalizing and integrating the other elements has proven difficult (a problem within a problem, if you will), but we are getting closer.
What’s so General about AGI?
The term general in AGI is perhaps best understood from the opposite end: you have an AI or algorithm that solves one problem, say image recognition (recognize circles, recognize text, recognize faces). These are sometimes called Narrow AIs and complement the Strong vs. Weak distinction we briefly mentioned.
Narrow AIs fall flat when asked about anything else; if you give an image-recognition AI a slightly different task, such as smell (recognizing odors), it won't know what to do with the information.
If you extend the domain (in this case vision), you reach the limits of current AIs. An AI that can classify hundreds if not thousands of images is fairly common these days; adding more domains is also possible but quite clunky and experimental. You could, for instance, identify faces and voices (increasingly you do this on your phone to log in and issue commands).
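The narrowness is structural, not a matter of effort. Here is a hypothetical nearest-centroid "image classifier" (the `train`/`classify` helpers and the two-feature "face" data are made up for this sketch): it only knows the feature space it was trained on, so features from another domain, say a five-dimensional "odor" vector, are simply outside its world.

```python
def train(samples):
    # samples: {label: list of feature vectors}; keep one centroid per label
    centroids = {}
    for label, vectors in samples.items():
        dims = len(vectors[0])
        centroids[label] = [sum(v[d] for v in vectors) / len(vectors)
                            for d in range(dims)]
    return centroids

def classify(centroids, features):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], features))

# two "image" classes described by two visual features each
faces = {"circle": [[1.0, 1.0], [0.9, 1.1]],
         "square": [[-1.0, -1.0], [-1.1, -0.9]]}
model = train(faces)
print(classify(model, [0.95, 1.05]))

# An "odor" might be a 5-dimensional vector; this model has no notion of
# that domain, so any answer it gives for one would be meaningless.
```

Nothing in the model can even represent the new domain; generalizing means redesigning, not retraining.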
If we ask for one more step of generalization (from the intelligence list), do it fast and efficiently, we hit software and hardware limits, yet our brains perform these tasks effortlessly.
What we want from an AGI (at least right now) is the next step: an AI that can generalize across multiple dimensions the way we seem to. That is a convoluted way of saying we don't know how we humans generalize, but since we do it all the time, the result is clear.
Three levels of generalization, from narrow to multi-domain; notice how much more complex the information and capabilities required by the last AI are.
To emulate the brain or not?
So the next consideration is how we solve these problems and, more crucially, whether that in itself could be another problem. One can think of two main avenues:
The Neuroscience way: If all the intelligent elements that are needed are to be found in human brains one possible solution is to research, understand and emulate them.
The other sciences way: If on the other hand we know what intelligent behaviors we want and we can emulate them via alternative avenues like mathematics and computer science, that could in theory create an AGI.
Knowing how an AGI could be built is not the whole issue; one still needs to build it, for commercial or research purposes, and hardware/software limits, along with the need to inform the AI (a self-learning AI still needs material to learn from), remain challenges.
Take reason, for instance. If you were to define reason mathematically, you would end up with a set of equations and probably an award, yet you would still need the hardware/software and the particular experiences we know one needs in order to reason. All this is to say it is a complex endeavour.
A more formal framework for the discovery/invention of AIs can be found in David Marr's (Vision) three levels of analysis: computational theory, representation/algorithm, and hardware implementation. If you add some of the other levels needed outside pure research to create a finished product, like current image classifiers, you might end up with something like the following illustration:
The terminology or level of detail is not important here; what matters is noticing all the individual parts and levels needed, which leads us to a reasonable last question…
What needs to happen?
Your guess is as good as mine as to which combination of methods will eventually produce an AGI; equally unknown is whether an individual, a team, or even a nation will be responsible (likewise for a timeframe). But we can at least enumerate some of the big missing pieces:
Neuroscience: We know more or less the lay of the land in the brain; that is, we know that a given area is involved in, say, speech production, and that it is composed of neurons of a certain type which behave in such and such a way. What is missing is the complex working of the brain as a whole (the connections), along with a more detailed account of what happens over time, memory considerations, higher cognitive functions, and a ton of details.
Computer Science/AI: All the factors we have enumerated here, which boil down to generalization across domains and intelligent behavior; formalizing them into algorithms and data structures would also be needed.
The current narrow AIs (mainly ML detectors and forecasters) trace their origins to different periods: neuroscience in the late 1800s, AI in the 1960s, technological advances in several other periods, and so on. It is a sort of cauldron with a mix of ingredients, and every now and then something bubbles up; who knows, it could even be accelerating with the late addition of commerce and ample computing power. https://en.wikipedia.org/wiki/History_of_artificial_intelligence
Experiments and applications: Billions of dollars are spent each year by multiple nations and research institutions on theoretical physics; in comparison, there is no concerted effort to generate an AGI. That's not to say there are no AI-related efforts, just that they tend to be narrow and commercial (OpenAI, Numenta, the Allen Institute, FAANG, etc.). Regardless of this challenge (we need purpose-built projects and funding), we can at least describe the basic nature of AGI-centered experimentation and why it's challenging…
The following is a basic example of what would be needed if we followed the biological-emulation route (other routes do exist):
We can divide the problem into multiple individual projects and later integrate them, starting at the sampling level. What we want is to capture the environment in real time in a form we can later use for higher cognitive functions. Here existing technology fails us in an interesting way: we can capture the environment (let's focus on audio and video) in great detail, but we can't reliably do it in real time and encoded the way neurons and receptors work.
So we'd need to somehow recreate the details, either by creating new hardware/software or by using existing technologies in an analogous way. We are barely out of the gate and there are already difficult technical considerations.
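To give a feel for the encoding gap, here is a hypothetical rate-coding sketch (the `to_spikes` function and its parameters are invented for illustration, loosely inspired by how sensory neurons signal intensity): digitized audio samples become spike trains, where a louder sample makes a spike more likely in each time bin.

```python
import math
import random

random.seed(1)

def to_spikes(samples, bins_per_sample=10):
    # rate coding: louder input -> higher spike probability per time bin
    spikes = []
    for s in samples:
        rate = min(1.0, abs(s))  # clamp loudness to [0, 1]
        spikes.append([1 if random.random() < rate else 0
                       for _ in range(bins_per_sample)])
    return spikes

# one cycle of a sine wave standing in for a snippet of audio
signal = [math.sin(2 * math.pi * t / 8) for t in range(8)]
trains = to_spikes(signal)
for s, train in zip(signal, trains):
    print(f"{s:+.2f} -> {train}")
```

Even this crude translation discards phase and timing information real neurons use, which hints at why "capture it like the receptors do" is so hard.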
Let's say we solve these starter problems. The next steps would be to build the higher brain areas and cognitive behaviors into software/hardware, and we are unfortunately back to the neuroscience/computer-science challenge of knowing what to code or build next.
And what about our current narrow AIs: can they simply evolve into AGIs? I think it is possible, although not very probable. Narrow AIs are designed from the ground up to solve a specific type of problem, albeit with a general set of tools (ANNs, NLP, ML, data science), and as such they leave aside many other capabilities needed to generalize. I say it's possible because at some point they become black boxes, and their elements can surely be used as parts of an AGI; but thinking that just adding more data or more neuron layers will suddenly generate new AGI-like behavior seems unlikely. Then again, science and discovery can be surprising that way.
Robotics & Hardware: Speaking of limits, another valid question is how good current hardware is at emulating ours. Once more it varies by domain: in some instances we can make better hardware than what we are born with (movement, visual and auditory perception, for instance), yet we can barely emulate others (the chemical senses, memory, and distributed/parallel processing come to mind). It all really falls apart when we try to integrate domains, though. The main point is that an AGI, from our human-centered point of view, might need to emulate all our senses and motor functions to be considered sufficient, and that's one more thing we don't currently have.
You might have heard the following: the processing power needed to emulate the brain is insufficient, but in the near future we will achieve it and, presumably, an AGI or brain analog. I hope that by now you are suspicious of the totality of such claims, but let's briefly talk about numbers. A simple organism (the small worm C. elegans, about 1 mm long, the size of this point: . ) contains about 300 neurons and a few thousand cells. Recreating one virtually would need very achievable computing power, but far more research into the inner connections, work to build it, and an environment.
A human brain has about 85 billion neurons, 16 billion if we exclude the cerebellum and focus on the cortex, where the bulk of higher cognitive behavior is thought to reside. A modern CPU has about 7 billion transistors, but that comparison doesn't really help, because a transistor is not a neuron; to equate them we would need to add more functions to the transistor. Memory is equally hard to measure, since we store information in a fundamentally different way than computers do, so the numbers game only takes us so far.
Then there is the matter of purpose-built hardware versus the common hardware denominator. If you try to build an AGI out of an average computer/webcam/microphone and software (say Python or some C flavor), you will very quickly find it is like fitting a square peg into a round hole: for our general computing needs we went a completely different route than nature, and reconciling the two is frustrating and challenging. Going the other route (custom hardware/software) might get us there faster, but at a considerably larger research cost, and it might not be possible for a single person or a small team. Pick your poison; the first results might even be underwhelming compared to narrow AIs.
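The back-of-the-envelope arithmetic from the figures above can be written out directly; the point it makes is how deceptively close the raw counts look:

```python
# Rough counts quoted in the text above
cortex_neurons = 16e9        # cortex only, cerebellum excluded
whole_brain_neurons = 85e9
cpu_transistors = 7e9        # a modern CPU die

print(f"cortex neurons / CPU transistors: {cortex_neurons / cpu_transistors:.1f}x")
print(f"whole brain / CPU transistors:    {whole_brain_neurons / cpu_transistors:.1f}x")
# ~2.3x and ~12.1x: seemingly within reach, except a transistor is a switch
# while a neuron is closer to a small computer with thousands of synapses.
```

Ratios of two and twelve sound like a hardware generation or two away, which is exactly why the raw comparison misleads.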
Information: What's the expression? It takes a village to raise a child? How an AGI gets to know everything it needs to become one is not at all clear. Sure, we can feed it everything ever written, plus videos and other digitized information, but we would be leaving out everything we experience individually and in the real world. Your brain, for instance, devotes a lot of real estate to social interactions; we also possess an emotional system and mostly hardwired instincts that help us navigate the day-to-day and be intelligent in our human way. At whatever age you find yourself, you might be aware that you are only as intelligent as your experiences, makeup, and environment have allowed; you've earned it. An AGI would need to earn this as well.
Another way to think about this problem is that a lot of what we call intelligent behavior hinges on the interplay between an organism and its environment. A wild animal will avoid humans and other larger animals; that is instinct (hardwired intelligence, if you will). But if we start providing food or some other benefit, the animal will eventually take to us; that is a slightly different type of intelligent behavior, one that needs an environment or information to arise. In short, we would need to somehow provide this interplay.
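The instinct-versus-learning interplay above can be caricatured in a few lines. This is a made-up toy model (the `simulate` function, the `instinct` baseline, and the `reward` increment are all invented for illustration): a hardwired negative attitude toward humans that repeated feeding gradually overrides.

```python
def simulate(feedings, instinct=-1.0, reward=0.25):
    # instinct: hardwired starting attitude (negative = avoid humans)
    # each feeding nudges the attitude toward approach
    attitude = instinct
    history = []
    for _ in range(feedings):
        attitude += reward
        history.append(round(attitude, 2))
    return history

print(simulate(6))  # attitude crosses from avoidance to approach
```

Even this caricature needs an environment (the feedings) to produce the behavior; without that interplay there is nothing to learn from, which is the point of the paragraph above.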
Final thoughts
An AGI has the potential to fundamentally change the way we use and interact with technology, and perhaps eventually even our place in the universe. It seems at once distant, due to our lack of understanding, and somehow achievable, given the current use of narrow AIs, the existing knowledge base, and available computing power. As this was also a discovery post for myself, I thought a little map could be a good way to summarize it.
A draft, also here be dragons everywhere.
Thanks for reading!
