The Data Daily

Can Artificial Intelligence Learn to Learn?

As businesses integrate Artificial Intelligence into their systems, technology professionals are looking at a new frontier of AI innovation: Meta-Learning. Meta-learning is, simply put, learning to learn. We humans have the unique ability to learn from almost any situation or surrounding: we adapt our learning and can work out how we learn. To acquire this kind of flexibility, AI needs Artificial General Intelligence.

In other words, it needs to have an effective and efficient way to learn about the learning process.

Scarcity is at the heart of the differences between the learning processes of humans and AI. Humans have the unique problem of limited capacity: we have limited brain power and limited time. This is precisely what makes the human brain so adaptable. It makes the most of each piece of information it receives and, from that, cultivates rich models of the world. We are general-purpose learners: if our learning process is efficient, we can learn almost any subject quickly, although not all of us are fast learners.

In contrast, AI has far more resources, such as computational power. However, AI also learns from far more data than our human brains ever use, and processing these vast amounts of data requires immense computation.

At the same time, as the complexity of AI’s tasks grows, the computation required grows with it. For each inference that spans multiple repositories of data, AI relies on efficient algorithms to make connections between different pieces of data. If the algorithms are not efficient enough for the given data, the required computation can explode. No matter how cheap computing power becomes, that kind of explosive growth is never the scenario we want.
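
To make that concrete, here is a rough sketch (my own illustration, not drawn from any particular AI system) of why algorithmic efficiency matters when an inference has to link records across two repositories of data: a brute-force nested loop compares every pair of records, so its cost grows quadratically with the data, while building a simple index first keeps the cost close to linear.

```python
# Illustration only: linking records across two toy "repositories" by a shared key.
# The nested-loop approach does n * m comparisons; the indexed approach does ~n + m work.
import time

n = 3_000
repo_a = [{"key": i, "value": f"a{i}"} for i in range(n)]
repo_b = [{"key": i, "value": f"b{i}"} for i in range(n)]

start = time.perf_counter()
slow_links = [(a["value"], b["value"])              # brute force: compare every pair
              for a in repo_a for b in repo_b if a["key"] == b["key"]]
slow = time.perf_counter() - start

start = time.perf_counter()
index = {b["key"]: b["value"] for b in repo_b}      # build a hash index once
fast_links = [(a["value"], index[a["key"]])         # then each lookup is constant time
              for a in repo_a if a["key"] in index]
fast = time.perf_counter() - start

assert len(slow_links) == len(fast_links) == n
print(f"nested loops: {slow:.2f}s, indexed join: {fast:.4f}s")
```

Even at this toy scale the indexed join is orders of magnitude faster, and the gap widens rapidly as the repositories grow.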

This is why, today, AI is designed as a specific-purpose learner. By learning from closely related, similar data, AI can process the data efficiently and infer from it without too much cost.

The “learning to learn” problem appeared when technologists were trying to rein in this explosive growth in computation as AI started to infer from data of increasing complexity.

To keep computation from exploding, AI had to figure out the most efficient learning path to take and remember that path. Once its algorithms can determine good learning paths for different types of problems, AI can self-regulate and guide itself to solutions dynamically: choosing a learning path, following it, and adjusting it as conditions change.
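
As a minimal sketch of what “remembering the path” could look like (purely illustrative, with hypothetical problem and strategy names, not a system described in this article): keep a memory that maps each type of problem to the learning strategy that has worked best so far, reuse that strategy when a similar problem appears, and fall back to exploration when the problem type is new.

```python
# Toy "learning-path memory" (illustrative only): remember which strategy
# solved each kind of problem best, reuse it, and revise it when a better
# result comes in.
class PathMemory:
    def __init__(self):
        self.best = {}  # problem_type -> (strategy_name, score)

    def record(self, problem_type, strategy, score):
        # Keep only the highest-scoring strategy seen for this problem type.
        current = self.best.get(problem_type)
        if current is None or score > current[1]:
            self.best[problem_type] = (strategy, score)

    def choose(self, problem_type, default="explore_from_scratch"):
        # Follow the remembered path if one exists, otherwise explore.
        entry = self.best.get(problem_type)
        return entry[0] if entry else default

memory = PathMemory()
memory.record("image_classification", "fine_tune_pretrained", score=0.91)
memory.record("image_classification", "train_from_scratch", score=0.74)
memory.record("tabular_forecasting", "gradient_boosting", score=0.88)

print(memory.choose("image_classification"))  # "fine_tune_pretrained"
print(memory.choose("speech_recognition"))    # falls back to "explore_from_scratch"
```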

This leads to the next problem for AI: “Multi-tasking”.

“Multi-tasking” came about as technologists started to give AI tasks that are related but not sequential. What if independent tasks could be performed simultaneously? And what if, as AI performed certain tasks, it discovered knowledge and data that would help it perform other tasks?

The problem of “Multi-tasking” takes the “learning to learn” problem to the next level.

For AI to be able to “multitask”, it needs to evaluate independent sets of data in parallel, relate pieces of data, and infer connections between them. As it performs the steps of one task, its knowledge needs to be updated so that the knowledge can be applied in other situations. And since the tasks are interrelated, the evaluations need to be carried out by the network as a whole.
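
One common way to realize this, sketched below as a minimal illustration (my own toy example with random data, not a production system), is a network that shares most of its parameters across tasks: each training step updates the shared trunk, so what is learned for one task is immediately available to the others.

```python
# Minimal multi-task sketch: two tasks share one encoder, so knowledge
# learned for one task updates parameters that the other task also uses.
import torch
import torch.nn as nn

class SharedMultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes_a=10, n_classes_b=5):
        super().__init__()
        # Shared trunk: parameters updated by every task.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads: each updated only by its own task.
        self.head_a = nn.Linear(hidden, n_classes_a)
        self.head_b = nn.Linear(hidden, n_classes_b)

    def forward(self, x, task):
        z = self.encoder(x)
        return self.head_a(z) if task == "a" else self.head_b(z)

model = SharedMultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Joint training: each step samples a batch from one task, but the gradient
# flows into the shared encoder either way, so the tasks inform each other.
for step in range(100):
    task = "a" if step % 2 == 0 else "b"
    x = torch.randn(16, 32)                                  # toy inputs
    y = torch.randint(0, 10 if task == "a" else 5, (16,))    # toy labels
    loss = loss_fn(model(x, task), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the trunk is shared, improving the representation for one task also changes the features the other task builds on, which is the mechanism behind the knowledge transfer described above.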

Google’s MultiModel is an example of an AI system that learned to perform eight different tasks simultaneously. Loosely modeled on the way the brain takes in information from different senses, it can detect objects in images, provide captions, recognize speech, translate between four language pairs, and perform grammatical constituency parsing. The system achieves good performance while training jointly on multiple tasks, with a single neural network learning from several different domains of data.

For AI to be more adaptive, it will need to learn to multitask. One application of AI as an adaptive learner is in robotics, where robots learn to perform tasks in dangerous situations in place of humans. A robotic military dog, for instance, would be able to adapt as surveillance or capture situations change, without waiting for specific commands from its human handlers.

As Google’s MultiModel suggests, AI can move toward becoming a general-purpose learner like us. However, getting there will still take some time. There are two parts to this: meta-reasoning and meta-learning. Meta-reasoning focuses on the efficient use of cognitive resources. Meta-learning focuses on humans’ unique ability to learn efficiently from limited cognitive resources and limited data.

In meta-reasoning, one of the key components is strategic thinking. If AI can draw inferences from different types of data, can it also employ efficient cognitive strategies in different situations? Studies are currently being conducted to map the gaps between human cognition and the way AI learns, such as awareness of internal states and the accuracy of memory or confidence. Ultimately, though, meta-reasoning relies on seeing the big picture and on strategic decision making. Strategic decision making has two parts: selecting from the strategies already available, and discovering new strategies as situations demand. Each of these is an area of study in meta-reasoning.
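
As a toy illustration of the first part, selecting from existing strategies (my own sketch with made-up strategy names, not a published meta-reasoning algorithm), a simple epsilon-greedy selector can treat each candidate cognitive strategy as a bandit arm: mostly reuse the strategy with the best observed payoff, but occasionally try the alternatives.

```python
# Illustrative strategy selection: pick the strategy with the best running
# average payoff, exploring alternatives a small fraction of the time.
import random

class StrategySelector:
    def __init__(self, strategies, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {s: {"n": 0, "mean": 0.0} for s in strategies}

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the best mean.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda s: self.stats[s]["mean"])

    def update(self, strategy, reward):
        # Incrementally update the running average payoff of the strategy.
        st = self.stats[strategy]
        st["n"] += 1
        st["mean"] += (reward - st["mean"]) / st["n"]

selector = StrategySelector(["exhaustive_search", "greedy_heuristic", "cached_lookup"])
for _ in range(1000):
    s = selector.choose()
    # In a real system the reward would measure solution quality per unit of
    # compute; here we fake it with a fixed payoff profile per strategy.
    reward = {"exhaustive_search": 0.4, "greedy_heuristic": 0.7, "cached_lookup": 0.9}[s]
    selector.update(s, reward + random.gauss(0, 0.05))

print(selector.choose())  # usually "cached_lookup" after enough trials
```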

In meta-learning, one of the key components is bridging the gap between training models on huge amounts of data and training them on limited data. Models have to be adaptive enough to make accurate decisions from small sets of information across multiple tasks. There are different approaches in this area. Some methods learn the parameters of the learner itself, seeking a set of parameters that works well across different tasks. Others define an optimal learning space, such as a metric space in which learning is most efficient. There are also few-shot meta-learning methods, in which the algorithm learns a little like babies do, generalizing from a minimal amount of data. Each of these is an area of study in meta-learning.
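
As a minimal sketch of the metric-space and few-shot ideas combined (a simplified, prototypical-network-style toy of my own, not a method named in this article): embed the handful of labeled examples, average each class into a prototype, and label a new example by its nearest prototype.

```python
# Toy metric-based few-shot classification: average each class's few
# "support" embeddings into a prototype, then label a query point by the
# nearest prototype in the metric space.
import numpy as np

def prototypes(support_x, support_y):
    # support_x: (n, d) embeddings; support_y: (n,) integer class labels.
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0) for c in classes])

def classify(query_x, classes, protos):
    # Assign each query to the class whose prototype is closest.
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Two classes, three labeled examples each (a "2-way, 3-shot" task) in 2-D.
rng = np.random.default_rng(0)
support_x = np.vstack([rng.normal(0, 0.3, (3, 2)), rng.normal(3, 0.3, (3, 2))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.1, -0.2], [2.9, 3.1]])

classes, protos = prototypes(support_x, support_y)
print(classify(query_x, classes, protos))  # expected: [0 1]
```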

Meta-reasoning and meta-learning are only one part of AI becoming a generalized learner. Putting them together with information from motor and sensory processing will allow the learner to become more human-like.

AI is still learning to become more like humans. Becoming an artificial generalized learner requires extensive research on how humans learn, as well as on how AI can mimic the way humans learn. Adapting to new situations, “multitasking”, and making “strategic decisions” with limited resources are just a few of the hurdles that AI researchers will have to overcome along the way.
