The Data Daily

Can Artificial Intelligence Enable Machines with Fluid Intelligence?

MIT and Austrian researchers have created “liquid” machine learning for more flexible artificial intelligence.

A novel class of artificial intelligence (AI) machine learning will be unveiled at the upcoming Thirty-Fifth AAAI Conference on Artificial Intelligence in early February 2021. Researchers at the Massachusetts Institute of Technology (MIT) and in Austria have created a new kind of neural network, called “liquid” machine learning, that gives AI a more fluid form of intelligence. This new class of machine learning may adapt better to the dynamic fluctuations of complex, real-world problems.

In areas where data streams change over time, developing more flexible AI that can learn quickly is mission-critical. Real-world applications with time-series data include video processing, epidemiology, financial markets, economic indicators such as gross domestic product (GDP), health monitoring, weather forecasting, atmospheric pollution, autonomous vehicles, robotics, aviation, and medical imaging, to name just a few.

The concept of fluid versus crystallized intelligence dates back to 1963, when it was introduced by one of the most influential psychologists of the 20th century, Raymond Cattell (1905-1998). Fluid intelligence is the ability to think flexibly, reason, and process novel information in real time. In contrast, crystallized intelligence refers to knowledge gained from prior learning of facts, skills, and experiences.

“We introduce a new class of time-continuous recurrent neural network models,” wrote the study’s authors, led by Ramin Hasani, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Hasani’s co-authors include MIT professor and CSAIL Director Daniela Rus, MIT PhD student Alexander Amini, Mathias Lechner of the Institute of Science and Technology Austria, and Radu Grosu of the Vienna University of Technology.

Recurrent neural networks that use ordinary differential equations (ODEs) to model continuous-time hidden states are a common choice for time-series data. The research team set out to improve on this structure to “enable richer representation learning and expressiveness.”
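For readers unfamiliar with this family of models, here is a minimal, hypothetical sketch in Python (NumPy) of the idea: the hidden state x follows an ODE dx/dt = f(x, u) and is advanced by a numerical integrator between observations. The tanh dynamics, the random weights, and the Euler integrator below are illustrative stand-ins, not the authors’ actual architecture.

```python
import numpy as np

# Sketch of a continuous-time RNN: the hidden state x evolves under an
# ODE dx/dt = -x + tanh(W_rec @ x + W_in @ u + b), integrated with a few
# explicit Euler substeps per observation. All parameters are placeholders.

rng = np.random.default_rng(0)
hidden_dim, input_dim = 8, 3
W_rec = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
W_in = rng.normal(scale=0.1, size=(hidden_dim, input_dim))    # input weights
b = np.zeros(hidden_dim)

def dynamics(x, u):
    """dx/dt for a CT-RNN-style cell: a leak term plus a nonlinear drive."""
    return -x + np.tanh(W_rec @ x + W_in @ u + b)

def step(x, u, dt=0.05, substeps=4):
    """Advance the hidden state by dt using explicit Euler substeps."""
    h = dt / substeps
    for _ in range(substeps):
        x = x + h * dynamics(x, u)
    return x

x = np.zeros(hidden_dim)
for u in rng.normal(size=(20, input_dim)):  # a toy input sequence
    x = step(x, u)
print(x)
```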

“Instead of declaring a learning system’s dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates,” wrote the researchers.

The result is the liquid time-constant (LTC) recurrent neural network (RNN). The advantage of this new type of recurrent neural network is that it is more expressive by design, and therefore inherently more transparent and explainable.

This expressiveness gives researchers better visibility into some of the neural network’s “thinking” process, a benefit that helps demystify some of the complex cognition inside the “black box” of artificial intelligence machine learning.

“The resulting models represent dynamical systems with varying (i.e., liquid) time-constants coupled to their hidden state, with outputs being computed by numerical differential equation solvers,” the research team wrote. “These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks.”
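To make that description concrete, the sketch below shows one plausible reading of the LTC update in NumPy: a learned nonlinearity simultaneously modulates the state’s effective time constant and drives the state toward a learned bias vector, and the ODE is advanced with a semi-implicit (fused Euler) step for stability. The parameter names (tau, A, W_rec, W_in) and the untrained random weights are illustrative assumptions, not the team’s released implementation.

```python
import numpy as np

# Sketch of a liquid time-constant (LTC) update, following dynamics of the
# form dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A, where f is a learned
# nonlinear gate. The semi-implicit (fused Euler) step keeps the update
# stable. All parameters here are illustrative placeholders.

rng = np.random.default_rng(0)
hidden_dim, input_dim = 8, 3
W_rec = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_in = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
b = np.zeros(hidden_dim)
tau = np.ones(hidden_dim)        # base time constants (learned in practice)
A = rng.normal(size=hidden_dim)  # bias-like target state (learned in practice)

def f(x, u):
    """Nonlinear gate that both modulates the time constant and drives x."""
    return np.tanh(W_rec @ x + W_in @ u + b)

def ltc_step(x, u, dt=0.05):
    """One fused (semi-implicit) Euler step of the LTC ODE."""
    gate = f(x, u)
    return (x + dt * gate * A) / (1.0 + dt * (1.0 / tau + gate))

x = np.zeros(hidden_dim)
for u in rng.normal(size=(20, input_dim)):  # a toy input stream
    x = ltc_step(x, u)
print(x)
```

Because the gate depends on the current state and input, the effective time constant varies from moment to moment; this is the “liquid” behavior the quote above refers to.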

To evaluate their new model, the team performed a number of experiments with their liquid time-constant recurrent neural network. The experiments included training a classifier to identify hand gestures from motion data, predicting room occupancy from sensor data streams (temperature, carbon dioxide levels, humidity, and other readings), and recognizing human activity (e.g., standing, walking, and sitting) from smartphone data. Additional tests included sequential MNIST, kinematic dynamics modeling, and the prediction of traffic, hourly household power consumption, ozone concentration levels, and other types of human activity.

Compared to other recurrent neural network models (LSTM, CT-RNN, Neural ODE, and CT-GRU), the new model yielded improvements of roughly 5 to 70 percent in four of the seven time-series prediction experiments.

Artificial intelligence is rapidly expanding across industries and functions. The more flexible, fluid, and transparent AI machine learning becomes, the greater the potential for improved AI safety and performance in the future.
