The Data Daily

Can Artificial Intelligence imagine things?

Imagination is a complex process. Simply considering how many elements can make up an imagined scenario gives a sense of how profound this mental process can be.

A team of researchers from the University of Southern California (USC), in the United States, presented a project that seeks to give AI a similar quality, implementing systems capable of creating new concepts from known elements.

At first glance, the idea of training an artificial intelligence system with “human capabilities” may sound strange. However, the purpose is practical, interesting, and attainable.

Imagination is generally defined as a creative process in which new mental images are constructed from previously perceived elements. Translated to AI, a system that has mastered a large number of drug formulas, for example, could break them down into their components and functions and begin testing new recipes.

Mechanisms of this kind have been presented before, but their action is limited to a specific context, such as the drug example just mentioned. Unlike those, the AI developed by the USC researchers can be extrapolated to different applications. This means that in each new scenario, the system should be able to define its own rules and variables and work out how many combinations of attributes are possible.

To achieve this versatility, the researchers used a mechanism similar to the one behind deepfakes. Just as a deepfake algorithm can identify a person’s face and gestures in order to emulate them with a digitally replaced face, this AI can recognize the separate components of each scenario it analyzes.

In a conversation with his university, student Yunhao Ge, part of the team behind this development, illustrated the process using the movie Transformers: “It can take the shape of a Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result will be a Bumblebee-colored Megatron car driving in Times Square, even if this scene was not witnessed during the training session,” he commented.
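The idea described here is to encode each example into attribute-specific parts of a latent vector and then recombine parts taken from different examples. The following is a minimal, illustrative sketch of that recombination step; the encoder/decoder architecture, the fixed latent “slots,” and all names are assumptions made for demonstration, not the authors’ actual implementation.

```python
# Illustrative sketch of attribute-swapping synthesis. The networks, the
# latent partition (SLOTS), and all names are hypothetical stand-ins.
import torch
import torch.nn as nn

# Partition a 96-dimensional latent vector into attribute-specific slots.
SLOTS = {"identity": slice(0, 32), "pose": slice(32, 64), "background": slice(64, 96)}

class Encoder(nn.Module):
    def __init__(self, latent_dim=96):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256),
                                 nn.ReLU(), nn.Linear(256, latent_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=96):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 3 * 64 * 64))
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

def imagine(encoder, decoder, sources):
    """Build a new image by taking each attribute's latent slot from a
    different source image, then decoding the mixed latent vector."""
    z = torch.zeros(1, 96)
    for attr, img in sources.items():
        z[:, SLOTS[attr]] = encoder(img)[:, SLOTS[attr]]
    return decoder(z)

# Usage with random tensors standing in for real images:
enc, dec = Encoder(), Decoder()
megatron = torch.rand(1, 3, 64, 64)      # supplies the "identity" slot
bumblebee = torch.rand(1, 3, 64, 64)     # supplies the "pose" slot
times_square = torch.rand(1, 3, 64, 64)  # supplies the "background" slot
novel = imagine(enc, dec, {"identity": megatron, "pose": bumblebee,
                           "background": times_square})
print(novel.shape)  # torch.Size([1, 3, 64, 64])
```

The hard part, of course, is training the networks so that such slot swaps produce coherent images; the sketch above only shows the mechanics of recombining attributes from different sources.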

In the same conversation, another member of the team, Professor Laurent Itti, commented that “deep learning has already shown unsurpassed performance and promise in many domains, but all too often this has happened through superficial mimicry and without a deeper understanding of the separate attributes that make each object unique,” adding that “this new approach to disentanglement, for the first time, truly unleashes a new sense of imagination in AI systems, bringing them closer to human understanding of the world.”

With systems of this type, autonomous cars could imagine a wide range of scenarios based on weather and environmental factors, strengthening their safety. And as more needs compatible with what this system offers are identified, the catalog of possible applications could continue to grow.

The details of this research were made public in a paper titled “Zero-Shot Synthesis with Group-Supervised Learning,” presented at this year’s International Conference on Learning Representations (ICLR).
