DeepMind AlphaTensor: The delicate balance between human and artificial intelligence

This article is part of our coverage of the latest in AI research.

DeepMind has made another impressive artificial intelligence announcement with AlphaTensor, a deep reinforcement learning system that discovers algorithms to make matrix multiplications much more efficient.

Matrix multiplication is at the heart of many computational tasks, including neural networks, 3D graphics, and data compression. Therefore, there are many immediate applications for an AI system that can improve the efficiency of matrix multiplication.

To create AlphaTensor, scientists at DeepMind used AlphaZero, the deep learning system that previously mastered board games like Go, chess, and shogi. At first glance, it seems that DeepMind has managed to create an AI system general enough to tackle a wide range of unrelated problems. And given that AlphaTensor finds faster matrix multiplication algorithms, and matrix multiplication powers neural networks themselves, you can even take the more ominous view of AI systems creating better AI systems.

But a deeper reality of AlphaTensor, which has been less highlighted in its media coverage, is how the right combination of human and artificial intelligence can help find the right solutions to the right problems.

In an essay not so long ago, I argued that the technology we consider artificial intelligence today is in reality a very good solution-finder. Humans are still the ones who find meaningful problems and formulate them in ways that can be tackled with computers. These are skills that, for the time being, remain unique to humans.

And in a recent interview, computer scientist Melanie Mitchell explained this from a different perspective: that of concepts, analogies, and abstractions. Humans can turn their perceptions and experiences into abstract concepts and then project those abstractions onto new perceptions and experiences—or create analogies. This ability is extremely important for solving problems in an ever-changing world where you constantly face new situations. And it is sorely lacking in today’s AI systems.

The vanilla method for multiplying two matrices is to compute each entry of the result as the dot product (or inner product) of a row of the first matrix with a column of the second. But there are numerous other algorithms for multiplying two matrices, many of which are computationally more efficient than the vanilla method. Finding these optimal algorithms, however, is very difficult because of the near-infinite ways you can decompose the product of two matrices.
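
As a point of reference, here is a minimal Python sketch of the vanilla method (my illustration, not code from the paper). Multiplying an n×m matrix by an m×p matrix this way costs n·m·p scalar multiplications, and it is this multiplication count that faster algorithms try to reduce.

```python
def matmul_naive(A, B):
    """Vanilla matrix multiplication: each entry of the result is the
    dot product of a row of A with a column of B."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]  # n * m * p scalar multiplications overall
    return C
```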

So, we’re dealing with a very complex problem space. In fact, the problem space is so complicated that the DeepMind scientists focused only on finding multiplication algorithms for small matrices.

“We focus here on practical matrix multiplication algorithms, which correspond to explicit low-rank decompositions of the matrix multiplication tensor,” the researchers write. “In contrast to two-dimensional matrices, for which efficient polynomial-time algorithms computing the rank have existed for over two centuries, finding low-rank decompositions of 3D tensors (and beyond) is NP-hard and is also hard in practice. In fact, the search space is so large that even the optimal algorithm for multiplying two 3 × 3 matrices is still unknown.”
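
To make “low-rank decomposition” concrete: Strassen’s classic algorithm amounts to a rank-7 decomposition of the tensor for multiplying two 2×2 matrices, getting by with 7 scalar multiplications where the vanilla method needs 8. A minimal sketch:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (m1..m7)
    instead of 8: a rank-7 decomposition of the 2x2 multiplication tensor."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively to block matrices, that single saved multiplication is what pushes the asymptotic cost below cubic. AlphaTensor searches for decompositions of exactly this kind, but for larger matrix sizes.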

The researchers also note that previous attempts at matrix decomposition through human search, combinatorial search, and optimization techniques have yielded sub-optimal results.

DeepMind had previously tackled other very complicated search spaces, such as the board game Go. AlphaGo and AlphaZero, the AI systems that mastered Go, use deep reinforcement learning, which has proven especially good at tackling problems that can’t be solved through brute-force search methods.

But to be able to apply deep reinforcement learning to matrix decomposition, the researchers had to formulate the problem in a way that could be solved with a model like AlphaZero. Accordingly, they had to make modifications to AlphaZero so that it could find optimal matrix multiplication algorithms. This is where the power of abstraction and analogy-making is displayed to its fullest.

The researchers found that they could frame matrix decomposition as a single-player game, which makes it much more compatible with the kind of problems that AlphaZero has been applied to.

They call the game TensorGame and describe it as follows: “At each step of TensorGame, the player selects how to combine different entries of the matrices to multiply. A score is assigned based on the number of selected operations required to reach the correct multiplication result.”

Basically, they drew an analogy between board games and matrix decomposition and formulated the latter as a reinforcement learning problem with states, actions, and rewards. The paper contains detailed and interesting information on how they designed the reward system, including how it limits the number of moves the agent can make and penalizes longer solutions; for the sake of brevity, I won’t go through all of those details here.
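
To give a flavor of that framing, here is a heavily simplified sketch of the game loop. The class, the move limit, and the flat per-move penalty are illustrative assumptions, not the paper’s exact environment.

```python
import numpy as np

class TensorGameSketch:
    """Matrix decomposition as a single-player game: the state is the
    residual tensor still to be expressed, and each action subtracts one
    rank-one term (the outer product of vectors u, v, w), which stands
    for one scalar multiplication in the resulting algorithm."""

    def __init__(self, target_tensor, max_moves=12):
        self.state = np.array(target_tensor, dtype=np.int64)
        self.max_moves = max_moves
        self.moves = 0

    def step(self, u, v, w):
        self.state -= np.einsum("i,j,k->ijk", u, v, w)
        self.moves += 1
        solved = not self.state.any()            # residual reduced to all zeros
        done = solved or self.moves >= self.max_moves
        return self.state, -1, done              # every move costs one point
```

Reaching an all-zero residual after r moves corresponds to a rank-r decomposition, i.e., an algorithm that multiplies the matrices with r scalar multiplications; the per-move penalty is what makes shorter algorithms score higher.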

It is interesting that, from a zoomed-out view, board games and matrix decomposition have several things in common: they are perfect-information games (no information is hidden from the agent), they are deterministic (nothing in the environment happens at random), and they use discrete (as opposed to continuous) actions. This is why AlphaZero is a much better starting point than, say, AlphaStar, the deep reinforcement learning system that mastered StarCraft II.

However, the problem space of matrix decomposition remains very complicated. The researchers describe TensorGame as “a challenging game with an enormous action space (more than 10^12 actions for most interesting cases) that is much larger than that of traditional board games such as chess and Go (hundreds of actions).”

This requires a model that can find the most promising directions among the huge number of paths it could explore.

AlphaTensor is a modified version of AlphaZero but stays true to the main structure, which is composed of a neural network and a Monte Carlo tree search (MCTS) algorithm. At each step of the game, the neural network feeds a sampling of possible actions to the MCTS algorithm. The network gradually gets better as it receives feedback on its actions.
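
Schematically, that interaction looks something like the loop below. The object names and methods are placeholders for illustration, not DeepMind’s actual API.

```python
def play_episode(env, network, mcts, simulations=400):
    """Sketch of an AlphaZero-style loop: the network proposes candidate
    actions, tree search refines the choice, and the episode's outcome
    becomes the feedback used to train the network."""
    state, done = env.reset(), False
    trajectory, total_reward = [], 0
    while not done:
        candidates = network.sample_actions(state)             # policy head proposes moves
        action = mcts.search(state, candidates, simulations)   # search picks among them
        trajectory.append((state, action))
        state, reward, done = env.step(action)
        total_reward += reward
    network.train_on(trajectory, total_reward)                 # feedback improves the network
```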

The neural network is a transformer model that “incorporates inductive biases for tensor inputs,” according to the paper. Inductive biases are design decisions that help the deep learning model learn the right representations for the problem at hand. Without them, the model would probably not be able to tackle the extremely large and complicated problem space of matrix decomposition, or it would require much more training data.

Another important aspect of AlphaTensor is the synthetic data used to train the neural network, which is another departure from the AlphaZero model. Here, again, the researchers took advantage of the nature of the problem to boost the training and performance of the model.

“Although tensor decomposition is NP-hard, the inverse task of constructing the tensor from its rank-one factors is elementary,” the researchers write.

Leveraging this characteristic, the researchers generated a set of “synthetic demonstrations” by first sampling rank-one factors at random and then constructing the tensor from those factors. The model was then trained on both the synthetic data and the data it generated by exploring the problem space.
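
That inverse construction is easy to sketch: sample random rank-one factors, sum their outer products, and you get a tensor whose decomposition is known by construction. The factor distribution below is an illustrative assumption.

```python
import numpy as np

def synthetic_demonstration(rank, dims, seed=0):
    """Build a (tensor, factors) training pair the easy way around: the
    sampled factors are, by construction, a rank-`rank` decomposition of
    the returned tensor."""
    rng = np.random.default_rng(seed)
    factors = [tuple(rng.integers(-2, 3, d) for d in dims)  # small integer entries
               for _ in range(rank)]
    tensor = sum(np.einsum("i,j,k->ijk", u, v, w) for u, v, w in factors)
    return tensor, factors
```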

“This mixed training strategy—training on the target tensor and random tensors—substantially outperforms each training strategy separately. This is despite randomly generated tensors having different properties from the target tensors,” the researchers write.

AlphaTensor produced very impressive results, including the discovery of thousands of new matrix multiplication algorithms and the capacity to optimize algorithms for specific types of processors (given the right reward function).

I will not go into the details here and highly recommend reading the full paper, which also enumerates some of the concrete applications that AlphaTensor can enable. What I wanted to highlight is the human side of these AI systems, which is often overlooked in the media coverage and taken for granted in the papers.

Like Google’s AI-designed chip and DeepMind’s AlphaCode, AlphaTensor is a prime example of how human intelligence and computing power can combine to find solutions to interesting problems. Humans used their intuition, abstraction, and analogy-making skills to formulate matrix decomposition as a problem that can be solved with deep reinforcement learning. The AI system then used computing power to search the vast space of possible solutions and pick promising candidates. This is a killer combination that shouldn’t be underestimated.
