The Data Daily

Improving QA Game Testing with Evolved AI

Artificial intelligence is evolving, and with it, so is the process of quality assurance (QA) in game development. The traditional approach to game testing can no longer ensure that games are error-free. Developers must now use AI techniques to identify potential bugs and glitches before they can cause problems in the game and add months of delays to aggressive launch and update schedules. A new approach to QA promises to improve the overall quality of games, allow developers to scale their efforts and keep players happy.

It seems that with each new video game release, the bar is set higher and higher. The days of games relying solely on scripted responses and predetermined actions are long gone. Players today expect their adversaries to be as lifelike as possible, exhibiting believable decision-making skills and reacting realistically to whatever situation they find themselves in. Shaping the behavior of NPCs and other characters in the game world is a fascinating frontier in the field of AI, but before that can happen, it has become increasingly critical for developers to incorporate robust QA testing into their development process to ensure that the game world works properly and does not disrupt players’ experience. Thankfully, advancements in AI technology are making this easier than ever before. Let’s take a closer look at some of these advancements and how they impact QA in game development.

AI tools are being used by developers for purposes such as bug detection and correcting gameplay errors. This trend is likely to continue as AI technology becomes more sophisticated. As a result, QA teams need to be prepared to use new AI models for testing in order to ensure the highest level of quality for their products.

QA has evolved dramatically since the early days of game development, when it was the Wild West and people were making it up as they went along. Back then, the most popular method for testing game code was to simply run the game and look for any obvious bugs or glitches. This process was not only time-consuming, it was also often ineffective, as many bugs would go undetected until after the game was released.

Game publishers soon brought on professional testers. It sounds like a great job, since they play video games all day, but in reality it can be brain-numbingly boring. They play the same thing over and over, go to as many places as they can in the world, look for shortcuts, test geometry and gravity glitches, find where players get stuck, and so on. QA tends to be a combination of internal people hired as game testers and teams from outside. The reason is that when people play the same game over and over, they miss things; they can't see the forest for the trees. Developers seek new perspectives from fresh eyes that can give input not only on the quality of the game world, but also on the gameplay and content, to help them better understand whether, and where in the storyline, it is interesting or boring.

With the advent of more sophisticated AI tools, developers can now detect potential bugs and glitches much earlier in the development process. These tools can be used to automatically test game code for errors and potential problems. This allows developers to fix issues before they cause problems in the game, which can save months of delays in the launch and update schedules.

The question many developers are asking is whether they can get through testing and QA by having bots play the game. Thanks to AI, the answer is a resounding yes, but that yes comes with a caveat. New AI models allow bots to scour the game world looking for bugs and for ways to break it. What is more interesting, however, is not just that bots can play the game, and play it well, but that they can play it as a human would.

Bots that exhibit human-like behavior and take the actions a human would take are only the beginning. They also need variance. Humans are complex creatures, and developers need bots that play like a new player, a skilled player, a casual player and so on. They also need expansive iterations of those profiles. For instance, how would an aggressive medium-skilled player react in a certain scenario, or what would a highly skilled player do when they are distracted or angry? The number of player demographics is only eclipsed by the volume of play styles.
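
To make that concrete, here is a minimal sketch, in Python, of how such player profiles might be parameterized and varied. The class, field names, archetypes and numbers are illustrative assumptions for this article, not any particular studio's tooling.

```python
import random
from dataclasses import dataclass

@dataclass
class PlayerProfile:
    """Illustrative bundle of traits a test bot could be conditioned on."""
    skill: float        # 0.0 = new player, 1.0 = highly skilled
    aggression: float   # 0.0 = cautious, 1.0 = aggressive
    curiosity: float    # how often the bot wanders off the main path
    attention: float    # 1.0 = focused, lower values simulate distraction

def sample_profiles(n: int, seed: int = 0) -> list[PlayerProfile]:
    """Generate many variations around a few archetypes to widen test coverage."""
    rng = random.Random(seed)
    archetypes = [
        PlayerProfile(skill=0.2, aggression=0.3, curiosity=0.7, attention=0.8),  # new
        PlayerProfile(skill=0.5, aggression=0.5, curiosity=0.5, attention=0.6),  # casual
        PlayerProfile(skill=0.9, aggression=0.7, curiosity=0.3, attention=0.9),  # expert
    ]
    clamp = lambda v: min(1.0, max(0.0, v))
    profiles = []
    for _ in range(n):
        base = rng.choice(archetypes)
        # Jitter each trait so no two bots play exactly alike.
        profiles.append(PlayerProfile(
            skill=clamp(base.skill + rng.gauss(0, 0.1)),
            aggression=clamp(base.aggression + rng.gauss(0, 0.1)),
            curiosity=clamp(base.curiosity + rng.gauss(0, 0.1)),
            attention=clamp(base.attention + rng.gauss(0, 0.1)),
        ))
    return profiles
```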

On top of that, there is work on how to explore a world. Testers are responsible for capturing all the relatively rare events. Players may be curious whether it is possible to look behind the dragon, for example; even though doing so does nothing to advance the game, they may want to know what's back there. Developers will absolutely need to know if that action crashes the game or if the player gets stuck. Diversity in play styles and total exploratory coverage are fundamental to QA. Today's AI is working to meet that challenge.
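
One way to reason about exploratory coverage is simply to track which parts of the world the bots have actually visited. The toy sketch below assumes a coarse grid over world space; the class name and cell size are hypothetical, not a real engine API.

```python
class CoverageMap:
    """Tracks which world-space cells test bots have visited (coarse grid, illustrative)."""

    def __init__(self, cell_size: float = 5.0):
        self.cell_size = cell_size
        self.visited: set[tuple[int, int, int]] = set()

    def record(self, x: float, y: float, z: float) -> None:
        # Quantize the bot's position into a grid cell and remember it.
        cell = (int(x // self.cell_size),
                int(y // self.cell_size),
                int(z // self.cell_size))
        self.visited.add(cell)

    def coverage(self, total_reachable_cells: int) -> float:
        """Fraction of the reachable world the bots have actually touched."""
        return len(self.visited) / total_reachable_cells
```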

Bots that play the game well are only the first step. To get there, Reinforcement Learning has been widely used. Reinforcement Learning is the area of machine learning concerned with what actions intelligent agents, such as bots, should take in an environment. It works by a kind of advanced trial and error: bots learn from their mistakes, adjusting their behavior based on those interactions so as to maximize reward. AI Planning, a sister field to Reinforcement Learning, helps increase bots’ autonomy and flexibility through the construction of sequences of decisions and actions to achieve their goals. Depending on the technical details of your game, you might be able to use both planning and reinforcement learning.
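
As a rough illustration of the trial-and-error idea, here is a self-contained tabular Q-learning sketch on a toy "corridor" game. It is a deliberately tiny stand-in for the far larger systems used on real games, and every name and number in it is invented for the example.

```python
import random

# Toy "game": a corridor of 6 tiles; the bot starts at tile 0 and is rewarded at tile 5.
N_TILES, GOAL = 6, 5
ACTIONS = [-1, +1]  # step left, step right

def step(state: int, action: int) -> tuple[int, float, bool]:
    next_state = max(0, min(N_TILES - 1, state + action))
    done = next_state == GOAL
    reward = 1.0 if done else -0.01  # small cost per move encourages efficiency
    return next_state, reward, done

q = {(s, a): 0.0 for s in range(N_TILES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(200):  # episodes of trial and error
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Learned policy: the bot figures out it should keep stepping right toward the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_TILES)})
```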

With the availability of large data sets, the industry is now seeing major growth in unsupervised learning, or self-supervised learning. This is where agents/bots are trained on data sets that don’t have clear labels. Self-supervised learning is critical to QA in game development because it gives rise to an area that is changing everything: foundation models.
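
A toy way to see the self-supervised idea: the training signal is derived from the unlabeled data itself, for instance by predicting the next action in a recorded play trace. The sketch below uses a simple count-based model as a stand-in for the sequence models used at scale; the action names and logs are made up for illustration.

```python
from collections import Counter, defaultdict

# Unlabeled gameplay logs: just sequences of recorded actions, no human annotations.
trajectories = [
    ["spawn", "move", "move", "jump", "attack", "loot"],
    ["spawn", "move", "jump", "move", "attack", "loot"],
    ["spawn", "move", "move", "move", "jump", "attack"],
]

next_action_counts: dict[str, Counter] = defaultdict(Counter)
for actions in trajectories:
    for current, nxt in zip(actions, actions[1:]):
        next_action_counts[current][nxt] += 1  # the "label" is simply the next action

def predict_next(action: str) -> str:
    """Most likely next action given the current one, learned without any labels."""
    return next_action_counts[action].most_common(1)[0][0]

print(predict_next("attack"))  # -> "loot", inferred purely from the raw traces
```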

Basically, foundation models are machine learning models that are trained at massive scale using self-supervised methods. There has been plenty of work recently on foundation models for text and images, but more recently, I and others have been working on foundation models of behavior. Once you have these, you can condition them to play in various styles, such as socially, aggressively and so on. They can be placed in new situations and told to play in a certain style. What’s more, these agents can work across different games, including games they’ve never seen before. They can be asked to do specific actions, such as “Go look behind the dragon, but do it as a curious player would,” or instructed to just go explore. These exploration algorithms can be unleashed by the thousands, millions and even billions to check every nook and cranny of a big game space the way a human would.
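
To give a flavor of how such style-conditioned agents might be driven by a QA harness, here is a hypothetical interface sketch. The class names, method signatures and the dummy policy inside are assumptions made for this article and do not describe any real product's API.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Observation:
    position: tuple[float, float, float] = (0.0, 0.0, 0.0)
    nearby_objects: list[str] = field(default_factory=list)

class BehaviorModel:
    """Stand-in for a pretrained foundation model of behavior."""
    ACTIONS = ["move_forward", "turn_left", "turn_right", "jump", "inspect"]

    def act(self, obs: Observation, style: str, goal: str = "explore") -> str:
        # A real model would map (observation, style, goal) to an action
        # distribution; here we fake it so the sketch runs end to end.
        if style == "curious" and "dragon" in obs.nearby_objects:
            return "inspect"  # a curious player pokes at unusual things
        return random.choice(self.ACTIONS)

def run_exploration_bot(model: BehaviorModel, style: str, steps: int) -> list[str]:
    """Drive one bot for a fixed number of steps and return its action trace."""
    obs = Observation(nearby_objects=["dragon"])
    trace = []
    for _ in range(steps):
        action = model.act(obs, style=style, goal="go look behind the dragon")
        trace.append(action)
        # In a real harness the action would be sent to the game client, the next
        # observation read back, and crashes or stuck states logged as bugs.
    return trace

model = BehaviorModel()
for style in ["curious", "aggressive", "casual"]:
    print(style, run_exploration_bot(model, style, steps=5))
```

In practice, many such bots would be launched in parallel, each with a different style and goal, with their traces and any failures fed back to the developers.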

Not only does this enable absolutely comprehensive QA coverage of the game world, it also lets developers scale like never before. It’s not about replacing developers or even testers, but about giving them new tools. These bots, driven by foundation models that are in turn trained on huge amounts of game-playing data, will allow developers to test more, better, faster and more often. This helps optimize the game development process: for example, it becomes easier to take risks and try new ideas when testing is cheap, good and plentiful, which will in turn lead to better games for all of us!

Julian Togelius is an Associate Professor of Computer Science and Engineering at New York University and a co-founder of modl.ai, which empowers game developers and publishers with extraordinary AI technology that helps unlock the potential of their games. He also directs the NYU Game Innovation Lab and was the editor-in-chief of the IEEE Transactions on Games and Co-director of the. In 2018, he published the textbook “Artificial Intelligence and Games” with Georgios N. Yannakakis.
