
The Data Daily

Artificial Intelligence and the Future of War


Consider an alternative history for the war in Ukraine. Intrepid Ukrainian Army units mount an effort to pick off Russian supply convoys. But rather than rely on sporadic air cover, the Russian convoys travel under a blanket of cheap drones. The armed drones carry relatively simple artificial intelligence (AI) that can identify human forms and target them with missiles. The tactic claims many innocent civilian lives, as the drones kill nearly anyone who comes close enough to the convoys to threaten them with anti-tank weapons. The Ukrainians attempt to respond with drones of their own, but they are overwhelmed by the far more numerous Russian machines.

It is increasingly plausible that this scenario will play out in the next major war. In fact, the future of AI in war is already here, even if it’s not yet being employed in Ukraine. The United States, China, Russia, Britain, Israel, and Turkey are all aggressively designing AI-enabled weapons that can shoot to kill with no humans in the decision-making loop. These include fleets of ghost ships, land-based tanks and vehicles, AI-enabled guided missiles, and, most prominently, aircraft. Russia is even developing autonomous nuclear weapons; the 2018 U.S. Nuclear Posture Review stated that Russia is developing a “new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo.” Lethal autonomous weapons (LAWs) have already been used in offensive operations to attack human combatants. According to a March 2021 UN Security Council report, a Turkish-made Kargu-2 drone mounted autonomous attacks on human targets in Libya, hunting down retreating logistics and military convoys and “attack[ing] targets without requiring data connectivity between the operator and the munition.”

In reality, autonomous weapons that kill without an active human decision are now hundreds of years old. Land and naval mines have been used since at least the 1700s. Missile defense systems such as the Patriot and Phalanx can operate autonomously to attack enemy aircraft or surface vessels. Furthermore, sentry guns that automatically fire at targets in combat patrol zones have been deployed on armored vehicles.

That said, these systems have largely been defensive in nature. The Rubicon the world is now crossing is the deployment of offensive weapons equipped with enough intelligence to make more complex decisions and to play a major role in conflicts. This would create a battlefield on which robots and autonomous systems outnumber human soldiers.

The attraction of killer robots and autonomous systems is clear. Using them to do the dirty work means that valuable soldiers do not have to die and expensive pilots do not have to fly costly equipment. Robots don’t go to the bathroom, need water, or miss a shot when they sneeze or shake. Robots make mistakes, but so do humans; proponents of offensive AI assume that robot mistakes will be more predictable, with little regard for the unpredictable behavior that emerges from complex systems. Finally, robots can be trained instantly, and replacing them is much faster and cheaper than replacing human combatants.

Most importantly, the political cost of using robots and LAWs is far lower. There would be no footage of captured soldiers or singed corpses, no pilots on their knees in a snowy field begging for mercy. This is why warfare will likely continue to become more remote and faceless, and putting AI on weapons is simply the next logical step along that path: it enables robot weapons to operate at greater scale and react without human input. The military rationale is crystal clear: an army without AI capabilities will be at a great disadvantage. Just as software is eating the world of business, it is also eating the military world. AI is the sharp end of the software spear, leveling the playing field and allowing battlefield systems to evolve at the same speed as popular consumer products. Choosing not to use AI on the battlefield will come to look like a bad business decision, even if it carries tangible moral repercussions.

The Benefits and Risks of Fewer Humans in the Loop

As we explained in our book, The Driver in the Driverless Car, supporters of autonomous lethal force argue that AI-controlled robots and drones might prove to be far more moral than their human counterparts. They claim that a robot programmed not to shoot women or children would not make mistakes under the pressure of battle. Furthermore, they argue, programmatic logic has an admirable ability to reduce the core moral issue to binary decisions. For example, an AI system with enhanced vision might instantly decide not to shoot a vehicle painted with a red cross as it hurtles toward a checkpoint.

These lines of thought are essentially counterfactuals. Are humans more moral if they can program robots to avoid the weaknesses of the human psyche that can cause experienced soldiers to lose their sense of reason and morality in the heat of battle? When it is hard to discern whether an adversary follows any moral compass, as in the case of ISIS, is it better to rely on the cold logic of the robot warrior than on an emotional human being? What if a non-state terrorist organization develops lethal robots that afford it a battlefield advantage? Is that a risk the world should be willing to accept as the price of developing these weapons?

There are clear, unacceptable risks with this type of combat, particularly when robots operate largely autonomously in an environment containing both soldiers and civilians. Consider the example of Russian drones flying air cover and taking out anything that moves on the ground. The collateral damage and the deaths of innocent non-combatants would be horrific. In several instances, including a famous 1979 incident in which a human operator inadvertently set off alarms warning of a Russian nuclear strike, automated systems have given incorrect information that humans debunked just in time to avert a nuclear exchange. With AI, decisions are made far too quickly for humans to correct them. As a result, catastrophic mistakes are inevitable.

We also shouldn’t expect LAWs to remain exclusive to nation-states. Because the computing hardware they depend on keeps getting cheaper and more capable in line with Moore’s Law, they will quickly enter the arsenals of sophisticated non-state actors. Affordable drones can be fitted with off-the-shelf weapons, and their sensors can be tethered to home-grown remote AI systems to identify and target human-like forms.

We currently sit at a crossroads. The horrific brutality of Russia’s invasion of Ukraine has demonstrated yet again that even great powers may cast aside morality for national narratives that are convenient to autocrats and compromised political classes. The next great war will likely be won or lost in part due to the smart use of AI systems. What can be done about this looming threat?

While a full ban on AI-based weapons would have been ideal, it is now impossible and would be counterproductive. For example, a ban would handcuff NATO, the United States, and Japan in future combat and make their soldiers vulnerable. A ban on applying AI systems to weapons of mass destruction is more realistic. Some may call this a distinction without a difference, but the world has successfully limited weapons that can have global impacts before. However, we have crossed the Rubicon and have few choices in a world where madmen like Putin attack innocent civilians with thermobaric rockets and threaten nuclear escalation.

Vivek Wadhwa and Alex Salkever are the authors of The Driver in the Driverless Car and From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation. Their work explains how advancing technologies can be used for both good and evil, to solve the grand challenges of humanity or to destroy it.