Civilian AI Is Already Being Misused by the Bad Guys
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Last March, a group of researchers made headlines by revealing that they had developed an artificial intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at incredible speed: It took only six hours for the AI tool to suggest 40,000 of them.
The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to check for toxicity in new drugs. Rather than predicting whether a new compound could be dangerous, they paired the model with a generative algorithm and a toxicity dataset and set it to designing new toxic molecules.
The paper was not promoting an illegal use of AI (chemical weapons were banned in 1997). Instead, the authors wanted to show just how easily peaceful applications of AI can be misused by malicious actors—be they rogue states, non-state armed groups, criminal organizations, or lone wolves. Exploitation of AI by malicious actors presents serious and insufficiently understood risks to international peace and security.
Many “responsible AI” initiatives share the same blind spot. They ignore international peace and security.
People working in the field of life sciences are already well attuned to the problem of misuse of peaceful research, thanks to decades of engagement between arms control experts and scientists.
The same cannot be said of the AI community, and it is well past time for it to catch up.
We serve at two organizations that take this cause very seriously: the United Nations Office for Disarmament Affairs and the Stockholm International Peace Research Institute. We’re trying to bring our message to the wider AI community, notably future generations of AI practitioners, through awareness-raising and capacity-building activities.
A blind spot for responsible AI
AI can improve many aspects of society and human life, but like many cutting-edge technologies it can also create real problems, depending on how it is developed and used. These problems include job losses, algorithmic discrimination, and a host of other possibilities. Over the last decade, the AI community has grown increasingly aware of the need to innovate more responsibly. Today, there is no shortage of “responsible AI” initiatives—more than 150, by some accounts—which aim to provide ethical guidance to AI practitioners and to help them foresee and mitigate the possible negative impacts of their work.
The problem is that the vast majority of these initiatives share the same blind spot. They address how AI could affect areas such as healthcare, education, mobility, employment, and criminal justice, but they ignore international peace and security. The risk that peaceful applications of AI could be misused for political disinformation, cyber-attacks, terrorism, or military operations is rarely considered, except very superficially.
This is a major gap in the conversation on responsible AI that must be filled.
Most of the actors engaged in the responsible AI conversation work on AI for purely civilian end-uses, so it is perhaps not surprising that they overlook peace and security. There’s already a lot to worry about in the civilian space, from potential infringements of human rights to AI’s growing carbon footprint.
AI practitioners may believe that peace and security risks are not their problem, but rather the concern of nation states. They might also be reluctant to discuss such risks in relation to their work or products due to reputational concerns, or for fear of inadvertently promoting the potential for misuse.
The misuse of civilian AI is already happening
The diversion and misuse of civilian AI technology are, however, not problems that the AI community can or should shy away from. There are very tangible and immediate risks.
Civilian technologies have long been a go-to for malicious actors, because misusing such technology is generally much cheaper and easier than designing or accessing military-grade technologies. There is no shortage of real-life examples, a famous one being the Islamic State’s use of hobby drones as both explosive devices and tools to shoot footage for propaganda films.
The fact that AI is an intangible and widely available technology with great general-use potential makes the risk of misuse particularly acute. In the cases of nuclear power technology or the life sciences, the human expertise and material resources needed to develop and weaponize the technology are generally hard to access. In the AI domain there are no such obstacles. All you need may be just a few clicks away.
As one of the researchers behind the chemical weapon paper explained in an interview: “You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets.”
We’re already seeing examples of the weaponization of peaceful AI. The use of deepfakes, for example, demonstrates that the risk is real and the consequences potentially far-reaching. Less than 10 years after Ian Goodfellow and his colleagues designed the first generative adversarial network, GANs have become tools of choice for cyber-attacks and disinformation—and now, for the first time, in warfare. During the current war in Ukraine, a deepfake video circulated on social media that appeared to show Ukrainian President Volodymyr Zelenskyy telling his troops to surrender.
The weaponization of civilian AI innovations is also one of the most likely ways that autonomous weapons systems (AWS) could materialize. Non-state actors could exploit advances in computer vision and autonomous navigation to turn hobby drones into homemade AWS. These could not only be highly lethal and disruptive (as depicted in the Future of Life Institute’s advocacy video Slaughterbots) but also very likely violate international law, ethical principles, and agreed standards of safety and security.
Nation states can't address AI risks alone
Another reason the AI community should get engaged is that the misuse of civilian products is not a problem that states can easily address on their own, or purely through intergovernmental processes. This is not least because governmental officials might lack the expertise to detect and monitor technological developments of concern. What’s more, the processes through which states introduce regulatory measures are typically highly politicized and may struggle to keep up with the speed at which AI tech is advancing.
Moreover, the tools that states and intergovernmental processes have at their disposal to tackle the misuse of civilian technologies, such as stringent export controls and safety and security certification standards, may also jeopardize the openness of the current AI innovation ecosystem. From that standpoint, not only do AI practitioners have a key role to play, but it is strongly in their interest to play it.
AI researchers can be a first line of defense as they are among the best placed to evaluate how their work may be misused. They can identify and try to mitigate problems before they occur—not only through design choices but also through self-restraint in the diffusion and trade of the products of research and innovation.
AI researchers may, for instance, decide not to share specific details about their research (the researchers who repurposed the drug-testing AI did not disclose the specifics of their experiment), while companies that develop AI products may decide not to develop certain features, restrict access to code that might be used maliciously, or add by-design security measures such as anti-tamper software, geofencing, and remote switches. Or they may apply the know-your-customer principle through the use of token-based authentication.
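To make that last idea concrete, here is a minimal sketch of what token-gated access to a model could look like. Everything in it is hypothetical: the function names, the signing scheme, and the `run_model` stand-in are ours, not any vendor’s API, and a production system would also need token revocation, rate limits, and audit logging. The point is only that a model can refuse to answer callers who haven’t passed a know-your-customer check.

```python
# Minimal sketch of token-gated model access (illustrative only; all
# names here are hypothetical, not any specific vendor's API).
import hashlib
import hmac
import secrets

# Signing key held by the model operator. Tokens are issued only after
# an out-of-band know-your-customer vetting step.
_SECRET_KEY = secrets.token_bytes(32)

def issue_token(customer_id: str) -> str:
    """Create a signed token tied to a vetted customer identity."""
    sig = hmac.new(_SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()
    return f"{customer_id}.{sig}"

def verify_token(token: str) -> bool:
    """Check the token's signature before serving any model output."""
    try:
        customer_id, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(_SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

def run_model(prompt: str) -> str:
    """Stand-in for the actual model being protected."""
    return f"(model output for: {prompt!r})"

def query_model(token: str, prompt: str) -> str:
    """Refuse to run the model unless the caller presents a valid token."""
    if not verify_token(token):
        raise PermissionError("unrecognized token; access denied")
    return run_model(prompt)

token = issue_token("vetted-lab-042")
print(query_model(token, "benign request"))  # served; a forged token raises
```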
Such measures will certainly not eliminate the risks of misuse entirely—and they may also have drawbacks—but they can at least help to reduce them. These measures can also help keep at bay potential governmental restrictions, for example on data sharing, which could undermine the openness of the field and hold back technological progress.
The responsible AI movement has tools that can help
To engage with the risks that the misuse of AI poses to peace and security, AI practitioners need look no further than existing recommended practices and tools for responsible innovation. There is no need to develop an entirely new toolkit or set of principles. What matters is that peace and security risks are regularly considered, particularly in technology impact assessments. The appropriate risk-mitigation measures will flow from there.
Responsible AI innovation is not a silver bullet for all the societal challenges brought by advances in AI. However, it is a useful and much-needed approach, especially when it comes to peace and security risks. It offers a bottom-up approach to risk identification, in a context where the multipurpose nature of AI makes top-down governance approaches difficult to develop and implement, and possibly detrimental to progress in the field.
Certainly, it would be unfair to expect AI practitioners alone to foresee and to address the full spectrum of possibilities through which their work could be harmful. Governmental and intergovernmental processes are absolutely necessary, but international peace and security, and thus the safety of us all, are best served by the AI community getting on board. The steps AI practitioners can take do not need to be very big, but they could make all the difference.
Authors’ note: This post was drafted as part of a joint SIPRI-UNODA initiative on Responsible Innovation in AI which is supported by the Republic of Korea. All content is the responsibility of the authors.
Vincent Boulanin
Vincent Boulanin is a senior researcher at the Stockholm International Peace Research Institute (SIPRI). He leads SIPRI’s research on the development, use, and control of autonomous weapons systems and military artificial intelligence. His current work focuses on the responsible development and use of artificial intelligence.
Charles Ovink is the political affairs officer at the United Nations Office for Disarmament Affairs (UNODA). He specializes in responsible innovation, the impact of emerging technologies on disarmament and non-proliferation, and the militarization of AI.
The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.
U.S. Navy
Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.
The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
And now the game of submarine hide-and-seek may be approaching the point at which submarines can no longer elude detection and simply disappear. It may come as early as 2050, according to a recent study by the National Security College of the Australian National University, in Canberra. This timing is particularly significant because the enormous costs required to design and build a submarine are meant to be spread out over at least 60 years. A submarine that goes into service today should still be in service in 2082. Nuclear-powered submarines, such as the Virginia-class fast-attack submarine, each cost roughly US $2.8 billion, according to the U.S. Congressional Budget Office. And that’s just the purchase price; the total life cycle cost for the new Columbia-class ballistic-missile submarine is estimated to exceed $395 billion.
The twin problems of detecting submarines of rival countries and protecting one’s own submarines from detection are enormous, and the technical details are closely guarded secrets. Many naval experts are speculating about sensing technologies that could be used in concert with modern AI methodologies to neutralize a submarine’s stealth. Rose Gottemoeller, former deputy secretary general of NATO, warns that “the stealth of submarines will be difficult to sustain, as sensing of all kinds, in multiple spectra, in and out of the water becomes more ubiquitous.” And the ongoing contest between stealth and detection is becoming increasingly volatile as these new technologies threaten to overturn the balance.
We have new ways to find submarines
Today’s sensing technologies for detecting submarines are moving beyond merely hearing submarines to pinpointing their position through a variety of non-acoustic techniques. Submarines can now be detected by the tiny amounts of radiation and chemicals they emit, by slight disturbances in the Earth’s magnetic fields, and by reflected light from laser or LED pulses. All these methods seek to detect anomalies in the natural environment, as represented in sophisticated models of baseline conditions that have been developed within the last decade, thanks in part to Moore’s Law advances in computing power.
Airborne laser-based sensors can detect submarines lurking near the surface.
IEEE Spectrum
According to experts at the Center for Strategic and International Studies, in Washington, D.C., two methods offer particular promise. Lidar sensors transmit laser pulses through the water to produce highly accurate 3D scans of objects. Magnetic anomaly detection (MAD) instruments monitor the Earth’s magnetic fields and can detect subtle disturbances caused by the metal hull of a submerged submarine.
Both sensors have drawbacks. MAD works only at low altitudes or underwater. It is often not sensitive enough to pick out the disturbances caused by submarines from among the many other subtle shifts in electromagnetic fields under the ocean.
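That sensitivity problem is, at bottom, one of pulling a faint residual out of a noisy background. The toy sketch below shows the basic logic, assuming a precomputed background-field model and a made-up noise level; a real MAD system would use calibrated magnetometers, a geomagnetic reference model, and far more careful statistics.

```python
# Toy magnetic-anomaly detector: flag readings that deviate from a
# modeled background field by more than ambient noise can explain.
# The background model, noise level, and injected "hull" signature
# are all synthetic placeholders.
import numpy as np

def detect_anomalies(readings_nt, background_nt, noise_sigma_nt=2.0, k=4.0):
    """Return indices where the measured field departs from the
    background model by more than k standard deviations of noise."""
    residual = np.asarray(readings_nt) - np.asarray(background_nt)
    return np.flatnonzero(np.abs(residual) > k * noise_sigma_nt)

# Synthetic flight track: ~50,000 nT ambient field plus sensor noise,
# with a small bump injected to mimic a ferrous hull passing below.
rng = np.random.default_rng(0)
background = np.full(1000, 50_000.0)
readings = background + rng.normal(0.0, 2.0, size=1000)
readings[480:520] += 15.0 * np.exp(-np.linspace(-2.0, 2.0, 40) ** 2)

print(detect_anomalies(readings, background))  # indices clustered near 480-520
```

The threshold factor k is exactly where MAD struggles in practice: set it low and natural geomagnetic fluctuations flood the detector with false alarms; set it high and a quiet hull slips through.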
Lidar has better range and resolution and can be installed on satellites, but it consumes a lot of power—a standard automotive unit with a range of several hundred meters can burn 25 watts. Lidar is also prohibitively expensive, especially when operated in space. In 2018, NASA launched a satellite with laser imaging technology to monitor changes in Earth’s surface—notably changes in the patterns on the ocean’s surface; the satellite cost more than $1 billion.
Indeed, where you place the sensors is crucial. Underwater sensor arrays won’t put an end to submarine stealth by themselves. Retired Rear Adm. John Gower, former submarine commander for the Royal Navy of the United Kingdom, notes that sensors “need to be placed somewhere free from being trolled or fished, free from seismic activity, and close to locations from which they can be monitored and to which they can transmit collected data. That severely limits the options available.”
One way to get around the need for precise placement is to make the sensors mobile. Underwater drone swarms can do just that, which is why some experts have proposed them as the ultimate antisubmarine capability.
Clark, for instance, notes that such drones now have enhanced computing power and batteries that can last for two weeks between charges. The U.S. Navy is working on a drone that could run for 90 days. Drones are also now equipped with the chemical, optical, and geomagnetic sensors mentioned earlier. Networked underwater drones, perhaps working in conjunction with airborne drones, may be useful for not only detecting submarines but also destroying them, which is why several militaries are investing heavily in them.
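For a sense of what networked coverage means in practice, here is a minimal sketch of the kind of survey planning a swarm might use: split a search box into strips, one per drone, and cover each with back-and-forth “lawnmower” tracks spaced by the sensor’s sweep width. Everything here is a hypothetical simplification; real planners must account for currents, communications limits, energy budgets, and sensor performance.

```python
# Sketch of dividing a search box among a drone swarm with simple
# "lawnmower" survey tracks. Coordinates and spacing are arbitrary
# placeholders, not parameters of any real system.
def lawnmower_tracks(x0, x1, y0, y1, n_drones, track_spacing):
    """Split [x0,x1] x [y0,y1] into one strip per drone, each covered
    by back-and-forth tracks spaced by the sensor's sweep width."""
    strip = (y1 - y0) / n_drones
    plans = []
    for d in range(n_drones):
        lo = y0 + d * strip
        waypoints, y, left_to_right = [], lo, True
        while y <= lo + strip:
            xs = (x0, x1) if left_to_right else (x1, x0)
            waypoints += [(xs[0], y), (xs[1], y)]
            y += track_spacing
            left_to_right = not left_to_right
        plans.append(waypoints)
    return plans

# Four drones covering a 10 km x 4 km box with 500 m track spacing.
for i, plan in enumerate(lawnmower_tracks(0, 10_000, 0, 4_000, 4, 500)):
    print(f"drone {i}: {len(plan)} waypoints")
```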
A U.S. Navy P-8 Poseidon aircraft, equipped to detect submarines, awaits refueling in Okinawa, Japan, in 2020.
U.S. Navy
For example, the Chinese Navy has invested in a fishlike undersea drone known as Robo-Shark, which was designed specifically for hunting submarines. Meanwhile, the U.S. Navy is developing the Low-Cost Unmanned Aerial Vehicle Swarming Technology (LOCUST) for conducting surveillance missions. Each LOCUST drone weighs about 6 kilograms, costs $15,000, and can be outfitted with MAD sensors; it can skim low over the ocean’s surface to detect signals under the water. Militaries study the drone option because it might work. Then again, it very well might not.
Robo-Shark, a 2.2-meter-long submersible made by Boya Gongdao Robot Technology, of Beijing, is said to be capable of underwater surveillance and unspecified antisubmarine operations. The company says that the robot moves at up to 5 meters per second (10 knots) by using a three-joint structure to wave the caudal fin, making less noise than a standard propeller would.
robosea.org
Gower considers underwater drones to be “the least likely innovation to make a difference in the decline of submarine stealth.” A navy would need a lot of drones, data rates are exceedingly slow, and a drone’s transmission range is short. Drones are also noisy and extremely easy to detect. “Not to mention that controlling thousands of underwater drones far exceeds current technological capabilities,” he adds.
Gower says it could be possible “to use drones and sonar networks together in choke points to detect submarine patrols.” Among the strategically important submarine patrol choke points are the exit routes on either side of Ireland, for U.K. submarines; those around the islands of Hainan and Taiwan, for Chinese submarines; those in the Barents Sea or along the Kuril Island chain, for Russian submarines; and the Strait of Juan de Fuca, for U.S. Pacific submarines. On the other hand, he notes, “They could be monitored and removed since they would be close to sovereign territories. As such, the challenges would likely outweigh the gains.”
Gower believes a more powerful means of submarine detection lies in the “persistent coverage of the Earth’s surface by commercial satellites,” which he says “represents the most substantial shift in our detection capabilities compared to the past.” More than 2,800 of these satellites are already in orbit. Governments once dominated space because the cost of building and launching satellites was so great. These days, much cheaper satellite technology is available, and private companies are launching constellations of tens to thousands of satellites that can work together to image every bit of the Earth’s surface. They are outfitted with a wide range of sensing technologies, including synthetic aperture radar (SAR), which scans a scene down below while moving over a great distance, providing results like those you’d get from an extremely long antenna. Since these satellite constellations view the same locations multiple times per day, they can capture small changes in activity.
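The value of those repeated daily passes comes from change detection: persistent structure cancels out between co-registered images, while short-lived surface disturbances stand out. The sketch below illustrates the idea on synthetic arrays standing in for satellite tiles; a real pipeline must also handle co-registration, speckle in SAR imagery, and far subtler signals than this.

```python
# Sketch of revisit-based change detection between two co-registered
# images of the same ocean patch. The arrays are synthetic stand-ins.
import numpy as np

def change_map(pass_a, pass_b, threshold):
    """Pixelwise absolute difference between two passes, thresholded
    to a boolean mask of 'something changed here'."""
    diff = np.abs(pass_a.astype(float) - pass_b.astype(float))
    return diff > threshold

rng = np.random.default_rng(1)
morning = rng.normal(0.0, 1.0, (256, 256))    # sea-clutter-like texture
evening = morning + rng.normal(0.0, 0.2, (256, 256))
evening[100:105, 40:200] += 3.0               # a faint linear wake-like streak

mask = change_map(morning, evening, threshold=1.5)
print(mask.sum(), "changed pixels")           # concentrated along the streak
```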
Experts have known for decades about the possibility of detecting submarines with SAR based on the wake patterns they form as they move through the ocean. To detect such patterns, known as Bernoulli humps and Kelvin wakes, the U.S. Navy has invested in the AN/APS-154 Advanced Airborne Sensor, developed by Raytheon. The aircraft-mounted radar is designed to operate at low altitudes and appears to be equipped with high-resolution SAR and lidar sensors.
Commercial satellites equipped with SAR and other imaging instruments are now reaching resolutions that can compete with those of government satellites and offer access to customers at extremely affordable rates. In other words, there’s lots of relevant, unclassified data available for tracking submarines, and the volume is growing exponentially.
One day this trend will matter. But not just yet.
Jeffrey Lewis, director of the East Asia Nonproliferation Program at the James Martin Center for Nonproliferation Studies, regularly uses satellite imagery in his work to track nuclear developments. But tracking submarines is a different matter. “Even though this is a commercially available technology, we still don’t see submarines in real time today,” Lewis says.
The day when commercial satellite imagery reduces the stealth of submarines may well come, says Gower, but “we’re not there yet. Even if you locate a submarine in real time, 10 minutes later, it’s very hard to find again.”
Artificial intelligence coordinates other sub-detecting tech
Though these new sensing methods have the potential to make submarines more visible, no one of them can do the job on its own. What might make them work together is the master technology of our time: artificial intelligence.
“When we see today’s potential of ubiquitous sensing capabilities combined with the power of big-data analysis,” Gottemoeller says, “it’s only natural to ask the question: Is it now finally possible?” She began her career in the 1970s, when the U.S. Navy was already worried about Soviet submarine-detection technology.
Unlike traditional software, which must be explicitly programmed, the machine-learning approach relevant here, called deep learning, can find patterns in data without outside help. Just this past year, DeepMind’s AlphaFold program achieved a breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins. Earlier work in games, notably Go and chess, showed that deep learning could outdo the best of the old software techniques, even when running on hardware that was no faster.
For AI to work in submarine detection, several technical challenges must be overcome. The first challenge is to train the algorithm, which involves acquiring massive volumes and varieties of sensor data from persistent satellite coverage of the ocean’s surface as well as regular underwater collection in strategic locations. Using such data, the AI can establish a detailed model of baseline conditions, then feed new data into the model to find subtle anomalies. Such automated sleuthing is what’s likeliest to detect the presence of a submarine anywhere in the ocean and predict locations based on past transit patterns.
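A minimal sketch of that baseline-then-anomaly pattern appears below. It uses a low-rank (PCA-style) model fitted to synthetic “normal” sensor vectors and scores new samples by reconstruction error. The data, dimensions, and injected disturbance are all invented for illustration; a real system would use deep networks over vastly larger, multi-sensor datasets.

```python
# Sketch of the baseline-then-anomaly pattern: fit a low-rank model of
# "normal" ocean sensor data, then score new samples by how poorly the
# model reconstructs them. All data here are synthetic placeholders.
import numpy as np

def fit_baseline(X, rank=5):
    """Learn a low-rank subspace capturing normal variation."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:rank]            # principal directions of "normal"

def anomaly_score(x, mean, components):
    """Reconstruction error: distance from the learned normal subspace."""
    centered = x - mean
    recon = (centered @ components.T) @ components
    return float(np.linalg.norm(centered - recon))

# Synthetic baseline: 20-channel sensor vectors driven by 5 latent
# factors plus a little noise, mimicking correlated ocean conditions.
rng = np.random.default_rng(2)
baseline = rng.normal(size=(5000, 5)) @ rng.normal(size=(5, 20))
baseline += 0.1 * rng.normal(size=(5000, 20))
mean, comps = fit_baseline(baseline, rank=5)

normal_sample = baseline[0]
odd_sample = normal_sample + rng.normal(0.0, 5.0, size=20)  # injected disturbance
print(anomaly_score(normal_sample, mean, comps))  # small: fits the baseline
print(anomaly_score(odd_sample, mean, comps))     # much larger: flagged
```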
The second challenge is collecting, transmitting, and processing the masses of data in real time. That task would require a lot more computing power than we now have, on both fixed and mobile collection platforms. But even today’s technology can start to put the various pieces of the technical puzzle together.
Nuclear deterrence depends on the ability of submarines to hide
For some years to come, the vastness of the ocean will continue to protect the stealth of submarines. But the very prospect of greater ocean transparency has implications for global security. Concealed submarines bearing ballistic missiles provide the threat of retaliation against a first nuclear strike. What if that changes?
“We take for granted the degree to which we rely upon having a significant portion of our forces exist in an essentially invulnerable position,” Lewis says. Even if new developments did not reduce submarine stealth by much, the mere perception of such a reduction could undermine strategic stability.
A Northrop Grumman MQ-8C, an uncrewed helicopter, has recently been deployed by the U.S. Navy in the Indo-Pacific area for use in surveillance. In the future, it will also be used for antisubmarine operations.
Northrop Grumman
Gottemoeller warns that “any perception that nuclear-armed submarines have become more targetable will lead to questions about the survivability of second-strike forces. Consequently, countries are going to do everything they can to counter any such vulnerability.”
Experts disagree on the irreversibility of ocean transparency. Because any technological breakthroughs will not be implemented overnight, “nations should have ample time to develop countermeasures [that] cancel out any improved detection capabilities,” says Matt Korda, senior research associate at the Federation of American Scientists, in Washington, D.C. However, Roger Bradbury and eight colleagues at the National Security College of the Australian National University disagree, claiming that any technical ability to counter detection technologies will start to decline by 2050.
Korda also points out that ocean transparency, to the extent that it occurs, “will not affect countries equally. And that raises some interesting questions.” For example, U.S. nuclear-powered submarines are “the quietest on the planet. They are virtually undetectable. Even if submarines become more visible in general, this may have zero meaningful effect on U.S. submarines’ survivability.”
Sylvia Mishra, a new-tech nuclear officer at the European Leadership Network, a London-based think tank, says she is “more concerned about the overall problem of ambiguity under the sea.” Until recently, she says, movement under the oceans was the purview of governments. Now, though, there’s a growing industry presence under the sea. For example, companies are laying many underwater fiber-optic communication cables, Mishra says, “which may lead to greater congestion of underwater inspection vehicles, and the possibility for confusion.”
A Snakehead, a large underwater drone designed to be launched and recovered by U.S. Navy nuclear-powered submarines, is shown at its christening ceremony in Narragansett Bay in Newport, R.I.
U.S. Navy
Confusion might come from the fact that drones, unlike surface ships, do not bear a country flag, and therefore their ownership may be unclear. This uncertainty, coupled with the possibility that the drones could also carry lethal payloads, increases the risk that a naval force might view an innocuous commercial drone as hostile. “Any actions that hold the strategic assets of adversaries at risk may produce new touch points for conflict and exacerbate the risk of war,” says Mishra.
Given the strategic importance of submarine stealth, Gower asks, “Why would any country want to detect and track submarines? It’s only something you’d do if you want to make a nuclear-armed power nervous.” Even in the Cold War, when the United States and the U.K. routinely tracked Soviet ballistic-missile submarines, they did so only because they knew their activities would go undetected—that is, without risking escalation. Gower postulates that this was dangerously arrogant: “To actively track second-strike nuclear forces is about as escalatory as you might imagine.”
“All nuclear-armed states place a great value on their second-strike forces,” Gottemoeller says. If greater ocean transparency produces new risks to their survivability, real or perceived, she says, countries may respond in two ways: build up their nuclear forces further and take new measures to protect and defend them, producing a new arms race; or else keep the number of nuclear weapons limited and find other ways to bolster their viability.
Ultimately, such considerations have not dampened the enthusiasm of certain governments for acquiring submarines. In September 2021 the Australian government announced an enhanced trilateral partnership with the United States and the United Kingdom. The new deal, known as AUKUS, will provide Australia with up to eight nuclear-powered submarines featuring the most coveted propulsion technology in the world. However, it could be at least 20 years before the Royal Australian Navy can deploy the first of its new subs.
The Boeing Orca, the largest underwater drone in the U.S. Navy’s inventory, was christened in April, in Huntington Beach, Calif. The craft is designed, among other things, for use in antisubmarine warfare.
The Boeing Company
As part of its plans for nuclear modernization, the United States has started replacing its entire fleet of 14 Ohio-class ballistic-missile submarines with new Columbia-class boats. The replacement program is projected to cost more than $128 billion for acquisition and $267 billion over their full life cycles. U.S. government officials and experts justify the steep cost of these submarines with their critical role in bolstering nuclear deterrence through their perceived invulnerability.
To protect the stealth of submarines, Mishra says, “There is a need for creative thinking. One possibility is exploring a code of conduct for the employment of emerging technologies for surveillance missions.”
There are precedents for such cooperation. During the Cold War, the United States and the Soviet Union set up a secure communications system—a hotline—to help prevent a misunderstanding from snowballing into a disaster. The two countries also developed a body of rules and procedures, such as never to launch a missile along a potentially threatening trajectory. Nuclear powers could agree to exercise similar restraint in the detection of submarines. The stealthy submarine isn’t gone; it still has years of life left. That gives us ample time to find new ways to keep the peace.