
The Data Daily

Why We Worry About the Ethics of Artificial Intelligence

Written by Jared Sylvester
Abstract
Jared Sylvester, a Booz Allen data scientist, explores the data science community's preparedness as the capabilities of Artificial Intelligence (AI) grow more powerful. In this piece, he dissects the current landscape and discusses how the data science community can move forward together.
As the capabilities of Artificial Intelligence (AI) grow more powerful, we are concerned that the data science community is unprepared for the power we now wield.
To be clear, we’re big believers in the far-reaching good AI can do. Every week we learn of new advances that will dramatically improve the world. Recently we’ve seen research that could improve the way we control prosthetic devices, detect pneumonia, understand long-term patient trajectories, and monitor ocean health. By the time you read this, there will be even more examples. Booz Allen contributes to the flow of discovery, researching and deploying new AI capabilities and products.
There is, however, a darker side. One example is a recent study by Stanford University researchers who developed an algorithm to predict sexual orientation from facial images. When you consider recent news of the detention and torture of more than 100 gay men in the Russian republic of Chechnya, you quickly see the cause for concern. This software and a few cameras positioned on busy street corners would allow the targeting of gay people at industrial scale—hundreds could quickly become thousands. The researchers pointed out that their findings “expose[d] a threat to the privacy and safety of gay men and women.” That disclaimer does little to prevent outside groups from implementing the technology for mass targeting and persecution. The potential isn’t so far-fetched: China is already using CCTV and facial recognition software to catch jaywalkers.
Many technologies can be applied for nefarious purposes. This is not new. What is new about AI is the scale and magnitude of its potential impact. This scope is what will allow it to do so much good, but also so much evil. It is unlike almost any technology that has come before; the notable exception is atomic weapons, a comparison others have already drawn. We hesitate to make this comparison for fear of perpetuating a sensationalistic narrative that detracts from the conversation. However, it is the closest parallel we can think of in terms of scale (the potential to impact tens of millions of people) and magnitude (the potential to do physical harm).
These are not the only reasons we worry about the ethics of AI. We worry because AI is unique in so many ways that we must frame a new discussion rather than draw on guidelines from other disciplines. Consider these points:
Ethics is not [yet] a core commitment in the AI field. Compare this with medicine, where a commitment to ethics has existed for millennia in the form of the Hippocratic Oath. Members of the physics community now pledge their intent to do no harm with their work. In other fields, ethics is part of the very ethos. Not so with AI. Compared to other disciplines, the field is so young we haven’t had time to mature our thinking through lessons from the past. We must look to these other fields and their hard-earned lessons to guide our decisions.    
Computer scientists and mathematicians have never before wielded this kind of power. The atomic bomb is one exception; cyber weapons may be another. Both of these, however, represent intentional applications of technology. While the public was unaware of the Manhattan Project, the scientists involved knew the goal and made an informed decision to take part. The Stanford study described above has clear nefarious applications; many other research efforts in AI may not. Researchers run the risk of unwittingly conducting studies with applications they never envisioned and would not condone. Furthermore, research into atomic weapons could only be implemented by a small number of nation-states with access to the proper materials and expertise. Contrast that with AI, where a reasonably talented coder who has taken openly available machine learning courses can easily implement and effectively “weaponize” published techniques. Within our field, we have never had to worry about this degree of power to harm. We must reset our thinking and approach our work with new rigor, humility, and caution.
Ethical oversight bodies from other scientific fields seem ill-prepared for AI. Looking to existing supervisory authorities seems a logical approach. We’re among those who have suggested that AI is a “grand experiment on all of humanity” and should follow principles borrowed from human subject research. However, the fact that Stanford’s Institutional Review Board (IRB), a respected body within the research community, reviewed and approved research with questionable applications should raise concern. Researchers have long raised questions about the broken IRB system. An IRB system designed to protect the interests of study participants may be unsuited to situations in which potential harm accrues not to the subjects, but to society at large. It’s clear that standards that have served other scientific fields for decades or even centuries may not be equipped for AI’s unique data and technology issues. These challenges are compounded further by the general lack of AI expertise, and sometimes even basic technology expertise, among the members of these boards. We should continue to work with existing oversight bodies, but we must also take an active role in educating them and evolving their thinking about AI.
AI ethical concerns are often not obvious. This differs dramatically from other scientific fields where ethical dilemmas are self-evident. That’s not to say they are easy to navigate. A recent story about an unconscious emergency room patient with a “Do Not Resuscitate” tattoo is a perfect example. Medical staff had to decide whether they should administer life-saving treatment despite the presence of the tattoo. They were faced with a very complex, but very obvious, ethical dilemma. The same is rarely true in AI, where unintended consequences may not be immediately apparent and issues like bias can be hidden in complex algorithms. We have a responsibility to ourselves and our peers to be on the lookout for ethical issues and raise concerns as soon as they emerge.     
AI technology is moving faster than our approach to ethics. Other fields have had hundreds of years for their ethical approach to evolve alongside the science. AI is still nascent, yet we are already moving technology from the lab to full deployment. The speed of that transition has led to notable ethical issues, including potential racial bias in criminal sentencing and discrimination in hiring. The ethical implications of AI need to be studied as much as the core technology if we ever hope to avoid these issues in the future. We need to catalyze an ongoing conversation around ethics, much as we see in fields like medicine.
“The issue that looms behind all this, however, is the fact that we can’t put the genie back in the bottle.”
The issue that looms behind all this, however, is the fact that we can’t put the genie back in the bottle. We can’t undo the Stanford research now that it’s been published. As a community, we will forever be accountable for the technology that we create.
In the age of AI, corporate and personal values take on new importance. We have to decide what we stand for and use that as a measure to evaluate our decisions. We can’t wait for issues to present themselves. We must be proactive and think in hypotheticals to anticipate the situations we will inevitably face.
Be assured that every organization will be faced with hard choices related to AI—choices that could hurt the bottom line or, worse, harm the well-being of people now or in the future. We will need to decide, for example, if and how we want to be involved in government efforts to vet immigrants or create technology that could ultimately help hackers. If we fail to accept that these choices inevitably exist, we run the risk of compromising our values. We need to stand strong in our beliefs and live the values we espouse for ourselves, our organizations, and our field of study. Ethical compromise, like many things, is a slippery slope: compromising one time makes it easier to compromise the next time.
We must also recognize that the values of others may not mirror our own. We should approach those situations with empathy. Instead of reacting in anger or defensiveness, we should use them as an opportunity to have a meaningful dialog around ethics and values. When others raise concerns about our own actions, we must welcome those conversations with humility and civility. Only then can we move forward as a community.  
Machines are neither moral nor immoral. We must work together to ensure they behave in a way that benefits, not harms, humanity. We don’t purport to have the answers to these complex issues. We simply ask that we all keep asking the right questions.
We’re not the only ones discussing these issues. Check out this Medium post by the NSF-funded group Pervasive Data Ethics for Computational Research, Kate Crawford’s amazing NIPS keynote, and Mustafa Suleyman’s recent essay in Wired UK.
