
The Data Daily

Rob Reich: AI Developers Need a Code of Responsible Conduct


Rob Reich wears many hats: political philosopher, director of the Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence. In recent years, Reich has delved deeply into the ethical and political issues posed by revolutionary technological advances in artificial intelligence. His work is not always easy for technologists to hear. In his book, System Error: Where Big Tech Went Wrong and How We Can Reboot, Reich and his co-authors (computer scientist Mehran Sahami and social scientist Jeremy M. Weinstein) argued that tech companies and developers are so fixated on “optimization” that they often trample on human values.

More recently, Reich has argued that the AI community is badly behind in developing robust professional norms. That lag, he says, poses risks to a host of democratic values, from privacy and civil rights to protection against harm and exploitation.

He spoke about the importance of community norms at the Spring 2022 HAI Conference on Key Advances in AI.

In an interview, he elaborated on what this professional code of conduct might look like and who should be involved.

You say that AI and computer science in general are “immature” in their professional ethics. What do you mean?

AI science is like a late-stage teenager, newly aware of its extraordinary powers but without a fully developed frontal cortex that might guide its risky behavior and lead it to consider its broader social responsibilities. Computer science didn’t come into existence until the ’50s and ’60s, and people who had computer science degrees only became socially powerful in the 2000s. In comparison with older fields like medicine or the law — or even garden-variety professions that have licensing requirements — the institutional norms for professional ethics in computer science are developmentally immature.

What kind of ethics and norms is the field of AI lacking?

Think about what happened with a different technological leap: CRISPR, the gene-editing tool that has created transformative opportunities in fields from therapeutics to agriculture. One of its co-inventors, Jennifer Doudna, who shared a Nobel Prize in Chemistry, has told the story of waking from a nightmare one night and asking herself: What would happen if Hitler had this? She decided that biomedical researchers needed to put some limits on the technique, and she helped convene her fellow researchers and their professional societies. They adopted a moratorium on using CRISPR for germline editing (on human eggs, sperm, or embryos).

A few years later, when a researcher actually did use CRISPR on human embryos, he was immediately ostracized by other scientists and disinvited from every professional meeting. No journal would publish his articles. In fact, the Chinese government ultimately put him in prison.

Can you name any AI scientists whose AI model led to their being cast out of the respectable practice of AI science? In my experience, almost no one can. Imagine a person who develops an AI model that looks at your face print and predicts the likelihood of your committing a crime. That strikes me as the equivalent of phrenology and the discredited practice of race science. But right now, my sense is that such work wouldn’t cost a person anything in terms of professional opportunities.

AI has nothing comparable to the footprint of ethics in health care and biomedical research. Every hospital has an ethics committee. If you want to do biomedical research, you have to go through an institutional review board. If you tinker away at a new drug in your garage, you can't just go out and try it on people in your area; the FDA has to approve trials. But if you have an AI model, you can train it however you please, deploy it as you wish, and even release it openly for anyone, including bad actors, to use.

Individual companies, of course, have developed corporate codes of conduct. But unless those corporate practices scale up into industry-wide standards, or into professional norms for all responsible researchers wherever they happen to work, they don't amount to much. They do nothing to prevent bad practices elsewhere, so society is no better off for the gold star affixed to a single company.

What are the benchmark principles that might underlie a code of ethics or an AI bill of rights?

Some of the norms from health care and biomedical research provide a starting point, though I don’t believe one can just export such norms wholesale from medicine to AI.

Take, for example, the Hippocratic Oath: first, do no harm. In AI, researchers and developers could adopt strong norms for understanding how algorithmic models may adversely affect marginalized groups before releasing or deploying any model. They could have norms about privacy rights, drawing on human rights doctrines, that limit the widespread practice of scraping personal data from the open internet without consent. They could develop norms that place appropriate limits on how facial recognition tools are deployed in public. In biometrics, you can point to basic human interests at stake in surveillance, whether it's carried out by a drone, a police camera, or some guy with a cellphone.

What are some actionable ideas to create real traction for a code of ethics?

First, just as happened with CRISPR, it's important for the most prominent AI scientists to speak out in favor of professional ethics and a broader code of responsible AI. The backing of high-status scientists is essential if such norms are to take hold.

Second, beyond the actions of individuals, we need a more institutionally robust approach. Responsible AI is not just a matter of internal regulation through professional norms but also of external regulation via algorithmic auditing agencies and civil society organizations that can hold companies to account. The work of the Algorithmic Justice League is exemplary of the latter.

We don’t necessarily need to create or invent new agencies. We already have, for example, the Equal Employment Opportunity Commission. If they’re not doing it already, they should be looking at how some of these AI-powered hiring tools and resume-screening systems work.
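To make that concrete, here is a minimal sketch, in Python, of the kind of check an auditor might run against a resume-screening system: the four-fifths (80 percent) rule the EEOC already uses as a rough threshold for adverse impact in hiring. The group labels and outcomes below are hypothetical placeholders, not data from any real system.

from collections import defaultdict

def selection_rates(outcomes):
    # Fraction of applicants in each group that the screener advanced.
    advanced, total = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        total[group] += 1
        advanced[group] += int(passed)
    return {g: advanced[g] / total[g] for g in total}

def adverse_impact_ratios(rates):
    # Compare each group's selection rate to the highest-rate group.
    # Ratios below 0.8 fall outside the EEOC's four-fifths rule of thumb.
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes: (demographic group, advanced to interview?)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "adverse impact" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} -> {flag}")

With these toy numbers, group A is advanced 75 percent of the time and group B 25 percent, a ratio of 0.33, which an auditor would flag for further scrutiny.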

We could also have some analog to institutional review boards that oversee research involving human subjects. When someone decides to go scraping images off the web to identify criminal tendencies on the basis of photos and face prints, I ask myself what would have happened if they had gone through an institutional review board. Perhaps it would have said no. But if you’re an AI scientist, you typically don’t have to deal with an institutional review board. You just go off and do it.

Again, that’s where the institutional norms need to catch up with the power of AI.

Should developers be required to carry out an audit for potential biases or other dangers?

Of course. Any significant building project has to have an environmental impact statement. If it turns out you're going to develop a piece of land in a way that threatens an endangered species, at a minimum the developers have to adopt mitigation strategies before going ahead. Analogously, you could imagine algorithmic impact statements: you'd have to show there's minimal risk of bias before a model is put into practice. There are technical approaches to this as well, such as model cards and datasheets for datasets.
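To give a concrete, if simplified, picture of what a model card might record, here is a minimal Python sketch; the field names loosely follow the published "Model Cards for Model Reporting" proposal, and the contents are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Illustrative fields only; not a standard schema.
    model_name: str
    intended_use: str
    out_of_scope_uses: list
    evaluation_data: str
    fairness_metrics: dict  # e.g., selection-rate ratios per group
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="resume-screener-v0 (hypothetical)",
    intended_use="Rank applications for human review; never auto-reject.",
    out_of_scope_uses=["criminal-risk prediction", "surveillance"],
    evaluation_data="Held-out applications with demographic labels (hypothetical).",
    fairness_metrics={"group B vs. group A selection-rate ratio": 0.33},
    known_limitations=["Trained on historical hiring decisions, which may encode bias."],
)
print(card.intended_use)

The point is not the particular fields but that the claims a developer makes about intended use and measured bias travel with the model, where an impact statement or an auditor can check them.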

We also have to significantly upskill the talent that staffs algorithmic auditing agencies. My hope is that technical career pathways extend more broadly beyond startups and big-tech companies. Think of public interest law. Why is it more competitive to get a low-paying job at the Department of Justice than a corporate law gig? At least in part because of the opportunity to do something for the public good.

What will it take to establish the kind of professional or community norms you envision?

Lamentably, it often takes scandals like the Nazi-era medical experiments or the Tuskegee experiments on Black men to provoke a significant reaction from either policymakers or the profession.

But it needn’t be a reactive process. I’d rather see AI science take a proactive approach.

One example is a recent blog post from members of the Center for Research on Foundation Models calling for a review board that would set norms for the responsible release of foundation models.

Another example is a pilot project here at Stanford HAI that requires an Ethics and Society Review (ESR) for any project seeking grant funding. The review panel consists of an interdisciplinary team of experts from anthropology, history, medicine, philosophy, and other fields. Just last December, members of the team published a paper in Proceedings of the National Academy of Sciences that details their findings and how the ESR could be applied to other areas of research, in industry as well as within academia.

It’s a familiar pattern across history that scientific discovery and technological innovation race ahead of our collective capacity to install sensible regulatory guidelines. In System Error, we call this the race between disruption and democracy. With AI, the pace of innovation has accelerated and the frontier of innovation is far ahead of our public policy frameworks. That makes it ever more important to lean on professional norms and codes of conduct so that the development and deployment of novel technologies in AI are pursued with social responsibility.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.
