The Data Daily

Here Is How The United States Should Regulate Artificial Intelligence

In 1906, in response to shocking reports about unsanitary conditions in U.S. meat-packing facilities, Congress passed the Pure Food and Drug Act, establishing the regulatory regime that became the Food and Drug Administration (FDA) and ensuring safe and sanitary food production.

In 1934, in the wake of the worst stock market crash in U.S. history, Congress created the Securities and Exchange Commission (SEC) to regulate capital markets.

In 1970, as the nation became increasingly alarmed about the deterioration of the natural environment, the federal government created the Environmental Protection Agency (EPA) to ensure cleaner skies and waters.

When an entire field begins to create a broad set of challenges for the public, demanding thoughtful regulation, a proven governmental approach is to create a federal agency focused specifically on engaging with and managing that field.

The time has come to create a federal agency for artificial intelligence.

Across the AI community, there is growing consensus that regulatory action of some sort is essential as AI’s impact spreads. From deepfakes to facial recognition, from autonomous vehicles to algorithmic bias, AI presents a large and growing number of issues that the private sector alone cannot resolve.

In the words of Alphabet CEO Sundar Pichai: “There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.”

Yet there have been precious few concrete proposals as to what this should look like.

The best way to flexibly, thoroughly, and knowledgeably regulate artificial intelligence is through the creation of a dedicated federal agency.

Though many Americans do not realize it, the primary manner in which the federal government enacts public policy today is not Congress passing a law, nor the President issuing an executive order, nor a judge making a ruling in a court case. Instead, it is federal agencies like the FDA, SEC, or EPA implementing rules and regulations.

Though barely contemplated by the framers of the U.S. Constitution, federal agencies—collectively referred to as “the administrative state”—have in recent decades come to assume a dominant role in the day-to-day functioning of the U.S. government.

There are good reasons for this. Federal agencies are staffed by thousands of policymakers and subject matter experts who focus full-time on the fields they are tasked with regulating. Agencies can move more quickly, get deeper into the weeds, and adjust their policies more flexibly than can Congress.

Imagine if, every time a pharmaceutical company sought government approval for a new drug, or every time a given air pollutant’s parts-per-million concentration guidelines needed to be revised, Congress had to familiarize itself with all of the relevant technical details and then pass a law on the topic. Government would grind to a halt.

Like pharmaceutical drugs and environmental science, artificial intelligence is a deeply technical and rapidly evolving field. It demands a specialized, technocratic, detail-oriented regulatory approach. Congress cannot and should not be expected to respond directly with legislation whenever government action in AI is called for. The best way to ensure thoughtful, well-crafted AI policy is through the creation of a federal agency for AI.

How would such an agency work?

One important principle is that the agency should craft its rules on a narrow, sector-by-sector basis rather than as one-size-fits-all mandates. As R. David Edelman aptly argued, AI is a “tool with various applications, not a thing in itself.”

Rather than issuing overbroad regulations about, say, explainability or data privacy to which any application of AI must adhere, policymakers should identify concrete AI use cases that merit novel regulatory action and develop domain-specific rules to address them.

Stanford University’s One Hundred Year Study on AI made this point well: “Attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains. Instead, policymakers should recognize that to varying degrees and over time, various industries will need distinct, appropriate, regulations that touch on software built using AI or incorporating AI in some way.”

This new federal agency would need to work closely with other agencies, as there will be extensive overlap between its mandate and the work of other regulatory bodies.

For instance, in crafting policies about the admissible uses of machine learning algorithms in criminal sentencing and parole decisions, the agency would collaborate closely with the Department of Justice, lending its subject matter expertise to ensure that the regulations are realistically designed.

Similarly, the agency might work in tandem with the Treasury Department and the Consumer Financial Protection Bureau (CFPB) to create rules about the proper use of AI in banks' loan underwriting decisions. Such cross-agency collaboration is the norm in Washington today.

There are numerous additional areas in which smart, well-designed AI policy is already needed: autonomous weapons, facial recognition, social media content curation, and adversarial attacks on neural networks, to name just a few.

As AI technology continues its breathtaking advance in the years ahead, it will create innumerable benefits and opportunities for us all. It will also generate a host of new challenges for society, many of which we cannot yet even imagine. A federal agency dedicated to artificial intelligence will best enable the U.S. to develop effective public policy for AI, protecting the public while positioning the nation to capitalize on what will be one of the most important forces of the twenty-first century.