The trouble with AI is that it lacks a clear definition.
Lacking a clear definition hasn’t stopped many people from fretting about this new, very powerful technology—it is some kind of “intelligence” after all. So we hear calls for “responsible AI,” and watch the establishment of “AI ethics”-related corporate departments, research centers, consulting practices, and new job categories.
Most important, we see the emergence of new, irresponsible government regulation.
A new New York City regulation, going into effect next January, will require companies to conduct audits to identify biases in the AI programs they use for hiring employees. The law, Int. No. 1894-A, requires New York City employers using an “automated employment decision tool” in their hiring and promotion processes to conduct a “bias audit” of such tools. Penalties for non-compliance will run between $500 and $1,500 for each violation.
What is an Automated Employment Decision Tool? “Any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”
To overcome the challenge of poorly defined “AI,” the regulatory mind elected to include all the related (and better understood) terms. But the result is a very broad definition that captures tools that were in use long before the current version of “AI” (aka “deep learning,” or advanced statistical analysis of lots of data) became popular in business circles and a catalyst for new regulation: tools such as automated screening of employees against specific criteria (e.g., academic qualifications or GPA) and testing software.
The “bias audit” is defined by the law as an impartial evaluation by an independent auditor. Sounds like a new employment act for accountants, except that the law does not define the qualifications of an “independent auditor.” More important, and even more irresponsible, the law does not provide any guidelines as to what a “bias audit” may look like. “A spokesman for New York City said its Department of Consumer and Worker Protection has been working on rules to implement the law, but he didn’t have a timeline for when they might be published,” the Wall Street Journal recently reported.
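For readers wondering what such an audit could even measure, here is one illustrative possibility, not anything the law or the city prescribes: comparing selection rates across demographic groups, in the spirit of the long-standing “four-fifths” rule of thumb used in employment-discrimination analysis. The Python sketch below is a minimal illustration under that assumption; the group labels, outcome data, and 0.8 threshold are made up for the example.

```python
# A minimal sketch of one form a "bias audit" could take: comparing selection
# rates across demographic groups and flagging large gaps using the classic
# "four-fifths" rule of thumb. The data, group names, and 0.8 threshold are
# illustrative assumptions, not requirements of the New York City law.

from collections import defaultdict

def selection_rates(records):
    """Compute the share of candidates selected within each group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def impact_ratios(rates):
    """Divide each group's selection rate by the highest group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

if __name__ == "__main__":
    # (group, selected?) pairs produced by a hypothetical screening tool
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]
    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Even this simple calculation raises the questions the law leaves open: which groups to compare, what data the auditor may see, and what threshold counts as “bias.”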
The broader question is whether we need this regulation in the first place. Bias in hiring decisions existed long before computers were invented; these decisions, after all, are made by humans about other humans. You could argue that a clever use of the new automated tools, broadly or narrowly defined, could even help reduce or counter human bias. And a not-so-clever use of any automated tool can lead to much more than bias. It can result, unfortunately and all too often, in the loss of lives.
There is no question that the fear of being “disrupted” for not using “AI” has led many companies to adopt these new computer programs, in some cases even when they are ill-prepared to incorporate them into their processes, including their employment practices. But is badly articulated regulation the solution? Could it merely lead to superficial “audits” acting as a smokescreen, not really “leveling the playing field” for all applicants? A company’s reputation, the potential impact of bad decisions on its bottom line, the pressure to compete for workers in a very tight labor market: could these be stronger incentives than badly written regulation?
A recent survey of more than 1,000 managers found nearly a quarter of respondents reporting that “their organization has experienced an AI failure, ranging from mere lapses in technical performance to outcomes that put individuals and communities at risk.” But only 42% of respondents say that AI is a top strategic priority for their organization, and even among those respondents, only 19% report that their organization has a fully implemented Responsible AI program.
The study’s authors, the Boston Consulting Group (BCG) and MIT Sloan Management Review, concluded that Responsible AI (RAI) leaders “take a more strategic approach to RAI, led by corporate values and an expansive view of their responsibility toward a wide array of stakeholders, including society as a whole. For Leaders, prioritizing RAI is inherently aligned with their broader interest in leading responsible organizations.”
If you are a responsible organization, you use your AI programs (and all your automated tools) responsibly.
Responsible organizations care about the sources and quality of the data used by their AI programs, clearly communicate (internally and externally) the limitations and possible misuse of those programs’ outputs, and are wary of over-reliance on automated decision-making. They act as their own regulators, with clearly established rules and guidelines.
Given that today’s “AI” is in many cases based on data collected online by third parties, government regulation is sometimes required, especially regarding individuals’ privacy and security. But it must be responsible regulation: regulation that clearly defines what it is about and clearly sets forth what compliance with it means.