The Data Daily

6 reasons to pump the brakes on AI

Hype over artificial intelligence reached its zenith in 2017, with CIOs, consultants and academics touting the technology as potentially automating anything from business and IT operations to customer connections. Yet through the first calendar quarter of 2018, several media organizations reported on the dangers of AI, which involves training computers to perform tasks normally requiring human intelligence.

“There’s been so much hype in the media about it and this is just journalists trying to extend the hype by talking about the negative side,” says Thomas Davenport, a Babson College distinguished professor who teaches a class on cognitive technologies.

Perhaps, but the concerns are hardly new and stubbornly persistent, ranging from fears about racial, gender and other biases to automated drones running amok with potentially lethal consequences.

One week after the MIT Technology Review published a story titled "When an AI finally kills someone, who will be responsible?" raising the issue of what laws apply should a self-driving car strike and kill someone, a self-driving Uber car struck and killed a woman in Arizona. Timing, as they say, is everything.

Here CIO.com details some of the concerns regarding adoption of AI, followed by recommendations for CIOs who want to begin testing the technology.

1. How rude

As we learned from Microsoft’s disastrous Tay chatbot incident, conversational messaging systems can be nonsensical, impolite and even offensive. CIOs must be careful about what they use and how they use it. All it takes is one offensive, epithet-spewing outburst from a chatbot to destroy a brand’s friendly image.

2. Poor perception

Though developed by humans, AI is, ironically, not very much like humans at all, according to Google AI scientist and Stanford University Professor Fei-Fei Li, writing in a column for The New York Times. Li noted that while human visual perception is deeply contextual, AI’s ability to perceive images is quite narrow.

Li says that AI programmers will likely have to collaborate with domain experts — a return to the academic roots of the field — to close the gap between human and machine perception.

3. The black box conundrum

Many enterprises want to use AI, including for activities that may provide a strategic advantage, yet companies in sectors such as financial services must be careful that they can explain how AI arrives at its conclusions. It might be logical to infer that a homeowner who manages their electricity bills using products such as the Nest thermostat has more free cash flow to repay their mortgage. But letting an AI incorporate such an inference is problematic in the eyes of regulators, says Bruce Lee, head of operations and technology at Fannie Mae.

"Could you start offering people with Nest better mortgage rates before you start getting into fair lending issues about how you’re biasing the sample set?" Lee tells CIO.com. "AI in things like credit decisions, which might seem like an obvious area, is actually fraught with a lot of regulatory hurdles to clear. So a lot of what we do has to be thoroughly back-tested to make sure that we’re not introducing bias that’s inappropriate and that it is a net benefit to the housing infrastructure. The AI has to be particularly explainable."

Without a clear understanding of how AI software detects patterns and observes outcomes, companies with risk and regulations on the line are left to wonder how strongly they can trust the machines. “Context, ethics, and data quality are issues that affect the value and reliability of AI, particularly in highly regulated industries,” says Dan Farris, co-chairman of the technology practice at law firm Fox Rothschild. “Deploying AI in any highly regulated industry may create regulatory compliance problems.”
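To make Lee’s back-testing point concrete, here is a minimal, hypothetical sketch of one such check: training a credit-approval model that never sees a protected attribute, then comparing its approval rates across groups. This is illustrative only, built on synthetic data with scikit-learn; it is not Fannie Mae’s actual process, and the features, thresholds and four-fifths rule shown are assumptions for the example.

```python
# Hypothetical bias back-test for a credit model (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: income and debt ratio are model features; the
# protected attribute is recorded for testing but NOT used as a feature.
n = 5000
income = rng.normal(50, 15, n)
debt_ratio = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)            # protected attribute, held out
X = np.column_stack([income, debt_ratio])
y = (income - 40 * debt_ratio + rng.normal(0, 10, n) > 20).astype(int)

model = LogisticRegression().fit(X, y)
approved = model.predict(X)

# Back-test: approval rate per protected group, and the gap between them.
rates = [approved[group == g].mean() for g in (0, 1)]
print(f"approval rates: {rates[0]:.2%} vs {rates[1]:.2%}")

# Flag the model if the lower rate falls below four-fifths of the higher,
# a common rule of thumb in disparate-impact analysis.
if min(rates) / max(rates) < 0.8:
    print("WARNING: possible disparate impact; review before deployment")
```

In practice a check like this would run over historical decisions and many group definitions, and passing it is necessary but not sufficient: proxy variables such as ZIP code or, per Lee’s example, thermostat ownership can reintroduce bias that a simple rate comparison misses.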

4. Ethnographic, socioeconomic biases

While running a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S., Stanford University Ph.D. student Timnit Gebru became concerned about racial, gender and socioeconomic biases in her research. The revelation prompted Gebru to join Microsoft, where she is working to ferret out AI biases, according to Bloomberg.

Even AI virtual assistants are encumbered by bias. Have you ever wondered why virtual assistant technologies such as Alexa, Siri and Cortana are female? "Why are we gendering these ‘helper’ technologies as women?" Rob LoCascio, CEO of customer service software concern LivePerson, tells CIO.com. "And what does that say about our expectations of women in the world and in the workplace? That women are inherently ‘helpers’; that they are ‘nags’; that they perform administrative roles; that they’re good at taking orders?"

5. AI leveraged in hacks, deadly attacks

AI’s rapid advances raise the risk that malicious users will soon exploit the technology to mount automated hacking attacks, mimic humans to spread misinformation or turn commercial drones into targeted weapons, according to a 98-page report crafted by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities.

“We all agree there are a lot of positive applications of AI,” Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute, told Reuters. “There was a gap in the literature around the issue of malicious use.” The New York Times and Gizmodo also covered the report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

6. AI will turn us into house cats

Then there is the enslavement theory. Entrepreneur Elon Musk, of Tesla and SpaceX fame, warned that humans run the risk of becoming dependent "house cats" to AI of superior intelligence and capabilities. More recently, Israeli historian Yuval Noah Harari posited that the emergence of AI that automates everything could create a "global useless class." In such a world, Harari argues, democracy would be threatened because humans won’t understand themselves as well as machines do.

These concerns are largely overblown, according to Davenport, who notes, for example, that biases have long existed within the scope of normal analytics projects. "I don’t know anyone who ever worked with analytics who will say bias doesn’t exist there," Davenport says. Davenport, who recently completed a new book on enterprise adoption of AI, "The AI Advantage," says that several large companies are testing AI responsibly.

"We’re now seeing a lot of enterprise applications, and I haven’t heard anyone saying we’re going to discontinue our IT program," Davenport says, adding that the technology remains immature. “The smart companies just keeping working on this stuff and try not to get deterred by pluses and minuses coming from the media.”

Indeed, IT leaders appear to remain largely undeterred by the hype: more than 85 percent of CIOs will be piloting AI programs by 2020 through a combination of buy, build and outsource approaches, according to Gartner. And while Gartner recommends CIOs start building intelligent virtual support capabilities in areas that customers and citizens increasingly expect to be mediated through AI-based assistants, they must also work with their business peers to create a digital ethics strategy.
