The Data Daily

Periodic review of the artificial intelligence industry reveals challenges

As part of Stanford's ongoing 100-year study on artificial intelligence, known as the AI100, two workshops recently considered the issues of care technologies and predictive modeling to inform the future development of AI technologies.

"We are now seeing a particular emphasis on the humanities and how they interact with AI," said Russ Altman, Stanford professor of engineering and the faculty director of the AI100. The AI100 is project of the Stanford Institute for Human-Centered Artificial Intelligence.

After the first meeting of the AI100, the group planned to reconvene every five years to discuss the status of the AI industry. The idea was that reports from those meetings would capture the excitement and concerns regarding AI technologies at that time, make predictions for the next century and serve as a resource for policymakers and industry stakeholders shaping the future of AI in society.

But the technology is moving faster than expected, and the organizers of the AI100 felt there were issues to discuss before the next scheduled session. The reports that resulted from those workshops paint a picture of the potential pitfalls of outsourcing our problems to technology rather than addressing their causes, and of allowing outdated predictive models to go unchecked. Together, they provide an intermediate snapshot that could guide discussions at the next full meeting, said Altman.

"The reports capture the cyclical nature of public views and attitudes toward AI," said Peter Stone, professor of computer science at the University of Texas in Austin who served as study panel chair for the last report, and is now chair of the standing committee. "There are times of hype and excitement with AI, and there are times of disappointment and disillusionment—we call these AI winters."

This longitudinal study aims to encapsulate all the ups and downs—creating a long-term view of artificial intelligence.

Although artificial intelligence is widespread in healthcare apps, participants in the workshop debating AI's capacity to care concluded that care itself is not something that can be encoded in technology. They therefore recommended that new technologies be integrated into existing human-to-human care relationships.

"Care is not a problem to be solved; it is a fundamental part of living as humans," said Fay Niker, a philosophy lecturer at the University of Stirling, and chair of the Coding Caring workshop. "The idea of a technical fix for something like loneliness, for example, is baffling."

The workshop participants frame care technologies as tools to supplement human care relationships, such as those between a caregiver and a care-receiver. Technology can certainly give reminders to take medication or track health information, but it is limited in its ability to display empathy or provide emotional support, which cannot be commodified or reduced to outcome-oriented tasks.

"We worry that meaningful human interaction could be frozen out by tech," said Niker. "The hope is that the AI2020 report, and other work in this area, will contribute to preventing this 'ice age' by challenging and hence changing the culture and debate around the design and implementation of caring technologies in our societies."

AI technologies may be capable of learning, but they are not immune to becoming outdated, prompting participants in the second workshop to introduce the concept of "expiration dates" to govern their deployment over time. "They train on data from the past to predict the future," said Altman. "Things change in any field, so you need to do an update or a reevaluation."

"It means we have to pay attention to the new data," said David Robinson, a visiting scientist from Cornell's College of Computing and Information Science, and one of the workshop organizers. Unless otherwise informed, the algorithm will blindly assume that the world has not changed, and will provide results without integrating newly introduced factors.

Important decisions can hinge on these technologies, including risk assessments in the criminal justice system and screenings by child protection services. But Robinson stressed that the final decision results from the combination of the algorithm's output and the interpretation of those using the technology. The information the AI provides should receive as much scrutiny as the users who interpret it.

Both workshops concluded that AI technology needs regulation, according to Altman, which should come as no surprise to anyone attuned to popular-culture depictions of the field. Whether the industry can regulate itself, or what other entities should oversee progress in the field, remains an open question.

Participants and organizers alike feel that the AI100 has a role to play in the future of AI technologies. "I hope that it really helps educate people and the general public on how they can and should interact with AI," said Stone. Perhaps even more importantly, the outcomes from the AI100 reports can be referenced by the policymakers and industry insiders shaping how these technologies are developed.
