
How Artificial Intelligence Is Combating COVID-19 in a Battle That Knows No Borders

Diverse Innovations aims to stimulate discussion about emerging technologies and their broader implications for global affairs and design. This is the final article in our technology and COVID-19 series.

On Monday, December 30th, 2019, an Artificial Intelligence (AI) platform spotted an abnormal trend. BlueDot, a Toronto-based AI and digital health firm, detected a cluster of “unusual pneumonia” infections in Wuhan, China. On New Year’s Eve, it alerted governments, as well as medical, business, and public health officials, to the outbreak, issuing one of the first COVID-19-related warnings. Nine days later, the World Health Organization officially recognized the emergence of a novel coronavirus.

New AI technologies that help tackle health crises are on the rise. With capabilities that can surpass those of humans, AI can be a valuable weapon for regaining control in the fight against COVID-19. In the case of BlueDot, the company applies natural language processing (NLP) and machine learning to track, monitor, and report on infectious disease outbreaks.

BlueDot CEO and Founder Kamran Khan envisioned AI as a tool that could “spread knowledge faster than the diseases spread themselves.” Indeed, it does appear that we are in a race to beat COVID-19. Governments and institutions around the globe are sprinting to limit the further spread of COVID-19 by exploring the uses of various technologies as prevention, diagnostic, and treatment tools.

In this article, I discuss the type of AI technology BlueDot uses to help combat COVID-19, and the data privacy concerns raised by the way AI uses data.

AI systems reproduce human capabilities in a faster, more consistent form, or perform functions that humans cannot. The technology uses algorithms to draw inferences from sets of data. As with any technology, it does not come without limitations. AI outputs reflect patterns in historical training data, meaning that the quality of the output depends on the quality and diversity of the available data. Predictive algorithms are no exception: without good data, they cannot function effectively.
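
To make that data dependence concrete, here is a minimal sketch in Python. The toy headlines, labels, and scoring rule are invented for illustration; this is not BlueDot's method or any production system. It shows that a simple frequency-based classifier can only echo patterns present in its training set:

```python
# A minimal sketch (toy data, not BlueDot's system) of how a
# frequency-based classifier can only echo patterns present in
# its training data.
from collections import Counter

# Hypothetical labelled headlines; the labels are invented for illustration.
training = [
    ("cluster of pneumonia cases reported", "outbreak"),
    ("hospital admissions rise after pneumonia cluster", "outbreak"),
    ("city announces new park opening", "normal"),
    ("local festival draws record crowds", "normal"),
]

# "Training": count how often each word appears under each label.
counts = {"outbreak": Counter(), "normal": Counter()}
for text, label in training:
    counts[label].update(text.split())

def predict(text: str) -> str:
    """Score a headline by how often its words were seen under each label."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

# A headline resembling the training data is classified as expected...
print(predict("pneumonia cluster reported near hospital"))  # -> outbreak
# ...but a genuinely novel signal, absent from the training data, scores
# zero under both labels, so the prediction is arbitrary and unreliable.
print(predict("unusual hemorrhagic fever detected"))
```

The second prediction illustrates the limitation: the model has no signal for terms it has never seen, which is exactly why the quality and diversity of training data matter.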

So if the technology is to make accurate predictions, it requires massive amounts of data. But there are trade-offs. For predictive algorithms, that trade-off could be your data privacy and heightened surveillance. This appears to be a major concern of Americans in a recent study by the University of Oxford’s Center for the Governance of AI. The Center’s survey found that more Americans see AI technology as harmful to humanity than see it as beneficial. In fact, some of the most prominent fears around AI concerned data privacy and surveillance. Even so, as the novel coronavirus began to sweep the globe, we saw a sharp increase in the use of AI tools to track and trace outbreaks more efficiently. Why?

One might attribute the increase in AI use to how powerful the technology can be in combating health crises. AI tools can detect and diagnose infections and predict how a virus will evolve, supporting treatment research and tailored public information. They can help prevent an infection’s spread (e.g., through contact tracing) and improve early-warning systems. Applications of AI to the COVID-19 crisis span detection, prevention, response, and recovery.

BlueDot leverages NLP algorithms to analyze news media and official medical publications from around the world, in many languages, and to identify any mentions of “high-priority diseases.” The results produced by such systems are reasonably reliable. Metabiota, a company similar to BlueDot, demonstrates comparable accuracy with its algorithms: in late February, it predicted there would be 127,000 cumulative cases worldwide by March 3, a figure greater than the actual number but well within the margin of error.
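
As an illustration of that approach, here is a hedged sketch of scanning a news feed for “high-priority disease” mentions. The disease list, sample feed, and flag_mentions helper are hypothetical stand-ins, not BlueDot's actual pipeline, and a real system would rely on machine translation and far richer NLP than keyword matching:

```python
# A hedged sketch of the general technique described above: scanning a
# normalized news feed for mentions of "high-priority diseases". The
# disease list, sample feed, and flag_mentions helper are illustrative
# assumptions, not BlueDot's actual pipeline.
import re

HIGH_PRIORITY_DISEASES = {
    "unusual pneumonia", "novel coronavirus", "ebola", "mers", "sars",
}

def flag_mentions(headline: str, location: str) -> list[dict]:
    """Return one alert record per priority disease the headline mentions."""
    text = headline.lower()
    return [
        {"disease": d, "location": location, "headline": headline}
        for d in HIGH_PRIORITY_DISEASES
        if re.search(rf"\b{re.escape(d)}\b", text)
    ]

# Headlines would arrive in many languages; assume an upstream
# machine-translation step has already rendered them in English.
feed = [
    ("Cluster of unusual pneumonia infections reported", "Wuhan, China"),
    ("City council approves new transit budget", "Toronto, Canada"),
]

for headline, location in feed:
    for alert in flag_mentions(headline, location):
        print(alert)  # only the Wuhan headline produces an alert
```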

The software company stated it would help the federal government respond to COVID-19 by providing information to monitor how the health response is progressing. To do so, it uses anonymous location data from millions of mobile devices to identify how closely people are following public health advice and regulations.
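
A minimal sketch of that aggregation idea follows; the ping format, region labels, and stay-home threshold are assumptions for illustration, not the company's actual system. The key design point is that only coarse, population-level shares are reported, never an individual device's movements:

```python
# A minimal sketch of the aggregation idea: anonymized device pings are
# reduced to coarse, population-level figures, so no individual's
# movements are ever reported. Field names, region labels, and the
# stay-home threshold are assumptions for illustration.
from collections import defaultdict

# Hypothetical anonymized pings: (device_hash, region, km_travelled_today).
pings = [
    ("a1f3", "Region-North", 0.4),
    ("b7c2", "Region-North", 12.6),
    ("c9d8", "Region-South", 0.9),
    ("d2e5", "Region-South", 1.1),
]

STAY_LOCAL_KM = 2.0  # assumed cutoff for "stayed close to home"

by_region = defaultdict(list)
for _, region, km in pings:
    by_region[region].append(km)

# Report only per-region shares, never individual trajectories.
for region, distances in sorted(by_region.items()):
    share = sum(d <= STAY_LOCAL_KM for d in distances) / len(distances)
    print(f"{region}: {share:.0%} of devices stayed local")
```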

Though this technology is promising and carries many benefits, it is not without risks and new threats.

It becomes much more challenging for AI systems to make accurate predictions as outbreaks grow: misinformation and misrepresentation online and in the news media distort the data and make it less reliable. And there is a price to pay for optimal machine-learning predictions.

If an AI system requires more data than it can access, the need for reliable sources poses a controversial question: am I willing to share more of my personal data with businesses and governments in exchange for more accurate machine-learning predictions? It depends on the context.

Given the reality of the world we currently live in, I am willing to share additional personal data to some extent. But what does public opinion say about AI data usage? When business and tech firms were asked a similar question, two-thirds said they were willing to share internal data externally, an openness intended to help develop new AI-enabled efficiencies, products, or value chains. Interestingly, the Center for the Governance of AI found that Americans trust tech companies and non-governmental organizations more than they do governments to manage the use and development of AI technology.

Without trust from the public, AI adoption could be limited, hindering access to the massive amounts of data the technology requires. Without such access, AI tools may never reach their full potential in addressing health crises. Leaders, researchers, and companies must not forget that the risks around privacy, accountability, social bias, and accuracy of AI still stand. If we are to empower AI algorithms, it is vital to establish trust among government leaders, business officials, health care authorities, and AI technology.

Distrust of AI arising from data privacy concerns leaves many people hesitant about its use; meanwhile, others cite it as the key to fighting pandemics. Such interpretive flexibility, in which divergent perceptions of AI tools coexist, suggests considerable tension between technology and society. Are these AI tools an intentional creation to control and adapt to the COVID-19 environment? Does society shape technology, or are our behaviours shaped by technologies?

A pandemic born of a virus that knows no borders can have a debilitating effect on perceptions of uncertainty and control. A global epidemic that brings the world to an abrupt stop disrupts society, but how do individuals within communities respond? In a COVID-19 world, closures of non-essential businesses, quarantine mandates, and physical-distancing rules have turned access to many commodities into privileges no longer available to most. Countries and multilateral institutions alike struggle to control the spread of infection, sustain supply chains, and develop cures for the novel coronavirus.

At this point, AI cannot predict disease outbreaks by itself, no matter how much data it receives. Mobilizing government, business, and health care leaders to trust these tools will reshape our ability to coordinate and respond quickly to the spread of new diseases.
