Explaining Explainable AI for Conversations
Within the space of just two or three decades, artificial intelligence (AI) has left the pages of science fiction novels and become one of the cornerstone technologies of modern-day society. Success in machine learning (ML) has unleashed a torrent of new AI applications, from autonomous machines and biometrics to predictive analytics and chatbots.

One emerging application of AI in recent years is conversational intelligence (CI). While automated chatbots and virtual assistants are concerned with human-to-computer interaction, CI aims to explore human-to-human interaction in greater detail. The ability to monitor and extract data from human conversations, including tone, sentiment and context, has seemingly limitless potential.

For instance, data from call center interactions could be generated and logged automatically, with everything from speaker ratio and customer satisfaction to call summaries and points of action filed without human effort. This would dramatically cut down the bureaucracy involved in call handling and give agents more time to speak with customers. What’s more, the data generated could be used to shape staff training programs and to recognize and reward outstanding work.
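To make that concrete, here is a minimal Python sketch of the kind of per-call metrics such a pipeline might log. The transcript format, the keyword lists and the call_metrics helper are illustrative assumptions rather than any vendor's API; a production CI system would use trained speech-to-text and sentiment models instead of keyword matching.

```python
# Toy example: compute speaker talk ratio and a crude sentiment score
# for one call. Everything here is a simplified stand-in for real models.

transcript = [
    ("agent",    "Thanks for calling, how can I help you today?"),
    ("customer", "My invoice is wrong and I am unhappy about it."),
    ("agent",    "I am sorry about that, let me fix it right away."),
    ("customer", "Great, thank you, that is a big help."),
]

# Keyword lists are a placeholder for a trained sentiment model.
POSITIVE = {"thanks", "thank", "great", "help"}
NEGATIVE = {"wrong", "unhappy", "problem"}

def call_metrics(turns):
    words_by_speaker = {}
    sentiment = 0
    for speaker, text in turns:
        tokens = text.lower().replace(",", "").replace(".", "").split()
        words_by_speaker[speaker] = words_by_speaker.get(speaker, 0) + len(tokens)
        sentiment += sum(t in POSITIVE for t in tokens)
        sentiment -= sum(t in NEGATIVE for t in tokens)
    total = sum(words_by_speaker.values())
    # Share of words spoken by each party, a common proxy for speaker ratio.
    speaker_ratio = {s: round(n / total, 2) for s, n in words_by_speaker.items()}
    return {"speaker_ratio": speaker_ratio, "sentiment_score": sentiment}

print(call_metrics(transcript))
```

Even metrics this simple, logged automatically for every call, would spare agents the after-call write-up described above.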

But there’s something missing – trust. Deploying AI in this way is incredibly useful, but for now it still requires a leap of faith on the part of the businesses using it.

As businesses, and as a society at large, we place a great deal of trust in AI-based systems. Social media companies like Twitter now employ AI-based algorithms to clamp down on hate speech and keep users safe online. Healthcare providers around the world are increasingly leveraging AI, from chatbots that can triage patients to algorithms that help pathologists make more accurate diagnoses. The UK government has adopted an AI tool known as “Connect” to help parse tax records and detect fraudulent activity. There are even examples of AI being used to improve law enforcement outcomes, with tools such as facial recognition, crowd surveillance and gait analysis used to identify suspects.

We make this leap of faith in exchange for a more efficient, connected and seamless world. That world is built on “big data”, and we need AI to help us manage the flow of that data and put it to good use. That’s as true in a macro sense as it is for individual businesses. But despite our increasing dependence on AI as a technology, we know precious little about what goes on under the hood. As data volumes grow and the paths an AI takes to reach a determination become more elaborate, we lose the ability to comprehend and retrace those paths. What we’re left with is a “black box” that’s next to impossible to interpret.

This raises the question: how can we trust AI-based decisions if we can’t understand how those decisions are made? It is a growing source of frustration for businesses that want to ensure their systems are working correctly, meeting the relevant regulatory standards and operating at maximum efficiency. Consider the recruitment team at Amazon, who had to scrap their secret AI recruiting tool after they realized it was showing bias against women. They thought they had the “holy grail” of recruiting – a tool that could scan hundreds of resumes and pick out the top few for review, saving them countless hours of work. Instead, through repetition and reinforcement on historical hiring data – resumes that came mostly from men – the model taught itself that male candidates were preferable to female ones. Had the team continued to trust the AI blindly – and they did, for a short period – the consequences for the company could have been devastating.
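A lightweight audit could have surfaced a problem like this much sooner. The sketch below is a hypothetical illustration, with made-up candidate scores and a made-up threshold, of the simplest possible check: compare the model's average score across groups and flag a large gap for human review.

```python
# Hypothetical bias audit: the scores, groups and threshold are invented
# for illustration, not taken from Amazon's actual system.

candidates = [
    {"group": "men",   "model_score": 0.81},
    {"group": "men",   "model_score": 0.74},
    {"group": "women", "model_score": 0.52},
    {"group": "women", "model_score": 0.58},
]

def mean_score_by_group(rows):
    totals = {}
    for row in rows:
        running = totals.setdefault(row["group"], [0.0, 0])
        running[0] += row["model_score"]
        running[1] += 1
    return {group: s / n for group, (s, n) in totals.items()}

means = mean_score_by_group(candidates)
gap = max(means.values()) - min(means.values())
print(means, f"gap = {gap:.2f}")

GAP_THRESHOLD = 0.10  # illustrative cutoff; real audits use domain-specific criteria
if gap > GAP_THRESHOLD:
    print("Warning: large score disparity between groups - investigate for bias.")
```

The point is not that such a check is sufficient, only that it is cheap, and that blind trust in a black-box model skips even this step.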

When it comes to business frustration and the fear of putting too much trust in AI, the emerging field of CI is a case in point.

The world of human interaction has been a hive of AI innovation for years.
