
The Data Daily

Is your company failing at chatbot AI trust and transparency?

How to cut 3 key risks that can offset chatbot benefits
Robot Head Doodle 7, JakeOlimb, Courtesy Getty Images
Chatbot adoption is projected to more than double over the next two years.1 If you are a marketing or operations leader, chances are good your organization already has one or more chatbots deployed, with more in the works. But those chatbots may expose you to unforeseen risks, especially around trust and transparency.
Why does this matter now?
AI-infused technologies like chatbots are increasingly in the public eye and under scrutiny.2 New regulations are on the horizon in the EU, and current GDPR and CCPA regulations already affect how chatbot-related data must be managed.
Chatbots can improve your business: their automation saves labor costs, their augmentation improves human performance, and the data from chatbot sessions drives innovation for competitive advantage. As noted in Forbes, chatbots are among the leading AI use cases in enterprises today.3 Forrester Research’s Mike Gualtieri refers to AI as “the fastest-growing workload on the planet,” and notes that standard IT infrastructure won’t keep pace.
With this explosive growth come broader attention and new compliance obligations. Here are three areas to discuss with your chatbot project teams, as well as with your CRO and risk teams.
“Too good” simulated human — transparency risks, reputational risks. Researchers are studying whether more sophisticated chatbots — those that are designed to emulate human speech patterns on calls, or human conversational patterns in text — are perceived more favorably by those who engage with the bots.
Findings indicate that the more effective bots are at emulating human communication style, the less comfortable the experience is for the human — but this can be mitigated by a clear notification that the artificial assistant is a bot.4 Without such notification, your organization runs the risk that people engaging with a chatbot feel deceived, damaging your reputation.
This notification is not only a research-based best practice; in some regulatory environments it is a compliance requirement. Penalties can be significant; for example, in just one scenario, “[if] your chatbot talks to 100 consumers without proper disclosure, you’re looking at a fine of $25,000.”5
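In practice, teams often implement this disclosure as the very first message of every session, before the bot says anything else. The following is a minimal sketch of that pattern in Python; the ChatSession class, message format, and disclosure wording are hypothetical placeholders, not any specific platform's API.

# Minimal sketch of bot-identity disclosure at session start.
# All names (ChatSession, send, DISCLOSURE) are hypothetical; adapt to your platform.

DISCLOSURE = (
    "You're chatting with an automated virtual assistant. "
    "Type 'agent' at any time to reach a human."
)

class ChatSession:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.disclosed = False
        self.transcript: list[dict] = []

    def send(self, text: str) -> None:
        # Guarantee the disclosure is the first bot message the user sees.
        if not self.disclosed:
            self.transcript.append({"role": "bot", "text": DISCLOSURE})
            self.disclosed = True
        self.transcript.append({"role": "bot", "text": text})

session = ChatSession(user_id="u-123")
session.send("Hi! How can I help with your order today?")

Centralizing the disclosure in the send path, rather than in each dialog flow, makes it harder for a new conversation branch to skip it accidentally.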
Chat data capture and storage — compliance and governance risks. The data captured by chats can be extensive, including Personally Identifiable Information (PII), data and metadata that connect chats to customer accounts and history, and geolocation data. Storing that data, using it for predictive or prescriptive analytics, and using chat logs to train machine learning models all require best practices in governance and data management. From Article 22 of the GDPR:
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Note that there is a human-centered requirement implied by “a decision based solely on automated processing”.
The EU’s Article 29 Data Protection Working Party offers this guidance on the level of human involvement required to avoid the prohibition. “To qualify as human involvement, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. As part of the analysis, they should consider all the relevant data.”
Keeping humans at the center of AI chatbot processes is not just a best practice — it supports regulatory needs.
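As an illustration only, one way project teams operationalize this guidance is to route any chatbot decision with legal or similarly significant effects into a human reviewer's queue rather than letting the bot finalize it. The decision types, queue, and function names below are hypothetical assumptions, a minimal sketch rather than a prescribed compliance mechanism.

# Hypothetical sketch: escalate "significant" automated decisions to a human reviewer
# so the outcome is not based solely on automated processing (GDPR Art. 22).

from dataclasses import dataclass, field

SIGNIFICANT_DECISIONS = {"credit_limit_change", "claim_denial", "account_closure"}

@dataclass
class ReviewQueue:
    pending: list[dict] = field(default_factory=list)

    def submit(self, decision: dict) -> None:
        # A reviewer with authority to change the outcome picks this up later.
        self.pending.append(decision)

def handle_bot_decision(decision_type: str, payload: dict, queue: ReviewQueue) -> str:
    if decision_type in SIGNIFICANT_DECISIONS:
        queue.submit({"type": decision_type, **payload})
        return "Thanks - a member of our team will review this and follow up."
    # Low-impact answers (FAQs, order status) can stay fully automated.
    return f"Automated answer for {decision_type}."

queue = ReviewQueue()
print(handle_bot_decision("claim_denial", {"claim_id": "C-42"}, queue))

The key design point, per the Working Party guidance, is that the reviewer must be able to change the decision; a queue that a human merely acknowledges would be a token gesture.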
There are penalties for failing to adequately protect chat data under the GDPR, the CCPA, and soon the EU Artificial Intelligence Act, and those penalties can be frighteningly significant. For example, failure to protect chatbot data from a hack resulted in a GDPR-related fine of £1.25 million. And chatbots are among the systems classified as “limited risk” under Article 52 of the proposed EU Artificial Intelligence Act.
For a more detailed discussion of the EU Artificial Intelligence Act and risks, see Eve Gaumond’s article in Lawfare.
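On the data-protection side, one basic safeguard is to mask obvious PII before chat transcripts are stored or reused for analytics and model training. The sketch below is a minimal, hypothetical illustration; the regex patterns and message format are assumptions and would not substitute for a dedicated PII-detection service.

import re

# Hypothetical, minimal PII-masking pass over a chat transcript before storage.
# These patterns are illustrative only and will not catch every case.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

def store_transcript(messages: list[dict]) -> list[dict]:
    # Mask user-supplied text; bot templates rarely contain PII but are masked too.
    return [{**m, "text": mask_pii(m["text"])} for m in messages]

print(store_transcript([{"role": "user", "text": "My email is jane@example.com"}]))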
Algorithmic and machine learning risk — modeling and bias risks, reputational risks. AI-infused chatbots pose an interesting challenge when trying to reduce risks from bias. To add value, chatbots must be trained — but on what training data? As discussed by Cari Jacobs in “Data Collection Strategies for Building an AI Chatbot,” conventional training sets “often lack the most crucial data needed for training a chatbot: examples of how users express their goals and needs (intents)…”
Cari strongly recommends data be collected from the same sorts of end-users who will be using the chatbot.
She writes, “When data is collected from SME’s, developers, and other people close to the project, they often introduce bias in their terminology. They submit questions with an idea in mind about the expected response. They often lack the background and real-world circumstances that drive users to engage with a chatbot solution.”
However, even the act of collecting end-user data to train chatbots can itself introduce bias.
Why?
As Cassie Kozyrkov, Head of Decision Intelligence, Google, said, “Bias doesn’t come from AI algorithms, it comes from people.” Bias adds reputational risk — from issues with regulatory compliance noted above, as well as from poor user experiences. (For a deeper dive into AI bias, see Cassie Kozyrkov’s superb Towards Data Science article.)
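One practical way to surface this kind of bias is to evaluate the chatbot's intent classifier per user segment rather than only in aggregate. In the sketch below, classify is a stand-in for your own model and the evaluation rows are illustrative; a large accuracy gap between segments suggests the training data under-represents how some users actually phrase their intents.

from collections import defaultdict

# Hypothetical sketch: per-segment accuracy check for an intent classifier.
# `classify` stands in for your NLU/intent service; the rows are illustrative.

def classify(utterance: str) -> str:
    # Placeholder model: real code would call your intent classifier here.
    return "check_balance" if "balance" in utterance.lower() else "other"

eval_rows = [
    {"utterance": "What's my balance?", "intent": "check_balance", "segment": "web"},
    {"utterance": "how much money i got", "intent": "check_balance", "segment": "mobile"},
    {"utterance": "cancel my card", "intent": "cancel_card", "segment": "mobile"},
]

hits, totals = defaultdict(int), defaultdict(int)
for row in eval_rows:
    totals[row["segment"]] += 1
    if classify(row["utterance"]) == row["intent"]:
        hits[row["segment"]] += 1

for segment in totals:
    # A large gap between segments signals under-representation in training data.
    print(f"{segment}: accuracy {hits[segment] / totals[segment]:.0%}")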
What can you do now to reduce these risks?
You can and should be connected to the teams that have developed and are maintaining your organization’s chatbots. At the next chatbot-related steering committee or Center of Excellence meeting, if the following topics have not already been covered, put them on the agenda:
How chatbot projects do (or will) meet regulatory compliance needs. Your company’s Chief Risk Officer, or a similar executive-level role, should have a line of sight into compliance needs here.
Where training data is sourced, and how model drift is addressed. You don’t need to be a data scientist to ask for clear, non-technical information on this topic — as a chatbot stakeholder, you simply need to understand how potential bias is taken into account and how the model’s behavior is monitored over time (a minimal drift-check sketch follows this list).
Plans for future enhancements and sophistication, and associated notification requirements. As your organization’s chatbots become more sophisticated, they could be confused with human agents. Notifications for users that the chatbot is not a human are an important facet of compliance.
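For the model drift point above, a drift check can be as simple as comparing recent average intent-classifier confidence against a baseline window and flagging sustained drops for review. The thresholds and numbers in this sketch are illustrative assumptions, not recommended values.

from statistics import mean

# Hypothetical drift check: compare recent average intent-classifier confidence
# against a baseline window and flag a sustained drop for human review.

BASELINE_WEEKS = [0.91, 0.90, 0.92, 0.89]   # weekly averages when the model shipped
RECENT_WEEKS = [0.84, 0.82, 0.80]           # most recent weekly averages
DRIFT_THRESHOLD = 0.05                      # illustrative tolerance, not a standard

def drift_detected(baseline: list[float], recent: list[float], threshold: float) -> bool:
    return mean(baseline) - mean(recent) > threshold

if drift_detected(BASELINE_WEEKS, RECENT_WEEKS, DRIFT_THRESHOLD):
    print("Confidence drift detected - review training data and retraining schedule.")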
To stay up to date on topical issues regarding AI chatbots, follow the authors linked above and explore the resources below.
Additional resources.
