The financial services sector has been an eager adopter of robotic process automation (RPA): by one estimate, it accounts for 29% of the RPA market, more than any other sector. So it stands to reason that the industry is also among the first to embrace intelligent automation, the combination of RPA with AI.
“Financial services [institutions] have always been among the top adopters of intelligent automation,” says Sarah Burnett, industry analyst and evangelist at process mining vendor KYP.ai.
Financial institutions have adopted a range of use cases for intelligent automation, from simple integrations of cognitive services into RPA systems to, in a few cases, AI-powered decision making. As such, they have also encountered the security risks and governance challenges that arise from intelligent automation sooner than most.
Intelligent automation is a broad term, representing a range of possibilities for integrating AI and machine learning into process automation. This stretches as far as AI-powered decision making, but so far most use cases exploit AI’s potential to process unstructured data, such as text and images, to automate steps in a process that would otherwise require human perception.
One use case is making customer service chatbots more responsive and more useful. Recent advances in natural language processing (NLP) have improved chatbots’ ability to understand customer requests and form naturalistic responses, explains John Murphy, head of intelligent automation at accounting and consultancy provider Grant Thornton. “It is an area where machine learning and AI have made huge leaps and bounds in the last few years,” he says.
Integrating an NLP-powered chatbot with an RPA system to retrieve information and handle the customer’s request is an increasingly prevalent use case for intelligent automation, LSE professor Leslie Willcocks told MIS Quarterly Executive in an interview last year. “A bank, for example, will have an interactive chatbot for dialogue with customers, but it will draw on RPA to get the information it needs to be able to have a more accurate conversation with the customers,” he said.
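In practice, the pattern Willcocks describes amounts to an NLP layer that classifies the customer’s intent, with an RPA job fetching the account data the bot needs to answer accurately. The sketch below illustrates the idea; the intent classifier, the run_rpa_job orchestrator call and all names are hypothetical stand-ins, not any vendor’s actual API.

```python
# Minimal sketch of the chatbot-plus-RPA pattern: an NLP layer
# classifies the customer's intent, then a (hypothetical) RPA job
# fetches the account data the bot needs to reply accurately.
from dataclasses import dataclass

@dataclass
class Intent:
    name: str         # e.g. "check_balance"
    confidence: float

def classify_intent(utterance: str) -> Intent:
    """Stand-in for a real NLP service; keyword matching only."""
    if "balance" in utterance.lower():
        return Intent("check_balance", 0.92)
    return Intent("unknown", 0.0)

def run_rpa_job(job_name: str, customer_id: str) -> dict:
    """Stand-in for triggering a bot via an RPA orchestrator's API."""
    return {"balance": "1,204.50", "currency": "GBP"}

def handle_message(customer_id: str, utterance: str) -> str:
    intent = classify_intent(utterance)
    if intent.name == "check_balance" and intent.confidence > 0.8:
        data = run_rpa_job("fetch_account_summary", customer_id)
        return f"Your balance is {data['currency']} {data['balance']}."
    return "Let me connect you with a colleague who can help."

print(handle_message("cust-001", "What's my balance?"))
```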
Another use case for intelligent automation is remotely verifying customers’ identities. This combines machine vision to scan and verify identity documents, such as a passport or driving licence, with RPA to cross-check those documents against an identity database. This application has helped banks to cut their customer onboarding times significantly, says Burnett. “They’ve gone from taking weeks to literally a matter of ten or 15 minutes at most, and that is improving the bottom line.”
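A minimal sketch of that onboarding flow follows; the machine-vision extraction and the database cross-check are hypothetical stand-ins (no real OCR service or RPA platform is invoked), and any mismatch is referred to a human rather than auto-approved.

```python
# Hypothetical sketch of automated identity verification: machine
# vision extracts fields from an identity document, then an automated
# step cross-checks them against a reference identity record.
def extract_document_fields(image_path: str) -> dict:
    """Stand-in for an OCR / machine-vision service."""
    return {"name": "A. Customer", "dob": "1985-03-14", "doc_no": "X123"}

def lookup_identity_record(doc_no: str) -> dict | None:
    """Stand-in for an RPA step querying an identity database."""
    records = {"X123": {"name": "A. Customer", "dob": "1985-03-14"}}
    return records.get(doc_no)

def verify_customer(image_path: str) -> bool:
    fields = extract_document_fields(image_path)
    record = lookup_identity_record(fields["doc_no"])
    if record is None:
        return False
    # Any mismatch should go to human review rather than auto-approval.
    return record["name"] == fields["name"] and record["dob"] == fields["dob"]

print("verified" if verify_customer("passport.jpg") else "refer to a human")
```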
Other intelligent automation applications aid data processing. WTW, the insurance and advisory firm, had previously employed people to scrub data collected by its survey division of any personally identifiable information. But it was laborious work to which humans are ill-suited, says Dan Stoeckel, digital workforce solutions architect at the company. Instead, WTW used a combination of RPA and a cloud-based NLP service to scan files and remove personal data.
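WTW has not named the service involved, but the redaction step itself can be sketched in miniature, with regular expressions standing in for the NLP entity recognition:

```python
import re

# Minimal sketch of a PII-scrubbing pass. In production an NLP entity
# recogniser would find names and addresses too; regexes stand in here.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Contact Jane at jane@example.com or +44 20 7946 0000."))
```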
Some institutions have had success in using machine intelligence to understand and optimise their business processes, says Grant Thornton’s Murphy. Process mining and intelligence can help organisations identify opportunities for automation and, in some cases, run A/B tests to see which process design works most effectively, he says.
Arguably the most sophisticated applications of intelligent automation seek to replace human decision making with AI. IBM’s Operational Decision Manager allows organisations to integrate cognitive services, whether IBM’s own Watson offerings or their own self-built machine learning models, says Doug Coombs, IBM’s business development leader for business automation.
Financial services customers include US bank PNC Financial, which uses the system to automate approvals for certain loans. The bank combines prescriptive business rules with predictive data modelling to assess applicants’ eligibility for credit, Coombs says.
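The rules-plus-model pattern can be sketched as follows. The thresholds, scoring function and referral logic are invented for illustration and bear no relation to PNC’s actual decision logic:

```python
# Illustrative sketch of combining prescriptive business rules with a
# predictive risk score. All thresholds and logic are invented.
def business_rules_pass(applicant: dict) -> bool:
    """Prescriptive rules: hard eligibility constraints."""
    return applicant["age"] >= 18 and not applicant["bankrupt"]

def default_risk_score(applicant: dict) -> float:
    """Stand-in for a trained predictive model (0 = lowest risk)."""
    return 0.4 * (applicant["debt"] / max(applicant["income"], 1))

def decide(applicant: dict) -> str:
    if not business_rules_pass(applicant):
        return "decline"
    if default_risk_score(applicant) < 0.15:
        return "auto-approve"
    return "refer to underwriter"  # humans stay in the loop

print(decide({"age": 34, "bankrupt": False, "income": 48000, "debt": 9000}))
```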
Not all clients are integrating AI into their automated decisions, however. “The day when AI is infused into everything is a little way off,” he says.
A report entitled ‘Good Bots and Bad Actors’ by IT consultancy Accenture identifies a number of security risks emerging from intelligent automation. Many of these relate to AI security threats, such as tampering with machine learning models or their training data to influence outcomes.
Examples include “[i]njection of adversarial training instances to introduce a ‘new norm’ to the model, for example to forge credit card eligibility or disable fraud alert mechanisms”, and “gaining possession of the model’s explainability logs to understand its decision logic and ‘trick the system’ by providing input data guaranteeing favourable outcome”.
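The first of those attacks, training-data poisoning, is straightforward to demonstrate in miniature. The synthetic example below injects mislabelled high-risk instances into a training set, shifting the model so that it waves through an applicant the clean model would have flagged:

```python
# Toy demonstration of training-data poisoning with synthetic data:
# mislabelled instances establish a "new norm" in the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Clean data: one "risk" feature; label 1 = ineligible for credit.
X_clean = rng.normal(loc=[[0.2]] * 200 + [[0.8]] * 200, scale=0.1)
y_clean = np.array([0] * 200 + [1] * 200)
clean_model = LogisticRegression().fit(X_clean, y_clean)

# Poisoning: inject high-risk points mislabelled as eligible.
X_poison = rng.normal(loc=0.8, scale=0.05, size=(400, 1))
X_all = np.vstack([X_clean, X_poison])
y_all = np.concatenate([y_clean, np.zeros(400, dtype=int)])
poisoned_model = LogisticRegression().fit(X_all, y_all)

probe = [[0.75]]  # a clearly high-risk applicant
print("clean model:   ", clean_model.predict(probe))     # -> [1] (flagged)
print("poisoned model:", poisoned_model.predict(probe))  # -> [0] (waved through)
```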
However, experts interviewed for this article said that the ‘intelligence’ incorporated into intelligent automation is usually provided by packaged software or cloud services from third parties. Maintaining the security of the underlying ML models is unlikely to be the direct responsibility of all but the most sophisticated IA users, for the time being at least.
Instead, the primary security risks of intelligent automation are similar to those of RPA. “If malicious code is introduced [to an automated process], it can be amplified multiple times very, very easily,” explains Manu Sharma, head of cybersecurity resilience at Grant Thornton. In particular, access privileges, which are often allocated to RPA ‘bots’ to allow them to conduct certain tasks, must be carefully controlled.
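One common mitigation is to issue each bot a short-lived, narrowly scoped credential at run time rather than standing privileges. The sketch below illustrates the principle only; the scope names and token issuer are hypothetical, not any particular secrets manager’s API:

```python
import time

# Hypothetical least-privilege credential broker for RPA bots: each
# bot may only request the scopes registered for it, and tokens expire.
ALLOWED_SCOPES = {
    "invoice_bot": {"erp.invoices.read", "erp.invoices.write"},
    "report_bot": {"warehouse.read"},
}

def issue_token(bot_id: str, scope: str, ttl_seconds: int = 300) -> dict:
    """Stand-in for a secrets manager issuing scoped, expiring tokens."""
    if scope not in ALLOWED_SCOPES.get(bot_id, set()):
        raise PermissionError(f"{bot_id} may not request scope {scope!r}")
    return {"token": "opaque-token", "scope": scope,
            "expires": time.time() + ttl_seconds}

print(issue_token("report_bot", "warehouse.read"))
# issue_token("report_bot", "erp.invoices.write")  # raises PermissionError
```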
Nevertheless, the risks identified by Accenture underscore the need to hold suppliers to account for cybersecurity. “A SolarWinds-type hack on [RPA suppliers] UiPath or Automation Anywhere would be devastating,” says WTW’s Stoeckel. Happily, he says, RPA vendors are “starting to put significant investment into the security layer”.
The governance challenges that arise from many intelligent automation use cases are similar to those of RPA. At WTW, Stoeckel has established a centre of excellence that runs automations developed by business users through a series of governance checks. These include security and other technical controls, privacy impact assessments, and quality measures.
Creating centres of excellence is a common approach to governing automation, says Burnett, although they must be well integrated into the business if they are to succeed. “The model that seems to work best is a mix of central and federated,” she says. “You might have centres of excellence in different divisions or geographies, depending on what work is being delivered, and those are under the governance of a centralised body.
“You don’t want centres of excellence to become bottlenecks because then they run out of resources,” Burnett adds.
If and when intelligent automation incorporates AI-powered decision making, it can present new governance challenges, such as the risk of AI bias.
This requires simulating the outcome of automated decisions and testing them before and after deployment, Coombs says. “It’s vital that you have governance in place as you move those decisions through the design, test, simulate and live phase.”
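Such a simulation can be as simple as replaying a batch of historical cases through the candidate decision logic and comparing outcomes across customer groups before go-live. A sketch, with entirely synthetic data and an invented decision rule:

```python
# Pre-deployment simulation sketch: replay historical cases through the
# candidate decision logic and compare approval rates across groups.
from collections import defaultdict

def simulate(decide, cases) -> dict:
    approved, total = defaultdict(int), defaultdict(int)
    for case in cases:
        total[case["group"]] += 1
        if decide(case) == "approve":
            approved[case["group"]] += 1
    return {g: approved[g] / total[g] for g in total}

cases = (
    [{"group": "A", "score": 0.8}] * 80 + [{"group": "A", "score": 0.4}] * 20 +
    [{"group": "B", "score": 0.8}] * 50 + [{"group": "B", "score": 0.4}] * 50
)
rates = simulate(lambda c: "approve" if c["score"] > 0.5 else "refer", cases)
print(rates)  # {'A': 0.8, 'B': 0.5} -- a gap that warrants investigation
```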
Financial institutions that develop their own models to automate decisions, such as loan applications, will have to take particular care.
The EDM Council, a trade association that advises financial organisations on data management, has created a Cloud Data Management Capabilities (CDMC) framework that includes guidance on ‘model operationalisation’. Key capabilities include managing the release procedure of machine learning models, applying version control to both the models themselves and their training data, and regular review.
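A minimal release record of the kind that guidance implies might tie each deployed model to a hash of its exact training data and a scheduled review date, as sketched below. The field names and 90-day review cycle are illustrative assumptions, not the EDM Council’s specification:

```python
import hashlib
from datetime import date, timedelta

def dataset_fingerprint(path: str) -> str:
    """Hash the training data so the release pins an exact snapshot."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def release_record(model_version: str, data_path: str) -> dict:
    # Field names and the 90-day review cycle are illustrative only.
    return {
        "model_version": model_version,
        "training_data_sha256": dataset_fingerprint(data_path),
        "released": date.today().isoformat(),
        "next_review": (date.today() + timedelta(days=90)).isoformat(),
    }

# Record a hypothetical release against its training snapshot.
with open("train.csv", "w") as f:
    f.write("income,debt,default\n48000,9000,0\n")
print(release_record("credit-risk-2.3.1", "train.csv"))
```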
“There’s lots of stories about how when people’s behaviour changed in the pandemic, recommendation engines started doing strange things because they weren’t trained on that sort of behaviour,” explains Colin Gibson, a senior adviser at the council. “It’s not just a release once, review once [approach], you have to have ongoing review of your models.”
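That kind of ongoing review is often operationalised with drift metrics such as the population stability index (PSI), long used in credit risk to compare live inputs against the training distribution. A minimal sketch with synthetic data:

```python
# Population stability index (PSI) sketch: compares the live input
# distribution with the training distribution; by convention, values
# above roughly 0.25 are read as significant drift.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins.
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 10_000)  # behaviour the model was trained on
live = rng.normal(0.6, 1.2, 10_000)   # shifted behaviour in production
print(f"PSI = {psi(train, live):.3f}")  # well above the ~0.25 alarm level
```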
Given the risks that arise from fully automated decisions, and regulators’ eagerness to ensure AI does not harm consumers, Grant Thornton’s Murphy does not expect major financial institutions to add AI-powered decision making to processes that affect customers’ access to loans any time soon.
“I think the major banks will be still using humans in the loop and augmenting their experts with more intelligence,” he says. “Rather than getting rid of [humans], best practice would be to run the machines alongside the humans for a few years and be patient about the savings, to make sure it is thoroughly tested and you’ve got the regulator on board with the outcomes.”