Intelligent Automation in Financial Services: Use Cases and Risks

The financial services industry has enthusiastically embraced robotic process automation (RPA): by one estimate, it accounts for 29% of the RPA market, more than any other industry. So it stands to reason that the industry is an early adopter of intelligent automation, the combination of RPA with AI.

“Financial services [institutions] have always been early adopters of intelligent automation,” says Sarah Burnett, industry analyst and evangelist at process mining provider KYP.ai.

Financial institutions have embraced intelligent automation use cases ranging from simple cognitive service integrations into RPA systems to, in a few cases, AI-based decision-making. As such, they have also encountered the security risks and governance challenges that come with intelligent automation sooner than most.

Automated loan approvals are still on the cutting edge of intelligent automation. (Photo by courtneyk/iStock)

Intelligent Automation in Financial Services: Use Cases

Intelligent automation is a broad term, representing a range of possibilities for integrating AI and machine learning into process automation. This extends to AI-based decision making, but so far most use cases are harnessing the potential of AI to process unstructured data, such as text and images, to automate steps in a process that would otherwise require human perception.

One use case is to make customer service chatbots more responsive and helpful. Recent advances in natural language processing (NLP) have improved chatbots’ ability to understand customer requests and formulate naturalistic responses, says John Murphy, head of intelligent automation at accounting and consulting provider Grant Thornton. “This is an area where machine learning and AI have made huge strides in recent years,” he says.

Integrating an NLP-powered chatbot with an RPA system to retrieve information and process the customer’s request is an increasingly popular use case for intelligent automation, LSE professor Leslie Willcocks told MIS Quarterly Executive in an interview last year. “A bank, for example, will have an interactive chatbot to engage with customers, but they will rely on RPA to get the information they need to be able to have a more accurate conversation with customers,” he said.
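As a rough sketch of the hand-off Willcocks describes, the code below shows a chatbot intent handler delegating data retrieval to an RPA bot over a REST call. The orchestrator URL, job name and response fields are hypothetical, not any vendor’s actual API.

```python
import requests

# Hypothetical RPA orchestrator endpoint -- not a real product API.
RPA_ORCHESTRATOR_URL = "https://rpa.example-bank.internal/api/jobs"

def handle_balance_intent(customer_id: str, session_token: str) -> str:
    """Chatbot intent handler: delegate data retrieval to an RPA bot."""
    # Ask the orchestrator to run a bot that pulls the balance from the
    # core banking system, then use the result in the chatbot's reply.
    response = requests.post(
        RPA_ORCHESTRATOR_URL,
        json={"job": "fetch_account_balance", "customer_id": customer_id},
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=30,
    )
    response.raise_for_status()
    balance = response.json()["balance"]
    return f"Your current balance is {balance}."
```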


Another use case for intelligent automation is remote customer identity verification. This combines machine vision to scan identity documents, such as a passport or driver’s license, with RPA to cross-check those documents against an identity database. This application has helped banks drastically shorten their customer onboarding processes, says Burnett. “They’ve gone from weeks to 10 or 15 minutes at most, and that’s while improving the results.”
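A minimal sketch of that pattern might look like the following, with an open-source OCR library standing in for the machine-vision step. The passport-number pattern and the “identity database” lookup are placeholders, not how any particular bank implements the check.

```python
import re
import pytesseract                 # OCR wrapper; requires the Tesseract engine installed
from PIL import Image

# Placeholder "identity database" -- in practice this would be a KYC service.
KNOWN_DOCUMENTS = {"P1234567": "Jane Example"}

def verify_identity_document(image_path: str) -> bool:
    """Extract a document number from a scanned ID and cross-check it."""
    text = pytesseract.image_to_string(Image.open(image_path))
    # Hypothetical passport-number pattern; real systems parse the MRZ
    # and validate its check digits.
    match = re.search(r"\bP\d{7}\b", text)
    if not match:
        return False
    return match.group(0) in KNOWN_DOCUMENTS

if __name__ == "__main__":
    print(verify_identity_document("passport_scan.png"))
```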

Other intelligent automation applications facilitate data processing. WTW, the insurance and advisory provider, had previously employed people to clean the data collected by its investigation division of any personally identifiable information. But it was laborious work that humans are ill-suited to, says Dan Stoeckel, the company’s digital workforce solutions architect. Instead, WTW used a combination of RPA and a cloud-based NLP service to scan files and remove personal data.
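The article does not name the cloud NLP service WTW used, but a simplified version of the redaction step could look like this sketch, which uses an open-source named-entity model as a stand-in.

```python
import spacy

# Stand-in for the cloud NLP service: an open-source NER model
# (requires the en_core_web_sm model to be downloaded).
nlp = spacy.load("en_core_web_sm")

PII_LABELS = {"PERSON", "ORG", "GPE", "DATE"}

def redact_pii(text: str) -> str:
    """Replace entities that look like personal data with placeholders."""
    doc = nlp(text)
    redacted = text
    # Work backwards so character offsets stay valid as we substitute.
    for ent in reversed(doc.ents):
        if ent.label_ in PII_LABELS:
            redacted = redacted[:ent.start_char] + f"[{ent.label_}]" + redacted[ent.end_char:]
    return redacted

print(redact_pii("John Smith of Acme Ltd called from London on 3 May."))
```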

Some institutions have had success using artificial intelligence to understand and optimize their business processes, says Murphy of Grant Thornton. Process mining and intelligence can help organizations identify automation opportunities and, in some cases, run A/B tests to see which process design works most efficiently, he says.
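As a toy illustration of such an A/B test, the snippet below compares cycle times from two hypothetical process designs; the figures are made up rather than drawn from any real event log.

```python
from scipy import stats

# Illustrative cycle times (minutes) for two hypothetical process designs,
# e.g. gathered from process-mining event logs.
design_a = [12.1, 11.4, 13.0, 12.7, 11.9, 12.5, 13.2, 11.8]
design_b = [10.2, 10.9, 11.1, 10.4, 10.8, 11.3, 10.6, 10.7]

# Welch's t-test: does one design have a significantly lower mean cycle time?
t_stat, p_value = stats.ttest_ind(design_a, design_b, equal_var=False)
print(f"mean A={sum(design_a)/len(design_a):.1f}, "
      f"mean B={sum(design_b)/len(design_b):.1f}, p={p_value:.4f}")
```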

Arguably, the most sophisticated applications of intelligent automation seek to replace human decision-making with AI. IBM’s Operational Decision Manager allows organizations to integrate cognitive services, whether IBM’s own suite of Watson offerings or their own machine learning models, says Doug Coombs, development manager for commercial automation at IBM.

Financial services customers include US bank PNC Financial, which uses the system to automate certain loan approvals. The bank combines prescriptive business rules with predictive data modeling to assess applicants’ eligibility for credit, Coombs says.
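PNC’s implementation is not described in detail here, but the general pattern of layering prescriptive rules over a predictive model can be sketched as follows, with made-up rules and training data.

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: [income_k, existing_debt_k, years_at_employer] -> repaid (1/0).
X = [[60, 5, 3], [30, 20, 1], [90, 10, 8], [25, 15, 0], [75, 2, 5], [40, 30, 2]]
y = [1, 0, 1, 0, 1, 0]
model = LogisticRegression().fit(X, y)

def assess_application(income_k: float, debt_k: float, tenure_years: float) -> str:
    # Prescriptive rules: hard eligibility criteria checked first.
    if income_k < 20 or debt_k > income_k:
        return "declined by rule"
    # Predictive model: estimated repayment probability decides the rest.
    prob = model.predict_proba([[income_k, debt_k, tenure_years]])[0][1]
    return "approved" if prob > 0.7 else "referred to underwriter"

print(assess_application(55, 8, 4))
```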

However, not all customers integrate AI into their automated decisions. The day when “AI is infused into everything is a bit far off,” he says.

Intelligent Automation in Financial Services: Cybersecurity Risks

A report entitled “Good bots and bad actors” by IT consultancy Accenture identifies a number of security risks arising from intelligent automation. Many of these relate to AI security threats, such as tampering with machine learning models or their training data to influence outcomes.

Examples include “[i]njecting conflicting training instances to introduce a ‘new normal’ into the model, such as falsifying credit card eligibility or disabling fraud alert mechanisms”, and “taking ownership of the model’s explainability logs to understand its decision logic and ‘trick the system’ by providing input data guaranteeing a favorable outcome”.
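A toy demonstration of the first attack, assuming a deliberately simple one-feature eligibility model, shows how injected mislabelled records can shift a model’s output toward a “new normal”.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean toy data: a single "risk score" feature; applicants above 0.5 are ineligible (label 0).
scores = rng.uniform(0, 1, 200).reshape(-1, 1)
labels = (scores.ravel() < 0.5).astype(int)
clean = LogisticRegression().fit(scores, labels)

# Poisoning: inject high-risk records mislabelled as eligible.
poison_x = rng.uniform(0.7, 1.0, 60).reshape(-1, 1)
poison_y = np.ones(60, dtype=int)
poisoned = LogisticRegression().fit(
    np.vstack([scores, poison_x]), np.concatenate([labels, poison_y])
)

applicant = np.array([[0.8]])  # clearly high-risk under the clean rules
print("clean eligibility prob:   ", round(clean.predict_proba(applicant)[0][1], 3))
print("poisoned eligibility prob:", round(poisoned.predict_proba(applicant)[0][1], 3))
```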

However, experts interviewed for this article said that the “intelligence” built into intelligent automation is usually provided by third-party software or cloud services. Maintaining the security of the underlying ML models is unlikely to be the direct responsibility of all but the most sophisticated AI users, at least for now.

Instead, the main security risks of intelligent automation are similar to those of RPA. “If malicious code is introduced [to an automated process], it can be amplified many times over very, very easily,” says Manu Sharma, cybersecurity resilience manager at Grant Thornton. In particular, access privileges, which are often granted to RPA “bots” to enable them to perform certain tasks, must be carefully controlled.

Nonetheless, the risks identified by Accenture underscore the need to hold vendors accountable for cybersecurity. “A SolarWinds-like hack on [RPA suppliers] UIPath or Automation Anywhere would be devastating,” says WTW’s Stoeckel. Fortunately, he says, RPA vendors are “starting to invest heavily in the security layer.”

Intelligent Automation in Financial Services: Governance Challenges

The governance challenges that arise from many intelligent automation use cases are similar to those of RPA. At WTW, Stoeckel has established a Center of Excellence that manages automations developed by business users through a series of governance controls. These include security and other technical controls, privacy impact assessments, and quality metrics.

Creating centers of excellence is a common approach to governing automation, Burnett says, although they need to be well integrated into the business to succeed. “The model that seems to work best is a mix of central and federated,” she says. “You can have centers of excellence in different divisions or geographies, depending on the work being done, and these are under the governance of a centralized body.”

“You don’t want the centers of excellence to become bottlenecks because then they run out of resources,” adds Burnett.

If and when intelligent automation incorporates AI-based decision making, it can present new governance challenges, such as the risk of AI bias.

This requires simulating the outcome of automated decisions and testing them before and after deployment, Coombs says. “It’s critical that you have governance in place as you move these decisions through the design, test, simulation, and go-live phase.”
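A minimal sketch of that kind of pre-go-live check, assuming a simulated decision log and an illustrative disparity threshold, might look like this.

```python
from collections import defaultdict

# Simulated decision log: (group, approved) pairs produced by replaying a
# candidate decision model against a test population. The data is made up.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rates:", rates)

# Simple governance gate: flag the model if approval rates diverge too far
# between groups (the 0.2 threshold is illustrative, not a regulatory rule).
if max(rates.values()) - min(rates.values()) > 0.2:
    print("WARNING: disparity exceeds threshold -- review before go-live")
```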

Financial institutions that develop their own models to automate decisions, such as loan applications, will need to be especially careful.

The EDM Council, a trade association that advises financial organizations on data management, has created a cloud data management capabilities framework which includes advice on “operationalizing the model”. Key features include managing the release process for machine learning models, applying version control to the models themselves and their training data, and regular review.
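As one simple way to support the versioning and review steps the framework describes (the function, file names and review cadence here are illustrative, not part of the framework), a model release could be recorded alongside a fingerprint of the exact training data used.

```python
import hashlib
import json
from datetime import datetime, timezone

def register_model_release(model_name: str, version: str, train_file: str) -> dict:
    """Record a model release with a hash of the training data it was built from."""
    with open(train_file, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "model": model_name,
        "version": version,
        "training_data_sha256": data_hash,
        "released_at": datetime.now(timezone.utc).isoformat(),
        "next_review_due": "release + 90 days",  # placeholder review cadence
    }
    # Append to a simple audit log; a real setup would use a model registry.
    with open("model_releases.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record
```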

“There are a lot of stories about how, when people’s behavior changed during the pandemic, the recommendation engines started doing weird things because they weren’t trained on this kind of behavior,” says Colin Gibson, senior adviser to the council. “It’s not just a one-time, one-off review [approach]; you need to have a continuous review of your models.”

Given the risks that come with fully automated decisions, and regulators’ eagerness to make sure AI doesn’t harm consumers, Grant Thornton’s Murphy doesn’t expect big financial institutions to add AI-based decision-making to processes that affect clients’ access to loans anytime soon.

“I think big banks will always use humans in the loop and augment their experts with more intelligence,” he says. “Rather than get rid of [humans], best practice would be to run the machines alongside humans for a few years and be patient with the savings, to ensure it is thoroughly tested and you have the regulator on board with the results.”
