The near future for ML & AI in Financial Services

Financial services firms will lean on Artificial Intelligence (AI) and Machine Learning (ML) more heavily in the coming decade, but where are the most likely places in which financial services will opt to leverage the potential of AI & ML?


Late-2022 research from the Bank of England and the FCA reported that 72% of UK firms in the financial services (FS) sector are already developing and deploying machine learning algorithms in their organisations.

Machine Learning (ML), as many will know, is the foundational component of what is broadly known as Artificial Intelligence (AI): it applies computing power to learning from vast quantities of data in order to improve the performance of warning, decision-making and optimisation systems.

As the hype around ChatGPT, Bard and adjacent generative AI solutions has grown over the past six months, hypothetical use cases have multiplied in healthcare and energy, but FS use cases are still somewhat limited, especially as many institutions such as Bank of America, Citi, JP Morgan and Deutsche Bank have banned ChatGPT and its counterparts outright.

The Bank of England and FCA research also predicts that the median number of ML applications in the FS sector will grow 3.5-fold over the coming years. So we ask the questions:

Where will we see AI and ML being used in financial services? And how will firms integrate the benefits of tools that the sector seems incredibly wary of?

Let’s find out.

Where next for AI in financial services?

Regardless of research results and market hyperbole, the reality of AI and ML use in financial services organisations is sparse: these technologies are being used in isolated, specialised cases – predominantly to streamline operational processes and support decision-making.

In our experience, the business area in which machine learning is most commonly used is financial crime – i.e. fraud detection, anti-money laundering and sanctions screening – because algorithms and statistical models trained on vast real-time datasets are far more versatile and accurate than the slower operating models employed before the advent of machine learning.
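As a deliberately toy illustration of the statistical idea (not any firm's actual fraud model), even a simple z-score check can flag transactions that deviate sharply from a customer's historical spending pattern:

```python
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, threshold=3.0):
    """Flag transactions whose amount deviates more than `threshold`
    standard deviations from the customer's historical mean spend."""
    mu = mean(history)
    sigma = stdev(history)
    return [amt for amt in new_transactions
            if abs(amt - mu) / sigma > threshold]

# Illustrative data: typical spend clusters around £40,
# so a £5,000 transfer stands out as anomalous.
history = [35.0, 42.5, 38.0, 45.0, 40.0, 37.5, 41.0, 39.0]
flagged = flag_anomalies(history, [44.0, 5000.0])  # → [5000.0]
```

Production systems use far richer features (merchant, geography, velocity) and learned models rather than a single statistic, but the principle – score deviations from learned behaviour in real time – is the same.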

The big challenge for AI in the highly regulated environment of FS is that, whilst it is reasonably simple to take a scientific approach and explore research prototypes, it can be very difficult to focus the potential of ML & AI on operational opportunities that can make their way from prototype through into production.

Some of the potential areas that may drive the pipeline of research prototypes ask the question ‘can ML & AI…’:

  • Refine operations through intelligent automation
  • Reimagine the contact centre experience
  • Optimise lending decisions
  • Leverage intelligence for investment decisions
  • Improve employee productivity
  • Identify & serve vulnerable customers
  • Optimise employee experience – less repetition, more creativity
  • Be more responsive to customers through personalisation

All of these and more will be getting the attention of data scientists, if they aren’t already… but there is a problem.

The data.

Do financial services organisations trust their data?

The answer for many is ‘Absolutely not’.

Siloed customer data, 3rd party product data, commercial data, marketing data, data stored in archaic relational databases, data backed up on prem, data across multiple cloud providers…

As any data and analytics professional in a legacy financial services organisation will attest, data completeness, accuracy and quality are ever-present issues – and, in this case, the ones with the biggest potential ramifications.

Data quality is the single largest issue affecting the outputs of ML and AI systems: bad data in equals bad decisions out. And if an organisation is looking for intelligent systems to streamline complex decisions – who to lend to, where to invest capital and more – then you know exactly where you need to focus your efforts.

If financial services organisations are going to leverage the benefits of the nascent AI tooling, they need to place as much importance on foundational data engineering and data enrichment as they do on funding inquisitive minds to create research prototypes.

Structuring the data in a way that aligns with AI and ML is 75% of the effort. Once that structure exists, applying a different model to the same data to solve a different business challenge lets you roll out further use cases more quickly.
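As a minimal sketch of that foundational work (field names here are illustrative, not any real schema), basic completeness and validity checks can be automated so bad records are caught before they ever reach a model:

```python
def validate_record(record, required_fields):
    """Return a list of data-quality issues found in a single
    customer record: missing required fields and invalid values."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing: {field}")
    income = record.get("annual_income")
    if income is not None and income < 0:
        issues.append("invalid: annual_income must be non-negative")
    return issues

# Illustrative records: one clean, one with gaps and a bad value.
required = ["customer_id", "annual_income", "postcode"]
clean = {"customer_id": "C001", "annual_income": 32000, "postcode": "EH1 1AA"}
dirty = {"customer_id": "C002", "annual_income": -500, "postcode": ""}
```

Running `validate_record(dirty, required)` surfaces both the empty postcode and the negative income – exactly the kind of silent defects that would otherwise flow straight into a lending model.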

Using ML & AI safely, ethically and effectively

AI, quite simply, is the most transformative technology of our lifetime.

But as we’ve discussed, it needs high quality data, high quality hypotheses, a testing ground for innovation and a safe route to production… it also needs checks and balances at every juncture to provide the organisation, the sector and the regulator with the kind of assurances that black boxes with ‘emergent properties’ usually don’t provide.

Businesses need to know how to use AI effectively… whilst guaranteeing safety, security and ethics. There are several factors that will be essential to organisations getting AI and ML right.

  • Human feedback loops

A recent survey suggested that 75% of the developers surveyed were using GitHub Copilot. What does that tell us?

Humans plus AI. Not humans replaced by AI.

Decisions should be guided by data, not dictated by it. There is more work to do here, as only 30% of business leaders are currently confident that AI and ML are being applied ethically inside their business.

  • Transparent AI development

That means a laser focus on AI and ML ethics and customer trust, and an eye on emerging regulation globally. It also means transparency about how AI and ML models are designed and function.

Follow Explainable AI and ML principles to help customers understand how models are intended to function, what data is used in their build and how it is used, what outcomes to expect, how the models were trained and tested, and how they were tested for bias.
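As a deliberately simple sketch of the explainability principle (the feature names and weights are illustrative, not a real credit model), an interpretable model such as a linear scorer lets you report exactly how much each input contributed to a decision:

```python
def explain_score(features, weights, bias=0.0):
    """Score an application with a linear model and return each
    feature's contribution, so the decision is fully auditable."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical weights for a toy lending decision.
weights = {"income_k": 0.05, "missed_payments": -0.8, "years_at_address": 0.1}
score, why = explain_score(
    {"income_k": 40, "missed_payments": 1, "years_at_address": 5},
    weights,
)
# `why` shows per-feature contributions, e.g. missed_payments: -0.8
```

Real systems often pair more complex models with post-hoc attribution techniques, but the goal is the same: every decision comes with a breakdown a customer, auditor or regulator can inspect.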

What should you be doing in your organisation?

It is our belief that burying your head in the sand and giving new technologies zero ‘eye contact’ is a competitive risk.

Regardless of your maturity level when it comes to ML & AI, you now find yourself in the age of the ‘AI gold rush’… that means that you are fighting the prevailing winds if you do not follow suit and leverage ML & AI to your own business advantage.

Starting small with research hypotheses and research prototypes in a sandbox environment might well be the best place to get started. This will ensure that you can orient your problem-solving around business value and establish the kind of transparent practices that will give infosec, risk and assurance teams the comfort that you are problem-solving with security, fidelity and ethics at the front of mind.

Here at Waracle, we help our clients leverage their intelligent digital products to create high-quality, clean datasets that will power the next generation of machine learning and AI business systems. If you want to find out more, please get in touch.

Share this article


Blair Walker
Blair Walker
Head of Marketing