The Data Practices That Help Identify Vulnerable Customers

Our series of articles on identifying & serving vulnerable customers looks at how British businesses can meet the regulatory standards laid out for them and serve this growing cohort of UK consumers in an adequate and timely manner.

Diverse People, Diverse Perspectives

Waracle is an inclusive, inspiring & developmental home to the most talented and diverse people in our industry. The perspectives offered in our insights represent the views and opinions of the individual authors and not necessarily an official position of Waracle.

In the second of our series of articles about how digital infrastructure and products can help identify and serve vulnerable customers, our data specialists reflect on how data engineering, data analysis and data science can play a vital role in uncovering the objective cues that will help people in need get the support they require.

Over the last ten years, it has become increasingly apparent that the databases that accrue 1st party customer data aren’t just useful for next-best-action trigger communications. Merged and unified into a single source of truth, they help organisations understand far more about the people behind the account.

Senior stakeholders have come to recognise the intrinsic value of these customer databases in identifying vulnerable customers, which in turn can ensure that people who live on the edge of economic hardship are supported and not overlooked.

Let’s find out more about how modern data engineering, data science and data analysis contribute to the processes and procedures that help people when they need it most.

Seeing the wood for the trees

At first glance, identifying vulnerable customers could be considered by many to be a reasonably straightforward issue. After all, financial services companies have long been in the business of analysing customer data to determine creditworthiness and assess risk exposure. Similarly, energy companies have been collecting data on energy consumption for many years, with the aim of identifying patterns and trends that can be used to improve efficiency, adjust storage and reduce waste.

However, the task of identifying vulnerable customers goes far beyond basic data collection techniques.

In order to truly understand the complex web of factors that can lead to economic hardship, companies must employ a range of sophisticated data engineering, analysis and modelling techniques… and work to refine them over time as new data points point to complex human experiences such as bereavement, stress, physical injury, mental health issues and more.

One key challenge in identifying vulnerable customers is precisely the fact that economic hardship can take many different forms. For example, one customer might be struggling to pay their energy bills due to low income, whilst another might be struggling to pay off a loan due to a medical condition. Many others will be dealing with a combination of factors, which can make it hard to isolate the core data points that should flag a particular customer or family.

Given this complexity, it is clear that identifying vulnerable customers requires a sophisticated and nuanced approach.

Let’s find out how we can plan to ‘see the wood for the trees’.

Building the foundations for success

Any company in 2023 should be able to collect, store and organise large volumes of data from various sources. Cloud computing environments have allowed elastic, scalable, secure, robust and cost-efficient data pipelines to be architected and managed in service of creating flexible 1st party (and 3rd party enriched) datasets.

But collecting data is just the beginning. In order to truly understand the underlying drivers of economic hardship, companies must also be able to analyse this data, enrich it, merge it with adjacent data sources and create the kind of querying flexibility that allows talented data professionals to ask difficult questions of these data sets.
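As a toy sketch of the kind of enrichment described above, the snippet below left-joins first-party billing records with an adjacent data source so a single query can combine signals from both. All keys, fields and thresholds here are hypothetical examples, not a prescribed schema.

```python
# Hypothetical first-party billing records, keyed by account ID.
billing = {
    "A001": {"arrears_days": 45},
    "A002": {"arrears_days": 0},
}

# Hypothetical adjacent source, e.g. notes from a call-centre system.
support_contacts = {
    "A001": {"hardship_call": True},
}

def enrich(primary, secondary):
    """Left-join secondary attributes onto the primary customer records."""
    return {
        key: {**record, **secondary.get(key, {"hardship_call": False})}
        for key, record in primary.items()
    }

unified = enrich(billing, support_contacts)

# A single query can now combine signals from both sources.
at_risk = [k for k, v in unified.items()
           if v["arrears_days"] > 30 and v["hardship_call"]]
```

In practice this join would happen in a warehouse or lakehouse layer rather than in application code, but the principle is the same: unify first, then query.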

Foundational data engineering involves collecting, cleaning and organising raw data to facilitate this flexibility. The process includes the extraction, transformation and loading (ETL) of data from various sources, ensuring data quality and fidelity to create focused, scalable data storage solutions.
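A minimal ETL sketch, using only the standard library, might look like the following. The field names (`account_id`, `arrears_days`, `monthly_income`) and the embedded sample data are illustrative assumptions; a real pipeline would read from production systems and load into a managed warehouse.

```python
import csv
import io
import sqlite3

# Hypothetical raw export from a billing system (note the missing value).
RAW = """account_id,arrears_days,monthly_income
A001,45,1100
A002,,2400
A003,120,950
"""

def extract(raw_csv):
    """Extract: parse the raw export into records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: enforce data quality by dropping incomplete records
    and coercing fields to the correct types."""
    cleaned = []
    for row in rows:
        if not row["arrears_days"]:
            continue  # record fails the quality check
        cleaned.append({
            "account_id": row["account_id"],
            "arrears_days": int(row["arrears_days"]),
            "monthly_income": float(row["monthly_income"]),
        })
    return cleaned

def load(rows, conn):
    """Load: persist clean records into a queryable store."""
    conn.execute(
        "CREATE TABLE accounts (account_id TEXT, arrears_days INT, monthly_income REAL)"
    )
    conn.executemany(
        "INSERT INTO accounts VALUES (:account_id, :arrears_days, :monthly_income)",
        rows,
    )

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)

# Once loaded, analysts can ask flexible questions of the data.
flagged = conn.execute(
    "SELECT account_id FROM accounts WHERE arrears_days > 90"
).fetchall()
```

The same extract–transform–load shape scales up to the cloud pipelines mentioned earlier; only the engines change.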

We believe it is essential to align data engineering objectives & data science goals with the wider business objectives, ensuring that data engineering efforts are focused on achieving targeted business outcomes.

This can be achieved by identifying the key business objectives and determining how data planning & data engineering can support these goals.

Once the foundations are in place, we can move to the really interesting phase – querying the data!

Asking great questions, finding novel answers

Collaboration between data engineering and data science teams is essential for achieving great data science & analysis outcomes. Data engineering teams should work closely with data science teams to ensure that data is of high quality and accessible, and may also need to support data scientists with the tools and infrastructure they require to be successful.

In an ideal scenario, data teams within financial services or energy organisations use their data analysis and modelling capabilities to develop targeted, data-driven interventions tailored to the specific needs of individual customers. This requires deep understanding and discipline expertise in areas like:

  • Feature engineering – Transforming raw data by identifying the variables and associated metrics that correlate with vulnerability measures
  • Predictive modelling – Analysing historical data, fed with real-time inputs, to make predictions grounded in the context of the objective data sets
  • Data visualisation – Creating environments where humans can interact with quantitative analysis outputs, qualify and contextualise model results, and decide whether to review automated decision-tree communications
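
The first two disciplines above can be sketched together: engineer features from raw account history, then score them with a simple logistic model. The coefficients below are hand-picked stand-ins for what a trained model would learn, and every field name and threshold is a hypothetical example.

```python
import math

def engineer_features(history):
    """Turn a raw payment history into model-ready features."""
    payments = history["payments"]  # e.g. [True, False, True, ...] per month
    missed_rate = payments.count(False) / len(payments)
    bill_to_income = history["avg_bill"] / history["monthly_income"]
    return {"missed_rate": missed_rate, "bill_to_income": bill_to_income}

# Illustrative coefficients only; in practice these would be learned from
# historical outcomes, not hand-tuned.
WEIGHTS = {"missed_rate": 3.0, "bill_to_income": 4.0}
BIAS = -2.5

def vulnerability_score(features):
    """Logistic link: map a weighted sum of features to a 0-1 score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

customer = {
    "payments": [True, False, False, True],
    "avg_bill": 180,
    "monthly_income": 900,
}
score = vulnerability_score(engineer_features(customer))
```

A score like this would feed the visualisation layer described in the third bullet, where a human reviews and contextualises the model output before any action is taken.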

By adopting a product-mode, iterative approach, teams can explore hypotheses, user needs and feature suitability over time, whilst training the models to ensure they accurately surface early warning signs, red flags and pressing issues.

By utilising machine learning algorithms to analyse patterns in customer data and identify early warning signs of potential financial distress or fuel poverty, organisations can become significantly better at identifying vulnerable customers. They can then prove to regulators that they are on the front foot in supporting vulnerable cohorts towards better outcomes.
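As one illustration of an early warning sign, the sketch below flags accounts whose recent energy consumption drops sharply below their own historical baseline, a pattern sometimes associated with self-rationing in fuel poverty. The window sizes and threshold are illustrative assumptions, not tuned values.

```python
from statistics import mean, stdev

def early_warning(monthly_kwh, recent_months=3, z_threshold=-2.0):
    """Flag an account when its recent average consumption falls more than
    `z_threshold` standard deviations below its own historical baseline."""
    baseline = monthly_kwh[:-recent_months]
    recent = monthly_kwh[-recent_months:]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (mean(recent) - mu) / sigma
    return z < z_threshold

# Hypothetical monthly kWh readings for two accounts.
steady  = [300, 310, 295, 305, 300, 298, 302, 299, 301]
dropped = [300, 310, 295, 305, 300, 298, 90, 80, 70]
```

Comparing each customer against their own baseline, rather than a population average, is what lets this kind of signal surface households that would look unremarkable in aggregate statistics.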

Ultimately, businesses in highly regulated environments want to embark on these projects and programmes of work not only to show regulators that they are taking widespread societal issues seriously, but also to drive towards their overall ambition of being (or becoming) ‘purpose-driven organisations’.

Our perspective

It is our belief that now, more than ever, there is a dire need for accurate and high-quality data engineering, data analysis and data science to support the scalable identification and service of vulnerable customers. 

Our data engineering teams have vast experience in setting up Azure and AWS environments for enterprise success, and we know first-hand how to build amazing digital products that map to the needs of people in complex circumstances.

If you feel your business could benefit from a consultation to leverage our experience, get in touch today.


Join Our Enterprise Workshop

How to Identify & Serve Vulnerable Customers

Sign Up Today!


Blair Walker
Head of Marketing