
Highlights

  • Identified and addressed bottlenecks in the bank’s data management systems, simplifying processes and enhancing the flow and usage of data.
  • Securely integrated a customer insights engine, making critical banking business intelligence intuitively available.
  • Massively improved the reliability of data processing, enabling wealth management advisors to request or rerun data processing jobs on demand.

The Problem

A major Australian bank’s wealth managers and advisors relied on legacy systems to collect massive amounts of customer banking data and crunch it for insights and trends on portfolio performance.

However, the ageing technologies that traditionally powered their wealth management struggled to keep up with the massive surge in the data being generated over time. The reliability of these systems dropped to the point where rerunning failed batch jobs regularly overshot daily SLAs.

The inability to obtain fresh data from disparate systems in a fast, event-driven way became a major bottleneck, preventing wealth advisors from providing customised and timely advice to their clients.

The Opportunity

The decline in data freshness and the inability to provide customised and timely wealth management services were affecting overall customer experience as well as advisor job satisfaction, leading to a drop in customer conversion and retention. This presented an opportunity to reimagine and rebuild the customer insights capability on Google Cloud Platform. In doing so, the bank could address the immediate issues while also laying the foundation for a robust, data-driven engine for future decision making.

The Solution

The solution was a cloud-based data management system built on Google Cloud Platform, designed to ingest and process both batch and streaming data from multiple sources in real time. It standardised disparate data into uniform, globally recognised formats and conventions, and it included fully automated Extract, Transform, Load (ETL) pipelines, built using Google’s Dataflow, for data ingestion, wrangling, modelling, aggregation, and standardisation. A central component was a customer insights engine, securely integrated with an online web application. The system was composed of several Google Cloud products on a serverless architecture, allowing it to scale with data demand.

The benefits of this solution are as follows:

  • Improved Data Processing: The ability to ingest and process both batch and streaming data in real time significantly improved the speed and efficiency of data processing, allowing much more timely analysis and decision-making.
  • Enhanced Interoperability: By standardising disparate data into uniform, globally recognised formats and conventions, the solution improved interoperability across systems, making the data more usable and accessible and leading to more efficient operations.
  • Accessible Business Intelligence: The customer insights engine, integrated with an online web application, made critical banking business intelligence intuitively available to wealth managers, advisors, and customers. This democratised access to important information and insights.
  • Scalability and Cost Efficiency: The serverless architecture of the solution, built on Google Cloud Platform, allowed it to scale according to data demand. This eliminated the need for the bank to manage server infrastructure and optimised operational costs.

Our Approach

Identify Bottlenecks and Opportunities: The first step was to work closely with stakeholders to identify the major bottlenecks and the opportunities to simplify complex business processes. This meant understanding existing workflows and pinpointing the areas that could be streamlined or improved.

Standardise Data: Once the bottlenecks and opportunities were identified, the next step was to standardise the disparate data into uniform, globally recognised formats and conventions. This was crucial for increasing interoperability and making the data usable across different systems.
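
As a flavour of this step, the sketch below normalises a legacy transaction record to ISO 8601 dates, ISO 4217 currency codes, and exact decimal amounts. The field names and legacy formats are illustrative assumptions, not the bank’s actual schema.

```python
from datetime import datetime
from decimal import Decimal

# Hypothetical mapping from legacy source-system currency labels to ISO 4217 codes.
CURRENCY_CODES = {"AUD$": "AUD", "US$": "USD"}

def standardise_transaction(raw: dict) -> dict:
    """Normalise one legacy record into a standard shape (illustrative fields)."""
    return {
        "transaction_id": str(raw["txn_id"]),
        # Dates normalised to ISO 8601 (yyyy-mm-dd).
        "booked_date": datetime.strptime(raw["date"], "%d/%m/%Y").date().isoformat(),
        # Currency labels normalised to ISO 4217 codes.
        "currency": CURRENCY_CODES.get(raw["ccy"], raw["ccy"]),
        # Monetary amounts parsed as exact decimals, never floats.
        "amount": str(Decimal(raw["amount"].replace(",", ""))),
    }

print(standardise_transaction(
    {"txn_id": 42, "date": "07/03/2023", "ccy": "AUD$", "amount": "1,250.75"}
))
```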

Architect ETL Pipelines: Mantel Group architected fully automated big data ETL (Extract, Transform, Load) pipelines. These pipelines were designed to securely ingest batch and streaming data from several sources in real time.
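
Dataflow pipelines are written with the Apache Beam SDK, and a minimal sketch of a streaming ingestion pipeline in that style is shown below. The subscription and table names are placeholders, the standardisation step is a stub, and the BigQuery sink is chosen purely for illustration; the case study does not detail the production sources and sinks.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Illustrative resource names, not the bank's real topics or tables.
SUBSCRIPTION = "projects/my-project/subscriptions/txn-events"
TABLE = "my-project:wealth.standardised_transactions"

def standardise(record: dict) -> dict:
    # Stand-in for the fuller standardisation logic sketched earlier.
    return {"transaction_id": str(record["txn_id"]), "amount": record["amount"]}

def run():
    with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
        (
            p
            # Streaming ingestion; a batch branch could apply the same
            # transforms to file or database extracts instead.
            | "ReadEvents" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
            | "Parse" >> beam.Map(json.loads)
            | "Standardise" >> beam.Map(standardise)
            | "Write" >> beam.io.WriteToBigQuery(
                TABLE,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```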

Data Wrangling and Modelling: After the data was ingested, it was wrangled and modelled at scale to aggregate and standardise data from a number of incompatible data silos. This involved cleaning the data, handling missing values, and transforming variables.
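
The sketch below shows the kind of wrangling this involves, using pandas on a toy extract: dropping unusable records, imputing missing amounts, typing the date column, and aggregating daily net flows per portfolio. In production this logic ran inside the Dataflow pipelines at scale; the columns and rules here are illustrative.

```python
import pandas as pd

# Hypothetical extract of standardised transactions from two source silos.
df = pd.DataFrame({
    "portfolio_id": ["P1", "P1", "P2", "P2", "P2"],
    "booked_date": ["2023-03-01", "2023-03-02", "2023-03-01", None, "2023-03-03"],
    "amount": [1250.75, -310.00, 98.20, 45.00, None],
})

# Cleaning: drop records with no usable date, impute missing amounts as zero.
df = df.dropna(subset=["booked_date"]).fillna({"amount": 0.0})

# Transform: typed date column for time-based modelling.
df["booked_date"] = pd.to_datetime(df["booked_date"])

# Aggregate: daily net flow per portfolio, the shape downstream models consume.
daily = (
    df.groupby(["portfolio_id", "booked_date"], as_index=False)["amount"]
      .sum()
      .rename(columns={"amount": "net_flow"})
)
print(daily)
```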

Leverage Google Cloud Products: The solution leveraged powerful big data processing products on GCP such as Pub/Sub and Cloud Spanner. These services were used to unlock critical business insights with schemas designed for optimal performance.
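
For illustration, the snippet below publishes a standardised event to a Pub/Sub topic and reads a precomputed aggregate back from Cloud Spanner. The project, topic, instance, database, table, and column names are all placeholders rather than the bank’s real resources.

```python
from google.cloud import pubsub_v1, spanner

PROJECT = "my-project"  # placeholder

# Publish a standardised event for downstream pipelines to consume.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "txn-events")
publisher.publish(topic_path, b'{"transaction_id": "42", "amount": "1250.75"}').result()

# Read a precomputed daily aggregate from the Cloud Spanner serving store.
client = spanner.Client(project=PROJECT)
database = client.instance("wealth-instance").database("insights")
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT portfolio_id, net_flow FROM DailyFlows WHERE booked_date = @d",
        params={"d": "2023-03-01"},
        param_types={"d": spanner.param_types.STRING},
    )
    for row in rows:
        print(row)
```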

Integrate Customer Insights Engine: The next step was to securely integrate the customer insights engine with an online web application, making critical banking business intelligence intuitively available to wealth managers, advisors, and customers through an online portal.
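
The case study does not describe the integration mechanics, but one common pattern on Google Cloud is sketched below: a small Flask endpoint that serves insights only to callers presenting a valid Google-signed ID token. The route, audience, and claims handling are assumptions made for illustration.

```python
from flask import Flask, jsonify, request
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

app = Flask(__name__)
AUDIENCE = "https://insights.example.com"  # hypothetical OAuth audience

@app.get("/api/portfolios/<portfolio_id>/insights")
def portfolio_insights(portfolio_id):
    # Accept only requests carrying a valid Google-signed ID token,
    # e.g. one minted for the advisor portal's service identity.
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return jsonify(error="missing bearer token"), 401
    try:
        claims = id_token.verify_oauth2_token(
            auth.removeprefix("Bearer "), google_requests.Request(), AUDIENCE
        )
    except ValueError:
        return jsonify(error="invalid token"), 401

    # In the real system this would read precomputed insights from the
    # serving store (e.g. the Spanner query sketched earlier).
    return jsonify(portfolio_id=portfolio_id, requested_by=claims.get("email"))

if __name__ == "__main__":
    app.run(port=8080)
```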

Deploy the Solution: Once the solution was built and tested, it was deployed to production. A key component of this was ensuring that the solution was secure and that it met all the necessary regulatory and compliance requirements.
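
The deployment mechanism is not detailed in the case study; as one common approach, the options below would submit the Beam pipeline sketched earlier to the managed Dataflow service. The project, region, bucket, and worker cap are placeholders.

```python
from apache_beam.options.pipeline_options import PipelineOptions

# Submit the pipeline to the managed Dataflow service (names illustrative).
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="australia-southeast1",
    temp_location="gs://my-bucket/temp",
    streaming=True,
    # Workers autoscale with backlog, in line with the serverless,
    # scale-with-demand design described above.
    max_num_workers=10,
)
```

Constructing the earlier pipeline with these options, rather than the local defaults, is what promotes it from a local test run to a managed Dataflow job.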

Monitor and Optimise: After deployment, the solution was continuously monitored and optimised to ensure it ran efficiently and effectively. This involved using Google Cloud’s observability suite to track the solution’s performance and make any necessary adjustments or improvements.
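
As one concrete example of such monitoring, the sketch below reports a hypothetical data-freshness metric to Cloud Monitoring, against which alerting policies could then be defined. The metric name and value are illustrative.

```python
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()

# Report a hypothetical freshness metric so SLA breaches can alert early.
series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/wealth/data_freshness_seconds"
series.resource.type = "global"

now = time.time()
point = monitoring_v3.Point({
    "interval": monitoring_v3.TimeInterval(
        {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
    ),
    # Seconds since the last successful refresh (illustrative value).
    "value": {"double_value": 42.0},
})
series.points = [point]

client.create_time_series(name="projects/my-project", time_series=[series])
```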
