Not Without Hurdles

It’s no secret that banks have more data on their customers than most brands. This presents an exciting opportunity to leverage that data to better personalize user experiences. However, banks face several hurdles when working with that data, including:

Hundreds of internal data sources:

The number of data sources is ever-growing. After every M&A, banks add new systems and more streams of data into their IT infrastructure. Each system holds different information and a limited view of the consumer. The data could be stored in many forms, including relational databases, XML data, data warehouses and enterprise applications, such as ERP and CRM. This creates hundreds of data silos, each reflecting a small slice of the consumer. The fragmented data must be addressed in order to gain a better view of the customer.
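One way to address the fragmentation is a thin normalization layer that maps each silo's native shape into a common record. A minimal sketch, assuming a hypothetical common schema of id, name and email (the field names and the relational/XML shapes are illustrative, not any bank's actual systems):

```python
import xml.etree.ElementTree as ET

# Hypothetical common schema: every silo is normalized to the same dict shape.
def from_relational(row):
    # row: a (customer_id, name, email) tuple from a relational source
    return {"id": row[0], "name": row[1], "email": row[2]}

def from_xml(fragment):
    # fragment: a customer element exported from an XML-based source
    node = ET.fromstring(fragment)
    return {"id": int(node.get("id")),
            "name": node.findtext("name"),
            "email": node.findtext("email")}

# Two silos describing the same customer in different formats
records = [
    from_relational((42, "Jane Doe", "jane@example.com")),
    from_xml("<customer id='42'><name>Jane Doe</name>"
             "<email>jane@example.com</email></customer>"),
]
# Both sources now yield the same normalized record
```

Once every source speaks the common schema, downstream systems can compare and combine records without caring where they came from.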

Growth of external and unstructured data:

Banks also have a large amount of external and unstructured data about their customers in the form of social media and web searches, website visits, streams, videos and so forth. In fact, a large portion of the data being created is either unstructured or semi-structured, and cannot be easily stored and analyzed using traditional systems. At the same time, the share of data that businesses can actually process is steadily shrinking, because traditional systems were not designed for today’s depth and breadth of unstructured data.

Storing, indexing and analyzing massive data:

Dynamically scaling storage capacity without any disruption to mission-critical applications is a big challenge. Finding actionable insight among the massive structured and unstructured datasets, and delivering it with sub-millisecond latency, is like finding a needle in a haystack. Being able to query data across multiple clusters of commodity servers and aggregate the results into meaningful insights is increasingly difficult with traditional technologies.
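Querying across many commodity servers typically follows a scatter-gather pattern: each node computes a partial aggregate over its own shard, and a coordinator merges the partials. A minimal sketch, with plain in-memory lists standing in for real cluster nodes and transaction categories as illustrative data:

```python
from collections import Counter

# Hypothetical shards: transaction categories held on three separate nodes.
shards = [
    ["travel", "dining", "travel"],
    ["dining", "grocery"],
    ["travel", "grocery", "grocery"],
]

def partial_aggregate(shard):
    # Runs on each node: count categories locally, close to the data.
    return Counter(shard)

def merge(partials):
    # Runs on the coordinator: sum the per-node counts into one result.
    total = Counter()
    for p in partials:
        total.update(p)
    return total

totals = merge(partial_aggregate(s) for s in shards)
# totals: {"travel": 3, "grocery": 3, "dining": 2}
```

The design point is that only small partial aggregates cross the network, never the raw shards, which is what makes the pattern scale as nodes are added.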

Velocity of data creation:

The speed of data creation across multiple channels is unprecedented. Banks need to be able to process data faster than batch mode allows, or they will lose precious time in making offers that could win customer value. It’s become critical to process not only static data and consumer profiles, but also customer interactions in real time, so banks can gain actionable insights and make more timely offers.
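The contrast with batch mode can be sketched as a running aggregate updated per event, so an offer can fire the moment a threshold is crossed rather than after a nightly job. The spend amounts and threshold below are purely illustrative:

```python
# Hypothetical stream of card-spend events for one customer.
events = [120.0, 45.5, 300.0, 80.0]
OFFER_THRESHOLD = 400.0  # illustrative trigger for a personalized offer

running_total = 0.0
triggered_at = None
for i, amount in enumerate(events):
    running_total += amount  # updated per event, not in a nightly batch
    if triggered_at is None and running_total >= OFFER_THRESHOLD:
        triggered_at = i  # the offer can fire immediately, mid-stream

# Here the threshold is crossed at the third event (index 2);
# a nightly batch would only have noticed the next morning.
```

The same incremental-update idea underlies real stream processors; the sketch just makes the per-event timing explicit.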

Fragmented view of consumer:

Even if banks were to aggregate data from hundreds of internal and external sources, and put it into a unified system, the information would still sit in multiple silos. Additionally, matching information about a customer across multiple data sources is important, especially down to the individual level. In a nutshell, simply integrating and aggregating data from multiple sources does not provide a single view of the customer, something that is essential for more sophisticated personalized marketing and loyalty programs.
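Matching down to the individual level is essentially record linkage: records from different silos are keyed on a normalized identifying field, then merged into one profile. A minimal sketch under the assumption that a lowercased, stripped email is a reliable match key (real systems use much fuzzier matching across several fields):

```python
def normalize_key(record):
    # Assumption: email, lowercased and stripped, identifies the individual.
    return record["email"].strip().lower()

def unify(sources):
    profiles = {}
    for record in sources:
        key = normalize_key(record)
        # Merge non-empty fields from each silo into a single profile.
        profiles.setdefault(key, {}).update(
            {k: v for k, v in record.items() if v is not None})
    return profiles

# Hypothetical records for the same person from two silos
crm = {"email": "Jane@Example.com", "name": "Jane Doe", "segment": None}
web = {"email": "jane@example.com ", "last_visit": "2015-06-01"}

single_view = unify([crm, web])
# One profile, keyed "jane@example.com", combining name and last_visit
```

This is the step that turns "data from many sources in one system" into an actual single view of the customer.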

Organizational readiness and skillsets:

The volume, velocity and variety of unstructured data make it impossible for organizations to store, index, search and analyze massive amounts of data using traditional systems. In fact, traditional systems are inadequate for unstructured data, rapidly changing schemas and elastic scaling of storage. At the same time, most banks don’t have sufficient organizational expertise and skillsets to deal with the complexities of big data management systems.

Banks used to have to rely solely on customer-driven decisions. Now, once they acknowledge and overcome these hurdles, productively utilizing customer data allows them to determine what a customer is most interested in and create a personalized experience where content, products and/or services are presented to customers before they even realize they need them.