Many companies are accelerating their efforts to build analytics capabilities that will position them to realize the power of big data. The problem is that many companies exhibit key flaws in their core processes for managing and monitoring the quality of their current data assets, and these flaws can lead to costly data cleanup efforts and even compliance violations as companies scale their data acquisition and use. In this article we will cover three common fixes most companies can implement to protect their data quality and prepare them to enjoy the tremendous benefits of big data before it becomes a big problem.
The first step in positioning your company to leverage big data is to clean up what you already have. Insurance companies have reams of data rich with intelligence, but much of the existing value is lost through data redundancies and poorly captured data across business and core processes. Companies should focus on cleaning and integrating existing data sources and piloting analytics capabilities. This is how you start small, start fast, and capture return quickly. Chances are you’ll discover a few extra kinks you’ll be glad you found before making the leap into larger initiatives.
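To make the cleanup step concrete, here is a minimal sketch of normalizing and deduplicating existing records. The policyholder fields and records are hypothetical, invented purely for illustration; real cleanup rules would come from your own data profiling.

```python
# Hypothetical policyholder records showing the defects described above:
# redundant rows and inconsistently captured fields.
records = [
    {"policy_id": "P-001", "name": "Jane Doe",   "state": "ny"},
    {"policy_id": "P-001", "name": " jane doe ", "state": "NY"},
    {"policy_id": "P-002", "name": "John Smith", "state": "CA"},
    {"policy_id": "P-003", "name": "Ana Lee",    "state": "ca"},
]

def normalize(rec):
    """Standardize how each field was captured before comparing records."""
    return {
        "policy_id": rec["policy_id"].strip().upper(),
        "name": rec["name"].strip().title(),
        "state": rec["state"].strip().upper(),
    }

# Deduplicate on the normalized form, keeping the first occurrence.
seen = set()
cleaned = []
for rec in map(normalize, records):
    key = (rec["policy_id"], rec["name"])
    if key not in seen:
        seen.add(key)
        cleaned.append(rec)

print(len(cleaned))  # the two variants of P-001 collapse into one row
```

The point of normalizing before deduplicating is that the two P-001 rows are not byte-identical; only after standardizing capture do they reveal themselves as the same record.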
Second, begin standardizing the business rules for how data is captured and managed, to avoid mutations or duplications in the data you already have. A common flaw across industries is the lack of standard, centralized business rules and data dictionaries governing how data is captured and maintained. This lack of standardization will continue to leak data quality issues into your organization and undermine expensive initiatives to implement data cleansing or new capabilities.
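A centralized data dictionary can be as simple as one shared definition of each field's capture rule, enforced at every intake point. The sketch below is an illustrative assumption, not a prescribed design; the field names and patterns are invented for the example.

```python
import re

# A minimal sketch of a centralized data dictionary: each field's capture
# rule lives in one place instead of being re-implemented per application.
# Field names and patterns here are illustrative assumptions.
DATA_DICTIONARY = {
    "policy_id": {"required": True,  "pattern": r"^P-\d{3}$"},
    "state":     {"required": True,  "pattern": r"^[A-Z]{2}$"},
    "zip_code":  {"required": False, "pattern": r"^\d{5}$"},
}

def validate(record):
    """Return a list of rule violations for one incoming record."""
    errors = []
    for field, rule in DATA_DICTIONARY.items():
        value = record.get(field)
        if value is None or value == "":
            if rule["required"]:
                errors.append(f"{field}: missing required value")
            continue
        if not re.match(rule["pattern"], str(value)):
            errors.append(f"{field}: '{value}' violates capture rule")
    return errors

print(validate({"policy_id": "P-042", "state": "NY"}))     # []
print(validate({"policy_id": "42", "state": "new york"}))  # two violations
```

Because every system validates against the same dictionary, a change to a capture rule is made once and takes effect everywhere, which is exactly the leak-prevention the paragraph above describes.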
Third, make sure you have the right governance management framework in place to drive data-centric change and monitor compliance with policies and rules. At a minimum, a hierarchical structure should exist, similar to a project structure. An executive steering committee should hold a defined set of responsibilities: monitoring issue management, return on investment, and overall company compliance with internal controls and external regulation. A middle tier, usually composed of senior managers, VPs, or high-level directors, is responsible for monthly planning and monitoring of data issue resolution. They are, effectively, the day-to-day managers of data-centric change and should meet regularly to make decisions, remove barriers, and ensure the appropriate business stakeholders are ‘in-the-know’. This group should include a rotating spectrum of representatives (depending on the agenda) from functions across the business, and should not be managed in a silo by a single data or technology team. The remaining tier is the execution level: the day-to-day executors of the tasks and projects put in place to achieve governance maturity. This layer consists of data stewards and other data SMEs from across the organization. Often the necessary meetings and platforms already exist, so integrating these groups requires only a few small changes and is very practical.
Leveraging big data certainly offers real possibilities, but ignoring internal realities can turn the great data opportunity into a costly nightmare. A higher state of data management maturity, and the returns that come with it, is more easily achieved by leveraging existing assets and human resources than by investing in new data technologies and capabilities. Companies have a shared opportunity and responsibility to standardize the management of their data, improve data quality, and implement the appropriate governance structures that tie it all together.
Securing your data architecture is a continuous process, but the benefits will keep you “out of the line of fire”.