
Squaring the Circle: Your data governance webinar questions

Answers from the recent webinar by Nicola Askham and Janani Dumbleton

There were a vast number of questions asked before, during and after our webinar ‘Squaring the Circle: Using a data governance framework to support Data Quality’ on 4th September, and unfortunately we ran out of time to answer them all. So, Janani and I thought we would incorporate your data governance questions and turn this blog post into a virtual Q&A session.

What qualifications should members of the data governance committee have?

Nicola:

Choosing who should be a member of your data governance committee is not so much about qualifications but rather position and role within the organisation. I find that the most successful data governance committees are made up of two distinct groups of individuals:

  • The Data Owners for the organisation who are full members of the committee and get to approve or reject the various proposals considered.
  • Individuals with relevant expertise who are not involved in approving or rejecting proposals, but who attend to advise the committee. This group will include roles such as Data Governance Manager, Head of Compliance, Enterprise Data Architect and Head of Audit.

When identifying Data Owners, I never consider the qualifications they may have. I’m much more focused on identifying individuals who have the correct level of seniority in the organisation (as they need to have budget or resources available to support data quality activities) and are keen to support your data quality initiatives.

How do you structure a data governance team in a business with multiple brands collecting and using data?

Nicola:

This is not an easy question to answer without a better understanding of the structure of the organisation and whether the multiple brands are using or sharing any of the same data. However, if the organisation is of a considerable size, with the brands run fairly separately, I would consider a federated approach: separate data governance teams supporting the separate brands, with some kind of central function or team to coordinate efforts across the organisation and ensure uniformity of approach.

What is realistic for one person to achieve when introducing governance over a 2 year period?

Nicola:

Another difficult question, similar to the ‘how long is a piece of string?’ question! It depends very much on your organisation, how mature it is in terms of data management and how open to change it is. Speaking as someone who has been that one person introducing data governance, it is possible to make a real difference. In a two-year period I have been able to design and implement the foundations of a data governance framework. This has included drafting and getting approval for a data quality policy, identifying and briefing the data owners and the majority of the data stewards, setting up and running the data governance committee and implementing a data quality issue management process organisation-wide.

How do you keep 20 or more data stewards engaged in the data governance process?

Nicola:

Lots of communication, stakeholder management and energy! As you implement your data governance framework it is extremely important but sometimes challenging to keep your stakeholders engaged.

One mistake I made in the past was to spend a considerable amount of time identifying and briefing every data steward in the organisation before I asked them to start following any data governance processes. By the time I went back to some of them to implement a process, they had forgotten that they were a data steward, had changed job or even left the company! These days I only identify and brief data stewards as and when I have something for them to do. This makes a huge difference in keeping them engaged.

You also need to ensure that they are reminded regularly of what they need to do as data stewards and you need to have a communications plan to ensure that you are communicating with them at appropriate intervals. By this I don't just mean firing off a bland email every few months reminding them that they are a data steward. It needs to be varied and appropriate as to what actions you want them to take at that particular time.

If you do not have a regular meeting of your data stewards, it is important that the Data Governance Manager builds and maintains a good relationship with them all, speaking to them regularly (even if it's just for a coffee) and seeking their feedback on how things are going and taking that opportunity to deal with any concerns that they may have.

My company is launching a CRM platform. What would a data governance framework look like within that context?

Janani:

The data governance framework consisting of policies, roles, and processes would still apply to the CRM platform, and you can use the launch of a new CRM platform to implement data governance or even revisit the existing data governance framework.

Policies: When planning the CRM platform, it is critical to understand what data will be captured by the CRM and what corresponding policies should be in place. These policies may be inherited from your previous CRM or from existing systems that contain similar data, and it is imperative that you do not reinvent the wheel if they already exist. However, the CRM platform launch is an ideal opportunity to review these policies: are they still relevant, do they need changing, or do they require more stringent monitoring and embedding within the data processes? For example, legacy systems may not have had policies around the consistency of email address capture because the email channel was still growing in importance when those systems were set up. Over time, however, email addresses may have become more critical to the online operations of the business, and you may need to revisit the rules around email address capture and usage within the new CRM system.

Processes and Roles: Part of your CRM planning should have included a review of the business processes that will be executed and the roles and responsibilities of users within the CRM. These are ideal steps to latch on to, overlaying the data processes as well as the users' responsibilities towards the data. For example, the lead-to-opportunity process within a CRM would be mapped out as part of the system design: as a business you would have discussed the fields being captured on a prospect, the information held about the lead, and the criteria for when a lead becomes an opportunity. There would also have been discussion of who is able to add a prospect or lead, and who can promote a lead to an opportunity. This is an ideal place to test whether policies around quality and integrity are in place for each of the fields being captured (what makes entered data valid, whether a field must be completed on entry, and so on). It is also an opportunity to determine the responsibility of users towards the data: who can add new data or update records, whether data can be deleted, and who can view data.
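To make the idea of field-level quality and integrity rules concrete, here is a minimal sketch in Python. The field names and the rules themselves are illustrative assumptions, not taken from any specific CRM product:

```python
import re

# Hypothetical quality/integrity rules for fields captured on a CRM lead.
# Field names ("email", "company_name", "lead_status") are illustrative only.
LEAD_RULES = {
    # email must look deliverable (very rough pattern for illustration)
    "email": lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
    # company name must be completed on entry
    "company_name": lambda v: bool(v and v.strip()),
    # status must come from an agreed reference list
    "lead_status": lambda v: v in {"new", "qualified", "opportunity"},
}

def validate_lead(record):
    """Return the list of fields that fail their quality/integrity rule."""
    return [field for field, rule in LEAD_RULES.items()
            if not rule(record.get(field))]

lead = {"email": "jane@example.com", "company_name": "Acme Ltd",
        "lead_status": "new"}
print(validate_lead(lead))  # → [] (all rules pass)
```

In practice these rules would live in the governance framework's central repository rather than in application code, so that every system capturing the same data enforces the same definitions.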

What is the most efficient way of cleaning old data?

Janani:

I would recommend that businesses cleaning legacy data for the first time, or after a long time, review the scope and relevance of the data being cleansed: do you need to clean all the data, and what will the impact of the clean be for you as a business? If the data being cleaned relates to consumers or businesses, things to consider include:

  • Determine what minimum standards of uniqueness, completeness, validity and accuracy are required by the business. For example, to determine accuracy, do you have the right reference data at hand and will it be adequate?
  • Validate addresses, email addresses and telephone numbers to see if they are actually deliverable.
  • Validate if the consumers or businesses are actually at the address you have for them. Particularly for consumers, suppression checks on gone-aways or deceased records may be a compliance requirement.
  • Determine what data will help you weed out duplicates. For example, you may want to use a combination of name, address, email and telephone, and vary the confidence in results based on how populated and valid these fields are. Cleaning and standardising data will help improve duplicate matching.
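The last point, varying match confidence by how populated the fields are, can be sketched as follows. The weights and the exact-match comparison are assumptions for illustration; a real cleanse would use fuzzy matching and tuned weights:

```python
# Illustrative sketch: score potential duplicates on name, email and phone,
# weighting confidence by which fields are actually populated in both records.
# The weights are arbitrary assumptions, not a prescribed standard.
def normalise(value):
    """Standardise a value before comparison (trim and lowercase)."""
    return (value or "").strip().lower()

def duplicate_confidence(a, b, weights={"name": 0.3, "email": 0.5, "phone": 0.2}):
    """Return a 0..1 confidence that records a and b are duplicates,
    ignoring any field that is empty in either record."""
    score = total = 0.0
    for field, weight in weights.items():
        va, vb = normalise(a.get(field)), normalise(b.get(field))
        if va and vb:          # only compare fields populated on both sides
            total += weight
            if va == vb:
                score += weight
    return score / total if total else 0.0

a = {"name": "Jane Doe", "email": "JANE@x.com", "phone": ""}
b = {"name": "jane doe", "email": "jane@x.com", "phone": "555 1234"}
print(duplicate_confidence(a, b))  # → 1.0 (name and email agree; phone ignored)
```

Because the denominator only counts populated fields, a match on name and email alone still yields high confidence, while two sparsely populated records produce a low, honest score.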

Often legacy data may contain data about customers or products that are no longer relevant, or the related transactional data has already been archived. It is important that you determine a priority for the data being cleansed to ensure resources are effectively used. However, this is also an opportunity to determine what the next best steps should be. Once the data is cleaned, it will deteriorate again, and if you do not have a data quality strategy for managing ongoing changes, the problem can snowball if it is only reviewed once every few years. Compliance-heavy industries such as financial services are more at risk of fines from regulators for not having the most up-to-date and accurate information about customers.

When considering a cleanse, I would recommend reviewing the types of data quality issues and seeing whether they can be addressed through preventative measures at the point of data entry, such as real-time validation of address, email and mobile data, or changes to the data entry processes to ensure that validity rules are applied to new data being entered. Frequent bulk data cleanses will also help you keep on top of the problem.

Have you used an overarching IT system in order to marry all data quality strands you are discussing under data governance?

Janani:

The data governance framework that I use is split across the three stages of data quality management: Analyse, Improve and Control. Typically, you can use one technology platform, such as Experian Pandora, to manage all three stages. This ensures you have one consistent approach to the governance framework, reduces redundant documentation and ensures that the governance team can collaborate efficiently. The single-platform approach allows you to execute across the stages in a controlled manner:

  1. Discover and analyse the state of data quality, document data quality issues, establish data definitions and determine the policies or rules that govern the data.
  2. Prototype improvements and bulk cleanse data, including standardizations and transformations through usage of built in functions and use of external reference data.
  3. Monitor the performance of data quality rules and policies over time, report on performance and refine data dictionary and data quality rules with changing data, ensuring you have audit control.
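The Control stage in step 3, monitoring rule performance over time, can be sketched in a few lines. The rule, the 95% threshold and the field names are assumptions for illustration; a dedicated platform would manage this with full audit control:

```python
# Sketch of the Control stage: track the pass rate of a data quality rule
# over successive runs, so deterioration is visible between bulk cleanses.
# The rule name, threshold and sample data are illustrative assumptions.
from datetime import date

def pass_rate(records, rule):
    """Fraction of records that satisfy a data quality rule."""
    checked = [rule(r) for r in records]
    return sum(checked) / len(checked) if checked else 1.0

history = []  # one (run_date, rule_name, rate) entry per monitoring run

def record_run(run_date, rule_name, records, rule, threshold=0.95):
    """Measure a rule, log the result, and flag runs below the target."""
    rate = pass_rate(records, rule)
    history.append((run_date, rule_name, rate))
    if rate < threshold:
        print(f"{run_date}: '{rule_name}' at {rate:.0%}, "
              f"below {threshold:.0%} target")

customers = [{"email": "a@x.com"}, {"email": ""}, {"email": "b@y.com"}]
record_run(date(2014, 9, 1), "email populated", customers,
           lambda r: bool(r["email"]))
# prints a warning, since only 2 of 3 records pass the rule
```

Keeping the run history rather than only the latest figure is what lets you report trends and justify preventative work to the governance committee.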

However, improving data quality is not just about being reactive with a bulk cleanse. It will include the introduction of real-time data quality validations in front-end systems, as well as changes to the actual data processes within multiple systems. The end goal of data quality governance should be to prevent issues from entering your data estate in the first place. This is where you may need to rely on technical solutions that sit outside the central data quality management platform, purely because your systems infrastructure may determine how these preventative measures are implemented. However, if you have a centralised platform that stores your business's policies and rules, best practice is to capitalise on that central repository, so that rules and policies are not duplicated and do not fall outside the governance framework.

If you want to discuss your data governance and data quality issues with myself and Janani; and learn with your industry peers, then join us at the upcoming roundtable event on the 18th September. Register for your place today!
