I was recently invited to take part in a ‘Chat with Channeliser’ about data quality. I always welcome any opportunity to talk about data quality because it’s a topic often dismissed as merely operational, but one which has big implications for organisations that don’t take it seriously. You can watch the full interview here, but to whet your appetite, here are a few of the key questions we covered.
These days, the problem of inaccurate data is complex – it’s no longer just a case of data being entered into a system without controls. It’s important to distinguish between bad data and data quality issues, because data isn’t always bad when it’s originally entered. A few examples: contact details decay as customers move house or change jobs, duplicates build up when systems are merged, and formats drift as data is passed between systems.
Put simply, it costs trust. Consumers are more aware than ever and expect organisations to manage their data in a secure and ethical way. If you have good data quality and can use it to serve the right message at the right time, you’ll have earnt their loyalty by showing the value you can provide. The opposite is also true, however – get it wrong and you risk customers looking elsewhere.
Data quality can sometimes appear less attractive to organisations that are eyeing up the next big thing in data. If, however, the data is not fit for the task required, you need to act before undertaking transformative projects. After all, the last thing you want to do is move bad data into your shiny new system.
Fortunately, there is a growing realisation that data quality is vital. AI and new technologies, for example, require visualisation of the data, and that can make errors far more obvious. What used to be a dry subject is now front of mind for organisations that want to drive value from new technology.
The cloud is currently driving an increase in the number of data migrations. It’s an industry worth billions, and there are more people moving more data between more systems than ever before. Yet user acceptance of a new system will hinge on the quality of the data: quite simply, it won’t be acceptable to bring old, inaccurate data into a shiny new cloud environment. It is in these instances that data quality tools and services can help with key activities – bulk cleansing in the pre-migration stage, establishing data quality rules to manage quality over time, and managing the migration in a controlled way.
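To make the pre-migration step concrete, here is a minimal sketch of what a set of data quality rules might look like in practice. The rules, field names, and sample records are all hypothetical, and real tooling (including Experian’s) is far richer – this just illustrates the idea of screening records against rules before they enter the new system.

```python
import re

# Hypothetical quality rules: each maps a field to a check that a
# record must pass before being migrated into the new system.
RULES = {
    "email": lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
    "postcode": lambda v: bool(v and v.strip()),
    "name": lambda v: bool(v and v.strip()),
}

def validate(record):
    """Return the list of fields that fail a quality rule."""
    return [field for field, check in RULES.items() if not check(record.get(field))]

def split_for_migration(records):
    """Separate records that are clean from those needing bulk cleansing."""
    clean, dirty = [], []
    for rec in records:
        (clean if not validate(rec) else dirty).append(rec)
    return clean, dirty

records = [
    {"name": "Ada Lovelace", "email": "ada@example.com", "postcode": "SW1A 1AA"},
    {"name": "", "email": "not-an-email", "postcode": "SW1A 1AA"},
]
clean, dirty = split_for_migration(records)  # second record flagged for cleansing
```

The same rules can then be re-run after go-live, which is what “managing quality over time” amounts to in practice.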
Without doubt, the GDPR is an important factor in the growing interest in data quality, especially with the compliance deadline in May. Many of the GDPR’s clauses require the personal data that businesses hold to be accurate.
One example is Subject Access Requests (SARs). Organisations have just 30 days to respond, and they’ll need to locate every instance of a personal record across their systems. This can be tricky if you have more than one system. The first task is to find where each record sits and reconcile the copies. If the data isn’t accurate, there is a risk you won’t be able to locate the record at all.
When it comes to the practicalities, the first task is finding all the data, seeing how it can be consolidated and brought into a centralised area – perhaps a data warehouse.
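As a rough illustration of that reconciliation step, the sketch below merges records from two hypothetical systems into a single consolidated view, keyed on a normalised email address. The system names, fields, and matching logic are illustrative assumptions only – real record matching uses far more sophisticated techniques than an exact key.

```python
def normalise_email(email):
    """Lower-case and trim so the same address matches across systems."""
    return (email or "").strip().lower()

def consolidate(*systems):
    """Merge records from several systems into one view per person.

    Records are keyed on normalised email; later systems fill in fields
    the earlier ones were missing. The result is the kind of single
    consolidated record you would need to answer a Subject Access Request.
    """
    merged = {}
    for records in systems:
        for rec in records:
            key = normalise_email(rec.get("email"))
            if not key:
                continue  # unmatchable without an identifier
            merged.setdefault(key, {}).update(
                {k: v for k, v in rec.items() if v}
            )
    return merged

# Two hypothetical source systems holding fragments of the same person
crm = [{"email": "Ada@Example.com", "name": "Ada Lovelace"}]
billing = [{"email": "ada@example.com ", "phone": "0123 456 789"}]
people = consolidate(crm, billing)
```

Note how the inaccuracies here (inconsistent casing, stray whitespace) are exactly the kind of quality issue that, left unfixed, would stop the two fragments being matched at all.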
At Experian, we can help by applying methodologies and tools that can look across systems to identify and catalogue where data is found. We then use additional tools and data governance rules to help implement the right data procedures, cleansing and governance.
In terms of data quality, we can also support organisations to bring their existing data up to a state where it is clean and fit for purpose.
If you’ve read this far, then hopefully you’ll agree that data quality really does matter. If you still need convincing, why not listen to me talk more about it here. You can also read more about Experian’s solutions on our data quality page.