Every industry has a need for higher standards of data quality. The bigger the business you run, the harder it becomes to keep a finger on the pulse of customer engagement, and it's no longer possible to observe public opinion merely through face-to-face communication. When you're dealing with millions of consumers instead of dozens, you need data to paint the complete picture.
Too many companies rely on inaccurate data. They've collected massive troves of information about their customers - past, present and future - but they have no way of verifying it. People may have filled out forms incorrectly. They might have lied. They might have given contact information that was correct at the time but has since become outdated. The potential sources of error in data quality are numerous.
Companies need to take action if they want to combat this problem. Data quality woes will not go away on their own. According to Computer Weekly, these issues will persist until business IT leaders work actively to eliminate them. Tony Lock, program director at analyst group Freeform Dynamics, likens the problem to a fundamental law of physics.
"One of the most fundamental laws of physics, entropy, essentially states that unless energy from the outside is applied, the amount of disorder in an enclosed system will increase over time," Lock wrote. "Restated in IT terms, this means the quality of data held in IT systems will deteriorate unless steps are taken to maintain its accuracy and consistency."
The difficulty lies in the diversity of issues that companies face with data. There's no single problem, and therefore no single way to address data quality at the enterprise level.
Rather, there are a few different areas companies need to focus on.
The conventional wisdom is that every time a company adds a large cluster of data, it's time to check for quality. Say you've just put a new form on your website, and thousands of people have filled it out. Or maybe you've just completed a big business deal and acquired a large amount of data. A big influx is when data quality errors are most likely to appear, so that's when to check for them.
In reality, though, quality mistakes don't always crop up suddenly - instead, they drift into the picture gradually. "Data drift" is an ongoing issue, and companies should be making routine checks for data quality. It shouldn't be a one-time pursuit.
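One way to make those routine checks concrete is to profile each new batch of records and compare it against a stored baseline, flagging fields whose error rates have crept upward. The sketch below does this with null rates; the field names and the 5% threshold are illustrative assumptions, not a standard.

```python
# Minimal sketch of a routine "data drift" check: compare each field's
# null rate in a new batch against a stored baseline and flag fields
# whose rate has risen by more than a threshold. Field names and the
# 0.05 threshold are illustrative assumptions.

def null_rates(records, fields):
    """Fraction of records with a missing/empty value for each field."""
    total = len(records)
    return {
        f: sum(1 for r in records if not r.get(f)) / total
        for f in fields
    }

def detect_drift(baseline, current, threshold=0.05):
    """Return the fields whose null rate rose by more than the threshold."""
    return [
        f for f, rate in current.items()
        if rate - baseline.get(f, 0.0) > threshold
    ]

baseline = {"email": 0.01, "phone": 0.02}
batch = [
    {"email": "a@example.com", "phone": ""},
    {"email": "", "phone": ""},
    {"email": "b@example.com", "phone": "555-0100"},
]
current = null_rates(batch, ["email", "phone"])
print(detect_drift(baseline, current))  # → ['email', 'phone']
```

Run on a schedule, a check like this turns data quality from a one-time cleanup into the ongoing monitoring the "drift" problem actually requires.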
People make mistakes all the time when filling out forms online. They misspell their email addresses, they accidentally enter the wrong phone number, or they give an outdated address instead of a current one. These things happen, and they can be hard to detect.
It's important that companies try, though. If they allow address management errors to slip through the cracks undetected, they risk sending erroneous mailings or making unsuccessful attempts at marketing and sales efforts. Human errors aren't easy to spot, but businesses must do what they can.
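One of the more tractable human errors is a misspelled email domain. A minimal sketch of catching these, assuming a list of well-known providers (the list and the similarity cutoff are illustrative assumptions):

```python
# Flag likely typos in user-submitted email addresses by fuzzy-matching
# the domain against common providers. The provider list and the 0.8
# similarity cutoff are illustrative assumptions.
import difflib

KNOWN_DOMAINS = ["gmail.com", "yahoo.com", "outlook.com", "hotmail.com"]

def suggest_domain(email, cutoff=0.8):
    """Return a likely intended domain if the given one looks misspelled."""
    if "@" not in email:
        return None
    domain = email.rsplit("@", 1)[1].lower()
    if domain in KNOWN_DOMAINS:
        return None  # already a known domain; nothing to fix
    matches = difflib.get_close_matches(domain, KNOWN_DOMAINS, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suggest_domain("jane@gmial.com"))  # → gmail.com
print(suggest_domain("jane@gmail.com"))  # → None
```

A check like this can run at form-submission time, offering a "did you mean?" prompt rather than silently storing an address that will bounce.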
It's very common for companies that work with data to engage in verification - put simply, the act of spot-checking data as it comes in. The moment that data is created, captured or updated, companies should be checking it for accuracy. Verification can be done at the point of capture on the front end, as well as on the back end, after the data has been captured. Ideally, companies would do both, accounting both for human error and for data degradation over time.
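The two passes can share the same rules: run them once when a record is captured, and again periodically over everything already stored. A minimal sketch, where the field rules are illustrative assumptions rather than a real schema:

```python
# Run the same verification rules at the point of capture (front end)
# and in a periodic sweep over stored records (back end). The "email"
# and "zip" rules here are illustrative assumptions, not a real schema.
import re

RULES = {
    "email": lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
    "zip":   lambda v: bool(re.fullmatch(r"\d{5}", v or "")),
}

def verify(record):
    """Return the list of fields that fail their rule."""
    return [f for f, ok in RULES.items() if not ok(record.get(f))]

# Front end: reject or correct a record the moment it is captured.
new_record = {"email": "pat@example", "zip": "90210"}
print(verify(new_record))  # → ['email']

# Back end: periodically re-check everything already stored, since
# data that was valid at capture can degrade over time.
stored = [{"email": "a@b.co", "zip": "123"}, {"email": "c@d.io", "zip": "10001"}]
flagged = [(i, verify(r)) for i, r in enumerate(stored) if verify(r)]
print(flagged)  # → [(0, ['zip'])]
```

Keeping one set of rules for both passes means the front end and the back end can never disagree about what "valid" means.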
Quality isn't a "one and done" deal. It's a constant point of emphasis for businesses that care about understanding their clientele.