Every data quality initiative eventually arrives at the same conclusion: the data is only as good as the behaviour that produces it. And behaviour is a culture question, not a systems question. The organisations that crack data quality do not do it by installing better tooling. They do it by changing what gets rewarded.
The tooling trap
When data quality degrades, the instinctive response is to add a layer of validation. A new field becomes mandatory. A new approval step is inserted. A new dashboard tracks completeness scores. None of this addresses why the data was poor in the first place.
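The limitation is easy to see in miniature. A minimal sketch (field names and values are hypothetical, for illustration only): a mandatory-field check guarantees presence, not accuracy, so a rushed placeholder entry passes exactly as cleanly as an honest one.

```python
# A mandatory-field validator: checks that required fields are present
# and non-empty, which is all a validation layer can see.
def validate(record: dict, required: tuple[str, ...]) -> bool:
    return all(record.get(f) not in (None, "") for f in required)

honest = {"industry": "Pharmaceuticals", "region": "EMEA"}
rushed = {"industry": "Other", "region": "N/A"}  # approximate, but "complete"

print(validate(honest, ("industry", "region")))  # True
print(validate(rushed, ("industry", "region")))  # True: the validator cannot tell the difference
```

Both records pass, which is the point: the tooling can enforce completeness scores, but not the behaviour that makes the values true.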
Poor data quality almost always reflects a rational decision by someone under time pressure. Filling in a field accurately takes thirty seconds longer than filling it in approximately. Thirty seconds, repeated across ten thousand entries, is not trivial. Staff will consistently choose speed unless accuracy is made to matter — to them, not just to the analysts downstream.
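The scale of that trade-off is worth making explicit. A back-of-the-envelope calculation, using only the figures from the paragraph above:

```python
# Aggregate cost of "thirty seconds longer", at the scale quoted in the text.
SECONDS_EXTRA_PER_ENTRY = 30   # extra time to fill a field accurately
ENTRIES = 10_000               # entries at which the trade-off is repeated

total_seconds = SECONDS_EXTRA_PER_ENTRY * ENTRIES
total_hours = total_seconds / 3600

print(f"{total_hours:.1f} hours")  # 83.3 hours
```

Roughly eighty-three staff-hours per ten thousand entries: from the individual's side of the desk, cutting the corner is entirely rational.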
Making accuracy matter
The organisations with consistently high data quality share one characteristic: data accuracy is visible, and it is tied to outcomes that the person entering the data can see. When a sales team’s pipeline tool shows them — immediately — how incomplete records affect their own forecast accuracy, behaviour changes without any mandate.
This is a design principle, not a disciplinary one. It is about closing the feedback loop between the person producing data and the consequences of its quality. AI systems are particularly well positioned to close this loop, because they can surface downstream impacts faster and more specifically than any human reviewer.
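What closing the loop looks like in practice can be sketched in a few lines. The schema below (owner, required pipeline fields, the wording of the message) is entirely hypothetical; the point is the shape of the mechanism: measure completeness per person, and report the consequence back to that same person, immediately.

```python
from dataclasses import dataclass

# Hypothetical required fields for a pipeline record.
REQUIRED_FIELDS = ("amount", "close_date", "stage", "next_step")

@dataclass
class Record:
    owner: str
    fields: dict

def completeness(record: Record) -> float:
    """Fraction of required fields actually filled in (non-empty)."""
    filled = sum(1 for f in REQUIRED_FIELDS if record.fields.get(f))
    return filled / len(REQUIRED_FIELDS)

def feedback(records: list[Record]) -> dict[str, str]:
    """Per-owner message tying record completeness to that owner's own forecast."""
    by_owner: dict[str, list[float]] = {}
    for r in records:
        by_owner.setdefault(r.owner, []).append(completeness(r))
    return {
        owner: (
            f"{sum(scores) / len(scores):.0%} of your required pipeline fields "
            "are filled; forecast confidence for your deals is discounted accordingly."
        )
        for owner, scores in by_owner.items()
    }
```

The message goes to the record's owner, not to an analyst downstream: the person producing the data sees the cost of its quality while the record is still in front of them.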
The identity dimension
There is also an identity dimension to data quality that rarely appears in technical documentation. In organisations where data entry is seen as administrative — beneath the role — quality suffers structurally. The task is resented and delegated to whoever is least busy, regardless of whether they have the context to complete it accurately.
Changing this requires changing the narrative around data, not the validation schema. It requires leaders who talk about data as a strategic asset in terms that frontline staff find credible, not abstract. When people understand why their input matters, and can see that it does, quality follows.
Data quality problems are almost never solved by better databases. They are solved by organisations that understand what they are asking people to do, and why.