Data Quality: The Operating Stepchild That Deserves a Seat at the Table
Part 1 of 4 in our series on data quality, analytics and AI readiness
Noland Cheng, Managing Partner, C&A Consulting
March 2026
For decades, data quality has been treated as a back-office burden, managed by operating staff throughout a firm, by data analysts or stewards inside discrete data functions, or by both, rather than committed to as a strategic enabler of enterprise performance. We would even pose that firms with a high-performing, metrics-driven culture need to tilt their operating organizations toward data and statistical understanding to maximize their opportunities, including AI, in a more competitive world where ‘the numbers don’t lie’.
As analytics and AI reshape financial services, organizations must finally elevate data accuracy and completeness to the core of business performance and decision-making.
Data Quality – Still Elusive After 40 Years
Even after decades of investment, many financial institutions still struggle to trust their data. Despite advances in analytics, automation, and AI, data quality remains an unresolved impediment, one that quietly erodes performance, drives up operating costs, and undermines confidence in decision-making.
The root cause isn’t the technology. It’s ownership – ownership of the business processes and, even more pointedly, ownership of the data those processes produce and of its accuracy, so that it can be relied upon. The accountability chain for data completeness, transparency and accuracy is still missing in most organizations.
Data quality lives everywhere, but accountability for it is often found nowhere, or only piecemeal, and it is too often viewed as a control function rather than a business driver. The truth lies somewhere in between. As analytics, robotics/machine learning and AI initiatives accelerate, firms are discovering that without a sound data foundation, even the most advanced models fall short of expectations and deliver inconsistent or misleading results. Even bringing in third-party platforms can be modestly successful but short-lived, and the deeper, data-centric practices can be overlooked in a tactically managed program.
C&A’s experience across the financial services sector demonstrates how firms can successfully embed data management as a core tenet of their operating culture at the line-management level, a culture that respects and demands end-to-end data integrity. In some regards, this takes a page from history, bringing back Deming’s management theory as the data driver of today: how can one design the defects out of data management? It’s all about the quality of the work performed and improving total organizational performance. These practices align data quality and governance with business accountability, measure quality as a performance metric, and continuously engineer defects out of processes instead of compensating for them downstream. The result is a virtuous Plan – Do – Check – Act cycle with data at the heart of its fact-based methodology.
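As a concrete illustration of measuring quality as a performance metric (the ‘Check’ step of that cycle), the sketch below computes per-field completeness and validity rates for a handful of records. It is a minimal sketch only: the field names, validity rules, and sample values are assumptions for illustration, not a prescribed implementation.

```python
import pandas as pd

# Hypothetical trade records; the fields, values, and validity rules are illustrative only.
records = pd.DataFrame({
    "account_id": ["A100", "A101", None, "A103"],
    "notional":   [1_000_000, -250_000, 500_000, None],
    "currency":   ["USD", "usd", "EUR", "USD"],
})

VALID_CURRENCIES = {"USD", "EUR", "GBP", "JPY"}

def quality_scorecard(df: pd.DataFrame) -> pd.DataFrame:
    """Per-field completeness and validity rates, expressed as fractions of all records."""
    completeness = df.notna().mean()  # share of non-missing values in each field
    validity = pd.Series({
        "account_id": df["account_id"].str.match(r"^A\d+$", na=False).mean(),
        "notional":   (df["notional"] > 0).mean(),        # missing values compare as False, so they count as invalid
        "currency":   df["currency"].isin(VALID_CURRENCIES).mean(),
    })
    return pd.DataFrame({"completeness": completeness, "validity": validity})

# The "Check" step: publish the scorecard each cycle, track the trend, and treat any
# decline as a defect to be engineered out of the upstream process, not patched downstream.
print(quality_scorecard(records))
```

Published on a regular cycle and tracked as a trend, a scorecard like this turns quality into a performance metric that line management can own, which is the point of the Plan – Do – Check – Act loop.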
The payoff is significant: lower exception costs, sharper customer insight, stronger risk control, and a faster, more attainable path from high-end automation to robotic processing to AI adoption. It also connects people to this virtuous cycle, the interface through which today’s ‘human in the loop’ remains relevant tomorrow. In today’s environment, quality data is no longer a support function. It’s the foundation for strategic advantage.
Why Data Quality Still Fails: Ownership, Accountability, and Culture
After decades of investment, most financial institutions still can’t fully trust their own data. Despite modern tools, dashboards, and analytics platforms, the same issue persists. Data quality remains an unspoken barrier between potential and performance. Management highlights it, but the undertaking to remediate the causes of quality failures is often left unexamined, unanswered or simply deferred indefinitely. The tougher questions aren’t addressed.
It’s not that leaders don’t care about their data. It’s that the ownership of data quality has never truly found a home and therefore isn’t prioritized, budgeted or resourced. In this absence, business and operating units rely on or default to technology to ‘fix it’ or at least control it. Data Management Offices try to govern it without true operating partners to dedicate time, budget and resources. Operations teams live with the fallout, while it goes unacknowledged that ‘paybacks’ are not always in the form of P&L impact in the early stages of creating results: ‘…if all you have is a hammer (financial results), every problem looks like a nail….’ In this case, the problems are many, and few of them are nails.
And now, with the rise of analytics and AI, the cost of neglecting that foundation is becoming impossible to ignore when data quality isn’t kept close to the center of an organization’s deliverables.
The Foundation Beneath Every Decision
In one major global financial institution, C&A Consulting found that 30% of operating costs were tied to the processing and resolution of exceptions. Hundreds of millions of dollars were bleeding out of this enterprise through gaps and quality issues: mistakes, rework, and manual corrections caused by inconsistent or incomplete data, or by processes that allowed incorrect outcomes. Even getting to this analytical measurement took a completely new way of looking at organizational performance; there was no prior sustainable framework for measuring it over time until a new approach exposed the fragility of the organization’s historical state.
That ‘cost of exception’ factor (exposed largely as data issues) is not a rounding error; it’s a drag on profitability and productivity that compounds year after year, draining the firm’s earnings when the alternative is to fix the material causes of errors, recapture the lost dollars, and redeploy them into investments that strengthen the firm. Every reconciliation cycle, every duplicate entry, every time someone manually “fixed” a record, value was leaking from the organization.
When firms deploy AI on top of an error-prone environment, they risk amplifying the problem, because the ‘margin for error’ in a model’s assumptions and results can reach the point where those results are no longer ‘trustworthy’.
Machine learning models can’t always readily distinguish good data from bad. Trying to teach a model to screen out ‘false positive’ data points has its own risks. Will the model discard true outliers that merely look like bad data, and therefore misinterpret the ‘good’ data that remains? Isn’t a model only as good as the ‘teacher’ it had, and are we vulnerable to ‘rationalizing’ the model to fit the results when ‘dirty’ data introduces too much distortion?
The cost of trying to guess which data is bad (and of having confidence in the ‘guess’) has its limits. It’s probably cheaper just to correct the bad data and have near-100% confidence in the model.
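A minimal sketch of that trade-off, assuming a simple numeric series and a median/MAD outlier screen (both chosen only for illustration): the screen removes the corrupted records, but it cannot tell them apart from genuine extreme events, so those get discarded too, whereas correcting the known-bad records at the source keeps the full signal.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily P&L series: mostly routine days plus a few genuine extreme events
# that a risk model should learn from. All values are illustrative only.
clean = np.concatenate([rng.normal(0, 1, 500), [8.0, -7.5, 9.2]])

# "Dirty" copy: a handful of records corrupted by a fat-finger defect (values scaled 100x).
dirty = clean.copy()
bad_idx = [10, 20, 30]
dirty[bad_idx] *= 100

# Option A: guess at the bad data with a robust outlier screen (median / MAD rule).
# It drops the corrupted rows, but it cannot distinguish them from the genuine
# extreme days, so those real events are screened out as well.
med = np.median(dirty)
mad = np.median(np.abs(dirty - med))
robust_z = 0.6745 * np.abs(dirty - med) / mad
filtered = dirty[robust_z < 3.5]

# Option B: correct the known-bad records at the source and keep every real observation.
corrected = dirty.copy()
corrected[bad_idx] = clean[bad_idx]

print("largest move the model would ever see:")
print(f"  true data      {np.abs(clean).max():.1f}")      # the genuine extreme day
print(f"  after filter   {np.abs(filtered).max():.1f}")   # real tail events were discarded
print(f"  after the fix  {np.abs(corrected).max():.1f}")  # tail events preserved
```

The filtered series never shows the real tail events, so a model trained on it understates exactly the risks it was meant to capture; the corrected series matches the true data.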
Why So Many Firms Still Struggle
Despite the sophistication of the financial industry, data quality often sits in a gray zone between technology and operations, truly owned by neither when the front-end business isn’t actively part of the accountability structure.
C&A has seen the same pattern repeated across banks, broker-dealers, investment managers, and many third-party recordkeepers:
- Data programs are launched but not sustained.
- Governance councils are formed but lack teeth.
- Metrics exist but aren’t tied to accountability.
- Data stewards and analysts are siloed while business leaders chase near-term results.
- Advanced automation projects announce success, but widespread adoption of the underlying practices stays fragmented.
The irony is that data quality failures are rarely visible until they become expensive: regulatory breaches, customer dissatisfaction, or failed automation initiatives. Because the benefits of data quality are incremental and continuous, they rarely make headlines. But they make a measurable difference over time. By way of analogy, if you are off by a couple of compass degrees at the start of a sailing trip across a large ocean, you can wind up hundreds of miles from your desired destination at the end of your journey.
The Hidden Obstacles
The truth is most data quality initiatives don’t fail because of technology; they fail because of organizational gravity.
At C&A, we’ve seen it firsthand:
- No clear authority. Data spans multiple departments, but few leaders have the mandate, resources or political cover to enforce enterprise-wide standards. Projects compete to deliver results, and, unfortunately, data quality regularly gets crowded out because it is complicated to diagnose, permanently remediate and achieve.
- Misaligned incentives. Bonuses and KPIs often reward growth, not governance. There’s little upside for “unsung heroes” who prevent errors. Data processes also often span different silos; cross-enterprise coordination is not always easy, and maintaining sponsorship is hard when competing priorities are in flux.
- Fragmented funding. Data programs are funded like IT projects rather than core business initiatives, where payback horizons and deficit funding are allowed for longer-term results. When budgets tighten, they’re first on the chopping block because their results often aren’t visible in the short run. How can one prove the value of, or return on, “improved data quality and governance”?
- The complexity tax. Legacy systems and M&A integrations leave firms with dozens of overlapping data sources, in some cases none of which fully align, because of unaddressed or long-deferred architectural maintenance or projects. No one is paid for the quality of their maintenance work until a problem is big enough to be visible to management. Reopening some legacy projects is often not easily definable in scope and cost; even coming up with a project path can sometimes be difficult, hence the deferment.
- Short-term bias. Executives want visible wins, but quality improvements show results over time, so momentum fades before maturity is reached. Longer-term goals aren’t necessarily popular unless very large outcomes can be achieved.
Data quality isn’t a technology problem. It’s a leadership and accountability problem hiding inside your org chart. That said, some firms do figure out this puzzle, and the stronger ones set out a plan and execute it.
These invisible barriers are exactly why data quality remains a “homeless function.” It lives everywhere but belongs nowhere until leadership reframes it as a business enabler rather than a control exercise.
Why Hasn’t This Been Solved Yet?
After decades of tools, frameworks, and investment, data quality still undermines decision-making in most organizations.
- Where does data quality ownership really live in your organization today?
- Is it treated as a business responsibility, or something technology is expected to “handle”?
- What’s the real cost of data issues that never make it into a management report?