Earlier this year, McKinsey released a report called The data-driven enterprise of 2025, which illustrated the journey organisations must take to reach the ideal of making intelligent, informed decisions based on facts. It’s a great vision, and for some businesses probably achievable, but the reality could be very different if organisations don’t get a grip on data quality. If businesses persist with poor-quality data, decision-making will at best revert to old-school tactics. So much for the data-driven enterprise.
It is reminiscent of what CB Insights co-founder Anand Sanwal said a few years ago about decision-making, the rise of data analytics and the need for business leaders to find a sweet spot between data and human experience: “We joke with our clients that too often, these big strategic decisions rely on the three Gs – Google searches, Guys with MBAs, and Gut instinct.” Of course, we’ve moved on since then, haven’t we? Haven’t we?
Not quite. As some recent research by enterprise intelligence firm Quantexa revealed, 95% of European organisations are “crippled by the data decision gap”, where “inaccurate and incomplete datasets” are undermining organisations’ ability to make accurate and trusted decisions. Also, research by marketing analytics platform Adverity found that 63% of chief marketing officers (CMOs) make decisions based on data, but 41% of marketing data analysts are “struggling to trust their data”.
The Adverity report suggests there is misplaced optimism among marketing departments: two-thirds identify as “analytically mature”, yet 68% still rely on spreadsheet-based data reports. Manual data wrangling is a challenge many organisations are contending with. As the report says: “The number of manual processes the dataset goes through must be called into question. If data is being transferred from Facebook and LinkedIn to Excel and then into PowerPoint, this creates more cracks for human error to seep in.”
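To give a flavour of what taking those manual steps out of spreadsheets can look like, here is a minimal, hypothetical sketch (the column names and figures are invented) that consolidates two platform exports in code rather than by copy-pasting between Excel and PowerPoint:

```python
# Illustrative sketch only: consolidating per-platform exports in code rather than
# by hand in spreadsheets. Column names and sample rows are hypothetical.
import pandas as pd

# Stand-ins for CSV exports from two ad platforms (in practice: pd.read_csv(...)).
facebook = pd.DataFrame({
    "date": ["2022-05-01", "2022-05-02"],
    "spend": [120.0, 95.5],
    "clicks": [340, 280],
})
linkedin = pd.DataFrame({
    "Date": ["2022-05-01", "2022-05-02"],
    "Cost": [210.0, 185.0],
    "Clicks": [150, 170],
})

# Normalise column names once, in code, instead of manually in Excel.
linkedin = linkedin.rename(columns={"Date": "date", "Cost": "spend", "Clicks": "clicks"})

combined = pd.concat(
    [facebook.assign(source="facebook"), linkedin.assign(source="linkedin")],
    ignore_index=True,
)
print(combined.groupby("source")[["spend", "clicks"]].sum())
```

Each manual hand-off the quote describes is a chance for a copy-paste slip; a scripted step like this is at least repeatable and auditable.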
Chris Hyde, global head of data solutions at data management firm Validity, cites a customer example in which high volumes of duplicate records were creating data distrust and extra workload. Akamai Technologies was manually verifying data, making updates and merging duplicates on a daily basis, he says, which ultimately prompted an overhaul of its customer relationship management (CRM) system to give staff easier access to data management tools.
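As a rough illustration of that kind of clean-up (not Validity’s or Akamai’s actual process, and with invented field names and matching rules), a simple deduplication pass over contact records might look like this:

```python
# Minimal deduplication sketch: keep the most recently updated record per contact.
# Field names, the match key and the sample data are assumptions for illustration.
import pandas as pd

contacts = pd.DataFrame({
    "email": ["a.smith@example.com", "A.Smith@example.com", "b.jones@example.com"],
    "name": ["Alice Smith", "Alice Smith", "Bob Jones"],
    "last_updated": pd.to_datetime(["2021-06-01", "2022-03-15", "2022-01-10"]),
})

# Normalise the match key, then keep the freshest record for each contact.
contacts["email_key"] = contacts["email"].str.lower().str.strip()
deduped = (
    contacts.sort_values("last_updated", ascending=False)
            .drop_duplicates(subset="email_key", keep="first")
            .drop(columns="email_key")
)
print(deduped)
```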
This sort of thing is only made worse by organisations amassing larger data volumes from internal and external sources, but not joining the data points. As Vishal Marria, founder and CEO of Quantexa, points out, the problem is exacerbated when business leaders look to increase growth through mergers and acquisitions, for example, and in the process inherit additional data silos on top of an already fragmented data landscape.
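To make “joining the data points” concrete, here is a crude, hypothetical sketch of linking two sources that lack a shared identifier by matching on a normalised company name. It is far simpler than the entity resolution a platform such as Quantexa performs, and the records and fields are invented:

```python
# Illustrative record-linking sketch: match two silos on a normalised name key.
# All names, fields and balances are fabricated for the example.
import pandas as pd

crm = pd.DataFrame({
    "company": ["Acme Ltd.", "Globex Corp"],
    "account_owner": ["J. Patel", "M. Okafor"],
})
billing = pd.DataFrame({
    "customer_name": ["ACME LTD", "Globex Corporation"],
    "outstanding_balance": [1250.00, 0.00],
})

def normalise(name: str) -> str:
    # Crude matching key: lower-case, drop punctuation and common company suffixes.
    key = name.lower().replace(".", "").replace(",", "").strip()
    for suffix in (" corporation", " corp", " ltd", " inc"):
        if key.endswith(suffix):
            key = key[: -len(suffix)]
    return key.strip()

crm["match_key"] = crm["company"].map(normalise)
billing["match_key"] = billing["customer_name"].map(normalise)

# A single joined view instead of two disconnected silos.
print(crm.merge(billing, on="match_key", how="left"))
```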
“Data is only useful if it is managed in the right way, and legacy technologies – which are typically rules-based and reliant on batch processing – are falling short,” says Marria.
The Covid-19 pandemic has only made things worse. Validity’s Hyde cites a statistic from his firm’s The state of CRM data health in 2022 ebook, in which 79% of respondents agreed that data decay has increased as a result of the pandemic. He says much of this is down to employees moving into new roles, with their phone numbers, addresses and job titles changing as a result. And with more remote working, office locations and addresses are becoming increasingly irrelevant.
All of this means that lead and contact information in the CRM is rapidly going stale, and team members who stay behind face growing workloads as their co-workers leave.
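One simple, hypothetical way to surface that decay (the 12-month threshold and field names are assumptions, not anything Validity prescribes) is to flag records that have not been verified since a cut-off date:

```python
# Sketch of a basic data-decay check: flag contacts not verified in the last year.
# The threshold, fields and sample records are illustrative assumptions.
import pandas as pd

contacts = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones", "Chen Wei"],
    "job_title": ["Marketing Manager", "IT Director", "Analyst"],
    "last_verified": pd.to_datetime(["2022-04-01", "2020-11-20", "2021-02-05"]),
})

cutoff = pd.Timestamp.now() - pd.DateOffset(months=12)
contacts["needs_reverification"] = contacts["last_verified"] < cutoff
print(contacts[contacts["needs_reverification"]])
```

A report like this at least tells a shrinking team where to spend its limited verification effort first.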
For many organisations, this inability to cope with change is symptomatic of poor data management plans and processes. Organisations cannot make decisions that are truly data-driven without a solid, leadership-backed approach to data management.