lower-cost rivet makes technical and financial sense in several product lines to counter a competitor’s similar move.
After spending several hours trying to get the data himself, Mike realizes that this will involve some major analytical homework. He wants to do stellar work, but it will require him to find, understand, and analyze heaps of data dispersed all over the company.
So he talks to IT's analytics SWAT team, which logs his request behind a myriad of similar requests from other parts of the business. Wait time: four weeks – if he is lucky. Level of accuracy: adequate – if even that is possible.
This and similar requests for relatively basic business insight occur in most manufacturing organizations all over the world every day. It is the reason why IT, and even data scientists, are now considered a bottleneck for the quick, accurate, and increasingly transformative initiatives that determine the commercial success or failure of a new product, service, piece of equipment, or market entry.
In light of next week's Digital Transformation Conference, let's analyze how we got here – and what can be done to improve the status quo and this perception of IT.
The Beauty of Walking Before Running
The fact is that manufacturing organizations are a bit late to enterprise self-service analytics, or should I say self-service data management, compared to more centrally managed or highly regulated organizations like financial services or healthcare companies.
Such organizations have already been dabbling in big data, cloud, and machine learning with varying degrees of success for a decade. Many deployed self-service analytics environments years ago. Now they are experiencing the “trough of disillusionment,” which sets them up to finally realize the fruits of artificial intelligence (AI) adoption. They have learned that they need to go back to basics around data quality, governance, cataloging, and cloud-based data integration to facilitate “data democratization” before they can take full advantage of more advanced technologies.
Manufacturers can avoid the mistakes and costly lessons of other industries by doing it right the first time. However, their traditional plant-centric approach and tactile view of innovation permeate – and potentially limit – IT-related innovation.
A plant operations manager at a large consumer products firm once told me that his organization thinks of innovation in terms of slapping another actuator on a product. To expand revenue or reduce cost, decision makers immediately look at shutting down a production line or adding another plant. They would embark on these investments before ever consulting their existing data to find new ways of doing things.
Layer this perspective on top of “cheap money”-driven M&A and you get the piecemeal acquisition of ever more bolt-on IT infrastructure, along with the data it produces.
Suddenly, your data pipe does not get bigger; it gets increasingly narrow. It is akin to squeezing a melon through a garden hose.
The Pitfalls of Running Your Company on Excel
The challenge for process engineers, as well as supply chain, sales, marketing, and financial analysts, is how to leverage the growing flood of data themselves without going through the IT bottleneck every time.
Each of these stakeholders should be empowered to collect, massage, and use data from MES, historian, supply chain, CRM, and ERP systems in a scalable and repeatable fashion. They should also be able to repurpose other data analysts’ scrubbed data sets and final analyses. Furthermore, they should not have to hunt for updates when operations change, such as a new supplier part number or a decrease in procured quantities on a purchase order. Instead, all interested parties should be able to subscribe to alerts that automatically flag such changes and update data sets accordingly – a pattern sketched below.
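To make the “subscribe to alerts” idea concrete, here is a minimal sketch in Python of how an analyst-facing change alert could work. The file names, column names (po_line_id, supplier_part_number, procured_quantity), and the notify() helper are hypothetical placeholders, not a specific vendor's API; a real implementation would pull snapshots directly from the ERP system and route alerts through the organization's own messaging and data-catalog tooling.

```python
# Minimal sketch of a "subscribe to purchase-order changes" pattern.
# File names, column names, and notify() are hypothetical placeholders.

import pandas as pd

# Fields an analyst has chosen to watch for changes.
WATCHED_COLUMNS = ["supplier_part_number", "procured_quantity"]


def detect_changes(previous_csv: str, current_csv: str) -> pd.DataFrame:
    """Compare two snapshots of purchase-order data, keyed by PO line,
    and return only the rows where a watched column changed."""
    prev = pd.read_csv(previous_csv).set_index("po_line_id")
    curr = pd.read_csv(current_csv).set_index("po_line_id")

    # Align the two snapshots on the PO lines they have in common.
    joined = prev[WATCHED_COLUMNS].join(
        curr[WATCHED_COLUMNS], lsuffix="_old", rsuffix="_new", how="inner"
    )

    # Keep rows where any watched field differs between snapshots.
    changed = joined[
        (joined["supplier_part_number_old"] != joined["supplier_part_number_new"])
        | (joined["procured_quantity_old"] != joined["procured_quantity_new"])
    ]
    return changed


def notify(subscribers: list[str], changes: pd.DataFrame) -> None:
    """Placeholder alert: in practice this would post to email, chat, etc."""
    if changes.empty:
        return
    for address in subscribers:
        print(f"Alert for {address}: {len(changes)} purchase-order line(s) changed")
        print(changes.to_string())


if __name__ == "__main__":
    changes = detect_changes("po_snapshot_yesterday.csv", "po_snapshot_today.csv")
    notify(["mike@example.com"], changes)
```

In practice, a governed data catalog or integration platform would provide this subscription capability out of the box; the point of the sketch is that the analyst, not IT, defines what counts as a meaningful change.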
Today, most manufacturers struggle to deliver this level of sophistication on an enterprise-wide, strategic scale.