Data—The Dirty Little Secret

The Dirty Little Secret is that PLM systems and others are only as good as the data in them—garbage in, garbage out (GIGO). In some cases, a poorly planned PLM system can propagate data quality issues across the supply chain. Many of us have seen the Gartner People, Processes, and Technology triangle, but data (a subset of all three) is equally important. Gartner’s Master Data Management (MDM) analysts agree that data is the fundamental underpinning, hence the revision of the model to include it.

Hidden Data Factory

In the Harvard Business Review article Bad Data Costs the US 3 Trillion per Year, IBM estimated bad data to be a $3 trillion-per-year problem in 2016.

As referenced in the article, this problem is often called the hidden data factory—the resources wasted trying to get products shipped, reworked BOMs, incorrect data, missing data, and manual hand-offs that impact the entire value chain and cut directly into a company’s profit margin.

As an example, if you don’t have a rigorous new part creation process, you can produce duplicate part numbers for parts with the same form, fit, and function (FFF), which can result in many cost of quality (COQ) issues and millions of dollars wasted in the supply chain. Additionally, thousands of hours per year are routinely wasted manually correcting product data due to a poor data strategy and a PLM system that doesn’t act as a Product Innovation Platform connecting quality data across a Digital Thread.
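
To make the duplicate problem concrete, here’s a minimal sketch in Python of what a form-fit-function duplicate check might look like during new part creation. The attribute names and normalization rules are illustrative assumptions, not any particular PLM system’s API or matching logic.

```python
from collections import defaultdict

def fff_key(part: dict) -> tuple:
    """Build a normalized form-fit-function key from illustrative attributes.

    Real FFF matching is far richer (classification, tolerances, units);
    this simply normalizes a few hypothetical attributes for comparison.
    """
    return (
        part["category"].strip().lower(),
        part["material"].strip().lower(),
        # Round dimensions so trivial formatting differences don't hide duplicates.
        round(float(part["length_mm"]), 2),
        round(float(part["diameter_mm"]), 2),
    )

def find_fff_duplicates(parts: list[dict]) -> list[list[str]]:
    """Group part numbers that share the same normalized FFF key."""
    groups = defaultdict(list)
    for part in parts:
        groups[fff_key(part)].append(part["part_number"])
    return [nums for nums in groups.values() if len(nums) > 1]

parts = [
    {"part_number": "PN-1001", "category": "Screw", "material": "Steel",
     "length_mm": "12.0", "diameter_mm": "4"},
    {"part_number": "PN-2044", "category": "screw ", "material": "STEEL",
     "length_mm": "12", "diameter_mm": "4.00"},  # same FFF, different number
]
print(find_fff_duplicates(parts))  # [['PN-1001', 'PN-2044']]
```

Without a check like this gating part creation, each of those entries carries its own inventory, sourcing, and quality history through the supply chain.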

If you consider two Item MDM objects, “parts” and “raw materials,” they represent approximately 50% of the revenue of a manufacturing company. Despite such a large percentage, there’s rarely an emphasis on the quality of the data, and it’s often ignored as an area of cost reduction. The reason it’s ignored, I believe, is twofold. First, a part typically travels across different vertical functions (data silos). This is challenging because one group isn’t aware of the unintended consequences that its bad or missing data imposes on another. Second, we’re down at the sub-atomic level of a part as it moves from function to function, and if you’re lucky enough to have people with the expertise to understand this, they’re not C-level executives. Isn’t this why we hire data scientists and invest in Big Data, Machine Learning, and AI to make sense of it?

A recent study of industry data scientists shows that 79% of their time is spent collecting, organizing, and cleaning data—a waste of their expertise. Data Science, Big Data, Machine Learning, and AI are all factors in making the best use of the data, but if you can’t follow best practices around Item MDM, you’ll end up in a defensive data-wrangling exercise that results in a data strategy looking more like a data landfill than a competitive company out in front of its data.

Data—The Oil of the Digital Era

Good data quality is the fuel for connectivity and innovation. The Economist called data “the oil of the digital era.” We’re now witnessing a fundamental shift in business toward providing capacity as a service. Future-ready enterprises must exploit their data to gain a competitive advantage, or potentially fall victim to those who can.

Today, enterprises require a Digital Thread, a platform that allows for a connected flow of an asset’s product data throughout its lifecycle, across traditionally vertical silos. The Digital Thread raises the bar for delivering the “right data to the right place at the right time.” The future is all about customer-driven enterprises that exploit data to the fullest. Companies like Amazon, Apple, Microsoft, and many others use and re-use data in extraordinary ways that are changing the way business is done.

Process and technology, while critical to success, are only as good as the data they use. A company that has strong data and a fluid Digital Thread can use Analytics, Machine Learning, and AI to not only be more efficient, but free people up to be more creative. Everything is enhanced with good data and diluted with bad. Good data is fundamental to the success of any enterprise and will outlive the people, processes, and technology, as well as data standards and its own product. As a result, it’s critical that any technology be open and be able to handle multiple data models.

The technology also needs to be extremely scalable. The amount of data is exploding. IDC predicts the global spend will reach $1.2 trillion by 2020—a 15.5% compound annual growth rate—and that the “digital universe” (the data created and copied every year) will reach 180 zettabytes (180 followed by 21 zeros) in 2025. This necessitates gaining control of your product data throughout the lifecycle. To do that, you must control the product data flowing across the Digital Thread.

Item MDM Data

Item MDM data, also referred to as “Material” or “Product” data, is distinct from other master data domains such as customer, supplier, prospect, citizen, site, or chart of accounts. Item MDM is the critical business data used across the enterprise that makes up a product (e.g., Parts, Materials, Tools, etc.). The Item MDM object “Part” tracks attributes critical to the business, such as part number, name, long description, unit of measure, etc. Every MDM object has a Source of Truth (SOT)—where it originates—and spans many systems where data is added and updated, each known as a System of Record (SOR). Together, a SOT and multiple connected SORs make up a Digital Thread.
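
As a rough illustration of that structure, here’s a minimal Python sketch of a “Part” master record together with its SOT and the SORs it flows through. The attribute set and system names are hypothetical, chosen only to show the shape of the data.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """A minimal Item MDM 'Part' record (illustrative attributes only)."""
    part_number: str
    name: str
    long_description: str
    unit_of_measure: str
    source_of_truth: str = "PLM"            # where the part originates (SOT)
    systems_of_record: list[str] = field(   # where data is added/updated downstream (SORs)
        default_factory=lambda: ["ERP", "SCM"]
    )

bolt = Part(
    part_number="PN-1001",
    name="Hex Bolt M4x12",
    long_description="Hex head bolt, M4 thread, 12 mm, zinc-plated steel",
    unit_of_measure="EA",
)
# The SOT plus its connected SORs together form the part's Digital Thread.
print(bolt.source_of_truth, "->", " -> ".join(bolt.systems_of_record))
```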

Let’s assume, using a “New Part Creation” process, that we create a new part that’s not an FFF duplicate in a PLM—its SOT. For companies with millions of parts and a high percentage of duplicates, Aras offers a Component Engineering module, in partnership with IHS, that extends part searches to the IHS CAPS database, which provides data on 430 million components from close to 4,000 different manufacturers in the Cloud. Aras identifies which parts are consumed in existing designs in-house and/or come from preferred manufacturers, thus facilitating part reuse, which is explained here: Aras Component Engineering CIMdata Whitepaper

As a part moves along its journey across different Systems of Record (SORs) like PLM, ERP, and SCM, attribute information is added or manipulated by different functional Data Owners. PLM might send 10-30 attributes to ERP, where another 150+ attributes are added and some are manipulated, and so on. The important thing to control is the Data Owner for each Item MDM attribute in each area. As an example, an ERP Planner cannot change the Part Number attribute because it is owned by Engineering; likewise, the Engineer in PLM can see Actual Cost but cannot change it, because it is owned by Finance in ERP. When this works well, there are Functional Process Owners and Governance overseeing the process.
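
Here’s a minimal sketch of that ownership rule, assuming a simple attribute-to-owner map; the attributes, roles, and functions are hypothetical, not a real PLM or ERP interface.

```python
# Map each Item MDM attribute to the function that owns it (illustrative).
ATTRIBUTE_OWNER = {
    "part_number": "Engineering",   # mastered in PLM
    "long_description": "Engineering",
    "actual_cost": "Finance",       # mastered in ERP
    "planner_code": "Planning",
}

def update_attribute(record: dict, attribute: str, value, role: str) -> None:
    """Apply an update only if the caller's role owns the attribute."""
    owner = ATTRIBUTE_OWNER.get(attribute)
    if owner != role:
        raise PermissionError(
            f"{role} cannot change '{attribute}'; it is owned by {owner}."
        )
    record[attribute] = value

part = {"part_number": "PN-1001", "actual_cost": 1.25}
update_attribute(part, "actual_cost", 1.31, role="Finance")  # allowed
try:
    update_attribute(part, "part_number", "PN-9999", role="Planning")
except PermissionError as err:
    print(err)  # Planning cannot change 'part_number'; it is owned by Engineering.
```

In practice a guard like this would live in the integration layer or the MDM hub itself, enforced by governance rather than application code, but the principle is the same: one owner per attribute, everywhere it travels.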

You might be thinking that because you’ve just drained every last dime into PLM and ERP initiatives, you’re in good shape. I doubt that, as this is a problem that cuts across the Digital Thread and I see few companies with a continuous Digital Thread.

If you can control the SOT and the Digital Thread (a journey through many SORs), and you can maintain an accurate configuration of your asset in the field, then you can use a Digital Twin (a virtual representation of your physical asset) to take advantage not just of the IoT data streaming in, but of everything it tells you about how the asset works or can be improved for your customers, giving you a competitive advantage.

Good Item MDM data quality doesn’t replace the work done by any functional group, but it allows everyone to work better: reduced rework, lower cost of quality, fewer recalls, the ability to make data-driven decisions, fewer lost opportunities, a better view of your costs, a better customer experience, and the knowledge to innovate faster.

It’s easier said than done, but as you continue on your digital transformation journey, I’d urge you to make data a key component of an open, flexible Product Innovation Platform. I welcome any questions or thoughts.