Effective SPDM Is More Crucial Than Ever

While simulation tools have grown steadily more powerful over the last few decades, Simulation Process and Data Management (SPDM), a broader enterprise need, received little of the attention it deserves until relatively recently.

At two major NAFEMS conferences I attended this past year (the SPDM Conference in Munich in November 2018 and the World Congress in Quebec City in June 2019), global manufacturing companies showcased their own in-house custom SPDM solutions. When manufacturers see the need and invest in their own solutions, it’s time to examine why they set out on this journey and what lessons they learned that other companies can benefit from, too.

What Challenges Does SPDM Address?
Every major manufacturing company that has made extensive use of simulation regularly employs dozens to hundreds of different tools, many developed in-house over the years to fill commercial gaps. Each tool has its own language and file formats, resulting in a veritable “Simulation Tower of Babel”. Among vendors, turf wars have resulted in closed formats and lack of general access to APIs, making the integration of this data even more difficult.

A wide variety of simulation point solutions is necessary and even beneficial. Different options can accurately simulate different, and increasingly complex, phenomena. These offer OEMs better and better solutions to match their experts’ preferred approaches and to fit their unique, proprietary in-house processes. But this heterogeneity has the unintended side effect of encouraging, and even requiring, siloed organizational structures. Organizational silos isolate the simulation experts, their difficult-to-master point solutions, and the data they generate from the engineering teams designing and developing products. Silos necessitate manual handoffs between different simulation teams and between simulation and engineering teams.

Manual handoffs of data are, by definition, not configuration-managed. They can result in incorrect inputs, simulation models that only experts can use, and simulation outputs (the results and analysis-driven insights) that are not available to the enterprise in a timely or effective manner. This combination of expert-driven point tools and organizational silos thwarts the effective management of simulation data and processes. Simulation-driven insights are unable to span the enterprise, where they could expand and scale the value of simulation processes to more teams and lifecycle phases.

For simulation to be an effective enterprise tool, something fundamental needs to change.

Why Is SPDM More Crucial Today?
What is it about today’s enterprise goals that makes simulation and SPDM even more crucial than they have been in the past? The need for simulation has increased exponentially with innovative new enterprise initiatives. One, the Digital Twin, can leverage simulation for predictive maintenance and real-time design improvements like over-the-air software updates and in-service feature upgrades. Another initiative, additive manufacturing, leverages simulation to assess the quality of the final product and to optimize the process by virtually testing that particular equipment settings and material properties will produce effective results. Plus, the push toward autonomous and connected vehicles sees simulation increasingly used to virtually test systems that must integrate software, mechanical, and electrical/electronic components against extremely complex inputs such as those human drivers experience and respond to every day.

With this exponential increase in data, complexity, and use cases for simulation, the current methods – manual handoffs of data, unmanaged change and product versions, rework, parallel / serialized work, and shared file folders full of disconnected inputs and results – are infeasible: they will never achieve the innovation needed to realize tomorrow’s initiatives. What’s needed is traceable simulation in the digital thread of product information: access to the configuration-managed simulation results, and their inputs, that were used to support key product decisions. With better connections between simulation inputs, process steps, and results, the next stage – robust, “lights-out” automation – is possible. Humans or automated processes can execute common simulation processes across a broad range of product designs: set a template up once, and reuse it multiple times across a product line, or to evaluate a series of changes to a single product design. Automating the way companies run simulations, and then store, archive, retrieve, search, and compare across the data generated from these efforts, will be vital to scaling the value of simulation to more and more teams and strategic initiatives across the product’s lifecycle. All of this underscores the need for effective SPDM.
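The "set a template up once, and reuse it" idea above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the class names, the toy stress-check "solver," and all numbers are invented for illustration and do not represent any particular SPDM product. The point is that each run records its exact inputs alongside its outputs, so every result stays traceable to the design revision it was computed from.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class DesignVariant:
    """A configuration-managed snapshot of the design being analyzed."""
    part_number: str
    revision: str
    parameters: dict

@dataclass
class SimulationTemplate:
    """A reusable simulation process: set it up once, run it per variant."""
    name: str
    solver: Callable[[dict], dict]  # stand-in for invoking a real solver

    def run(self, variant: DesignVariant) -> dict:
        results = self.solver(variant.parameters)
        # Record the exact inputs with the outputs so the run is traceable.
        return {
            "template": self.name,
            "part": variant.part_number,
            "revision": variant.revision,
            "inputs": dict(variant.parameters),
            "results": results,
        }

# Toy "solver": a static stress check on a bracket cross-section.
def stress_check(params: dict) -> dict:
    stress = params["load_n"] / params["area_mm2"]  # MPa
    return {"stress_mpa": stress, "ok": stress < params["allowable_mpa"]}

template = SimulationTemplate("bracket-stress-check", stress_check)
variants = [
    DesignVariant("BRKT-100", "A", {"load_n": 5000, "area_mm2": 50, "allowable_mpa": 120}),
    DesignVariant("BRKT-100", "B", {"load_n": 5000, "area_mm2": 40, "allowable_mpa": 120}),
]
# One template, many design revisions: the loop IS the reuse.
records = [template.run(v) for v in variants]
```

In a real SPDM deployment the solver call would launch an actual analysis tool and the records would land in a managed repository rather than a Python list, but the pattern of pairing every result with its versioned inputs is the same.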

What Is the Commercial SPDM Landscape?
Given these challenges, it is hardly surprising that commercial attempts at creating SPDM environments have fallen short, and that large global organizations are creating their own internal solutions in response. Commercial environments fall into two broad categories:
  1. Vendor-Specific SPDM Environments within PLM: Vendors offering an SPDM solution too often “lock it away” to interoperate only with a specific set of CAD or PLM tools, lacking the openness and integrations necessary to support the heterogeneous array of tools in use. “You can have any color so long as it’s black” is not a realistic approach in a landscape where experts rely on dozens or even hundreds of simulation tools, commercial offerings from a huge variety of vendors alongside in-house tools, to develop proprietary, and prosperous, IP.

  2. Standalone, Siloed SPDM Environments: Complex to use and requiring in-depth expertise themselves, SPDM tools that may be open to other vendors’ offerings often have no or limited PLM connections, meaning there is no way to connect to a configuration-managed version of the part information being analyzed. This creates a “cul-de-sac” in the product development process that breaks traceability rather than supporting it, and it adds yet another silo to an already siloed landscape, exacerbating the problems of manual handoffs and their likelihood of harming accuracy and efficiency.

Limiting the authoring and analysis tools that experts can use, requiring a “rip and replace” of existing systems including the time and interruptions that retraining can cost companies, or adding process steps that can create manual errors all run counter to the vision for SPDM tools, the efficiency and scale they can offer, and the innovation they’re promised to create. Adding more work for engineers should not be the outcome of adopting these tools.

What Is Needed for Effective SPDM?
A consistent lesson from most of the companies pioneering home-grown SPDM solutions today is that, beyond the internal cultural hurdles they face, their in-house implementations are expensive to develop, difficult to maintain, and lack key functionality for the broader enterprise. Many would prefer to leverage an existing PLM infrastructure while creating custom processes unique to their simulation needs. Ease of customization, at both the data model and API levels, is essential.

At Aras, we believe that an effective SPDM approach, enabled by PLM to effectively manage simulation data, must have the following foundational characteristics:
  • Flexible Data Model: It must support the unique definition of simulation information, connect it reliably with related data like requirements, parts, BOM, quality data, and more, and have that definition persist throughout the lifecycle so that simulation processes, data, and results can be viewed or initiated by any user from across the end-to-end product lifecycle.
     
  • Open Connector Architecture: An open, vendor-agnostic framework that accepts any CAD, CAE, or systems design and analysis tool for use in a single or multidisciplinary simulation process is key to many of the use cases that tomorrow’s technological innovations require.

  • Intelligent Simulation Automation: Multiphysics and multi-fidelity simulations traditionally use tool-chaining: manually handing off the outputs of one simulation tool to be used as the inputs for another. Automating this handoff, creating the means to iterate on various versions of the product as it changes and evolves, and accelerating the process of design changes leads to more simulation insights sooner. Connecting them to the digital thread ensures they are configuration-managed and accessible across more of the enterprise.

  • Integral to the Enterprise Digital Thread: Understanding the inputs, defining the process steps, enforcing workflow across teams, and managing the configuration of results against the original versions of the parts they relate to ensures that simulations can be easily accessed and interpreted later by stakeholders who need them. Storing this information in the digital thread in a machine-interpretable way, with well-defined, enforced, and robust semantics, ensures that it can be reused for machine learning and AI tomorrow.
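The tool-chaining described in the automation bullet above can also be sketched simply. In this hypothetical example (the step names, the toy thermal and structural "analyses," and all coefficients are invented for illustration), each step's outputs become the next step's inputs, and every handoff is captured as a record instead of living in e-mails and shared folders:

```python
from typing import Callable

Step = Callable[[dict], dict]

def run_chain(steps: list[tuple[str, Step]], inputs: dict) -> list[dict]:
    """Run a multiphysics tool chain, recording every handoff.

    Each step receives the accumulated data, and its outputs are merged
    back in so downstream steps can consume upstream results.
    """
    history, data = [], dict(inputs)
    for name, step in steps:
        outputs = step(data)
        # Capture the handoff: exactly what went in and what came out.
        history.append({"step": name, "inputs": dict(data), "outputs": outputs})
        data.update(outputs)  # downstream tools now see upstream results
    return history

# Toy stand-ins for two chained analyses.
def thermal(d: dict) -> dict:
    # Temperature rise proportional to dissipated power (invented coefficient).
    return {"temp_c": d["ambient_c"] + d["power_w"] * 0.5}

def structural(d: dict) -> dict:
    # Thermal expansion driven by the temperature the thermal step computed.
    return {"expansion_mm": (d["temp_c"] - 20.0) * 0.01}

history = run_chain(
    [("thermal", thermal), ("structural", structural)],
    {"ambient_c": 20.0, "power_w": 40.0},
)
```

Because the chain is data-driven, swapping in a different solver or inserting an extra step means editing the list, not re-plumbing the process, and the `history` records are exactly the kind of traceable artifacts that belong in the digital thread.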

In future blogs, we will delve into each of these topics in more detail. For more information on the role of simulation to support product innovation, read our eBook: A Guide to Simulation for Innovation.

Anonymous