Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Manufacturers use a variety of tools and systems every day to manage their process from start to finish. It is critical that these systems provide a “single pane of glass” and a “single version of truth” along the way: data can be viewed from any device, location, or system and remains synchronized across all platforms. Manufacturing Operations Management systems make this possible.

Real-time manufacturing operations management and industrial analytics tools

Check out PARCview

What is Manufacturing Operations Management?

Manufacturing Operations Management (MOM) is a form of LEAN manufacturing where a collection of systems is used to manage a process from start to finish. The key to MOM is ensuring data is consistent across all systems being used, from scheduling and production to shipment and delivery.

MOM includes software tools designed for the management of people, business processes, technology, and capital assets to meet customer demand while creating shareholder value. Tying in LEAN manufacturing, processes must be efficiently performed and resources productively managed. These are the prerequisites for successful operations management.

Key Applications of Manufacturing Operations Management


Supply chain & resource management

MOM systems include tools for planning, procuring, and receiving raw materials and components: obtaining, storing, and moving the necessary materials and components in a timely manner and at suitable quality to support efficient production. This is especially critical in times of supply chain disruption.

To deal with today’s dynamic business environment, with challenges ranging from pandemics and shutdowns to geopolitical conflicts and supply chain disruptions, organizations need to be sustainable and operationally resilient, conform to ESG goals, deploy the latest cybersecurity tools, and connect their workforces from any location.

Process & production management

Once all the resources are gathered, MOM tools need to be established for implementing product designs to specification, developing the formulations or recipes for manufacturing the desired products, and manufacturing products that conform to specifications and comply with regulations.

Organizations must monitor and adjust their processes quickly and automatically so they can efficiently evaluate the situation when the inevitable glitch occurs. This is a prime opportunity for digital transformation through MOM systems.

Distribution & customer satisfaction management

The final stage of MOM relates to the distribution to the customers, particularly as it relates to sequencing and in-house logistics, as well as supporting products through their end-of-life cycles.

Organizations must react in real-time to changing market conditions and customer expectations. They will have to innovate with new business processes that reach throughout the organization, into the design and supply chain.

Driving innovation & transformation

Successfully innovating at this level involves managing people, processes, systems, and information. When disruptive technologies are in the mix, the first challenge is often tied up in the interplay of people and technology.

Only when the people involved begin to understand what the new MOM technologies are capable of and have the tools to visualize the data and real-time manufacturing analytics software to convert this data into actionable information can they begin to take steps towards achieving the innovation.

One output of manufacturing operations management systems is a production dashboard, like this one, built with dataPARC’s PARCview, which creates a shared view of current operating conditions and critical KPIs.

Manufacturing Operations Management Systems

Today’s MOM systems can play a role in achieving the next levels of operations performance because they marshal many or all of the needed services in one place and can provide a development and runtime environment for small or large applications.

Common MOM Tools

In addition to leveraging the latest AI, ML, AR/VR, APM, digital twin, edge, and Cloud technologies, MOM systems often consist of one or more of the following:

  • Manufacturing Execution Systems (MES)
  • Enterprise Asset Management (EAM)
  • Human-Machine Interface (HMI)
  • Laboratory Information Systems (LIMS)
  • Plant Asset Management (PAM)
  • Product Lifecycle Management (PLM)
  • Real-time Process Optimization (RPO)
  • Warehouse Management Systems (WMS)

MOM systems integrate with business systems, engineering systems, and maintenance systems both within and across multiple plants and enterprises.

An example of a multi-site operation with unique manufacturing applications at each site. Some tools, like dataPARC’s PARCview, enable manufacturers to integrate data across sites for more effective manufacturing operations management.

Supply Chain Management (SCM), Supplier Resource Management (SRM), and Transportation Management Systems (TMS) are commonly used to manage the supply chain.

Plant automation systems, such as Distributed Control Systems (DCS) and Programmable Logic Controllers/Programmable Automation Controllers (PLCs/PACs), are key technologies driving manufacturing production.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Data Visualization and Real-Time Analytics for MOM

Perhaps the most important manufacturing operations management tools for managing production are the data visualization and real-time manufacturing analytics software platforms, like dataPARC, which provide integrated operations intelligence and time-series data historian software.

These MOM tools focus on data connectivity, real-time plant performance, and visualization + analytics to empower plant personnel and support their decision-making process.

Benefits of MOM Visualization & Analytics Tools


Eliminate Data Silos

Most real-time manufacturing operations management analytics tools offer the ability to connect to both manufacturing and operations data. Data from traditionally isolated data silos, such as lab quality data, or ERP inventory data, can be pulled in and presented side-by-side for analysis in a single display.
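As a rough illustration (not a walk-through of any particular product), the sketch below joins a hypothetical historian export with hypothetical lab results using pandas so both appear side by side; all tag and column names are invented.

```python
import pandas as pd

# Hypothetical exports from two traditionally siloed systems:
# a process historian (hourly temperature readings) and a LIMS lab database.
process = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01 08:00", periods=6, freq="1h"),
    "reactor_temp_C": [181.2, 182.9, 184.1, 183.5, 182.0, 181.7],
})
lab = pd.DataFrame({
    "sample_time": pd.to_datetime(["2024-01-01 08:30", "2024-01-01 11:45"]),
    "viscosity_cP": [412.0, 396.0],
})

# Align each lab sample with the most recent process reading so both
# appear side by side in a single table for analysis.
combined = pd.merge_asof(
    lab.sort_values("sample_time"),
    process.sort_values("timestamp"),
    left_on="sample_time",
    right_on="timestamp",
    direction="backward",
)
print(combined[["sample_time", "viscosity_cP", "reactor_temp_C"]])
```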

Establish a single source of truth

MOM analytics tools offering visualization plus integration capabilities enable manufacturers to create a “single version of truth” which everyone from management to the plant floor can use to understand the true operating conditions at a plant.

These views often combine multiple sites and multiple data sources, and users leverage this data to gain perspective and intelligence from both structured and unstructured operational and business data.

Produce common KPI dashboards

By measuring metrics and KPIs, such as production output, yields, material costs, quality, and downtime, users at multiple levels and roles can make better decisions to help improve production efficiencies and business performance.
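For illustration only, here is a minimal sketch of how a few such KPIs might be computed from a hypothetical daily production log; the column names and figures are invented, and in practice the values would come from the historian, MES, or ERP systems feeding the dashboard.

```python
import pandas as pd

# Hypothetical daily production log.
log = pd.DataFrame({
    "good_tons":   [118.0, 122.5, 99.0],
    "total_tons":  [125.0, 126.0, 110.0],
    "downtime_hr": [1.5, 0.5, 4.0],
    "runtime_hr":  [22.5, 23.5, 20.0],
})

kpis = pd.Series({
    "production_output_tons": log["good_tons"].sum(),
    "yield_pct": 100 * log["good_tons"].sum() / log["total_tons"].sum(),
    "downtime_pct": 100 * log["downtime_hr"].sum()
                    / (log["downtime_hr"] + log["runtime_hr"]).sum(),
})
print(kpis.round(1))
```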

Real-time manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites or from multiple manufacturing process areas and display them in a common dashboard.

Manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites for real-time production monitoring

Facilitate data-driven decision-making

Without operations intelligence provided by manufacturing operations management systems, users are often unable to properly understand how their decisions affect the process. MOM analytics software can display data sourced in the business systems for direct access to cost, quality control, and inventory data to support better business decisions.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Final Thoughts

The next generation of MOM systems is here. The economics of steady-state operations have been replaced with a dynamic, volatile, disruptive economic environment in which adapting to changing supply and demand, along with issues such as pandemics, shutdowns and geo-political conflicts are the norm. Tighter production specifications, greater economic pressures, and the need to maintain supply chain visibility in real-time, be sustainable and operationally resilient, plus more stringent process safety measures, cybersecurity standards, ESG goals and environmental regulations further challenge this dynamic environment. Managing these challenges requires more agile, less hierarchical structures; highly collaborative processes; reliable instrumentation; high availability of automation assets; excellent data; efficient information and real-time decision-support systems; accurate and predictive models; and precise control. Uncertainty and risks must be well understood and well managed in all aspects of the decision-making process.

Perhaps most importantly, everyone must have a clear understanding of the business objectives and progress toward those objectives. Increasingly, effective manufacturing operations management requires real-time decisions based on a solid understanding of what is happening, and the possibilities over the entire operations cycle. Organizations pursuing Digital Transformation should consider focusing on MOM systems and not just transformative new technologies to drive operations performance to new levels. This means utilizing software tools, such as manufacturing data integration, visualization, and real-time manufacturing analytics software that gathers a user’s manufacturing data in one single pane of glass view and establishes a single source of the truth.

This article was contributed by Craig Resnick. Craig is a primary analyst at ARC Advisory Group. Craig’s focus areas include production management, OEE, HMI software, automation platforms, and embedded systems.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Minimize manufacturing gaps, such as excess operational cost or waste, by performing a gap analysis with process data. In this article we will walk through the steps of a basic manufacturing gap analysis and provide an example.

Implement real-time gap tracking with dataPARC to help you optimize & control your processes

View Process Optimization Solutions

What is a Gap Analysis for Manufacturing?

A gap analysis is the process of comparing current operating conditions against a target and determining how to bridge the difference. This is an essential part of continuous improvement and LEAN Manufacturing.

A manufacturing gap analysis can be performed on a variety of metrics, such as:

  • operational costs
  • quality
  • productivity
  • waste
  • etc.

When it comes to bridging the gap, the ideal case is to reach the target with a single, permanent change. This is not always possible; there are times when the gap is fluid and would benefit from constant monitoring and small adjustments. In these cases, operations can use a real-time gap tracking dashboard that alerts them to what is causing the gap, so the process can be brought back on track in the moment rather than the problem being discovered days, weeks, or months later.

Manufacturing analytics software like dataPARC’s PARCview offers tools to help manufacturing companies perform real-time gap tracking once the gap analysis is complete.

Who Conducts a Manufacturing Gap Analysis?

Gap analysis can be performed by anyone trying to optimize a process. As mentioned above there are a multitude of metrics that can be measured.

A process engineer might want to reduce operational cost by focusing on energy consumption, someone in the finance department may notice an increase in chemical cost every month, and a supervisor may want to reduce the time it takes to complete a task to focus on other items.

Almost every department can leverage gap tracking in one way or another.

How to Perform a Gap Analysis in Manufacturing

Like many other improvement strategies, we can use the DMAIC method (Define, Measure, Analyze, Improve, Control) to perform a gap analysis and implement a live gap tracking dashboard.

To create a gap tracking dashboard, a gap analysis needs to be completed first. In the last stage, Control, the dashboard is created, and operations can perform steps Analyze-Improve-Control in real-time.

1. Define

The first step is to define the area of focus and identify the target. A great place to start when looking for an area of focus is the company’s strategic business plan, operational plan, or yearly operational goals. Many times, these goals will already have targets in place.

2. Measure

Next, the process must be measured. Take a close look at the measurement system. Is the data reliable? Does the measurement system provide the necessary information? If so, measure the current state of the process.

If there is no current measurement system, one will need to be created. Although in-process measurements or calculations are best, manual input can also be used.

Some manufacturing analytics providers, like dataPARC, offer manual data entry tools which allow users to create custom tags for manual input. These tags can be trended and used like process tags in dashboards and displays.

3. Analyze

Take the data and compare it to the goal. How far from the target is the process? This is the gap. It may help to visualize the process gap in multiple ways such as with a histogram or trend display.

The histogram shows the overall distribution of the data, which can help narrow the focus. What does the peak look like? Is the distribution normal, skewed to one side, double-peaked, or edge-peaked?

A trend shows how the process shifts over time: are there periods of zero gap versus large gaps, for example by shift or by season?
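As a minimal sketch of this step, assuming a hypothetical CSV export of daily cost data and an illustrative target value, the gap can be viewed both ways with pandas and matplotlib:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of daily operational cost and an illustrative target.
daily = pd.read_csv("daily_operational_cost.csv", parse_dates=["date"])
TARGET = 50_000.0
daily["gap"] = daily["cost"] - TARGET

fig, (ax_hist, ax_trend) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: overall distribution of the gap (normal, skewed, double-peaked...).
ax_hist.hist(daily["gap"], bins=30)
ax_hist.axvline(0, linestyle="--")
ax_hist.set_title("Gap distribution")

# Trend: how the gap shifts over time (by shift, by season, etc.).
ax_trend.plot(daily["date"], daily["gap"])
ax_trend.axhline(0, linestyle="--")
ax_trend.set_title("Gap over time")

plt.tight_layout()
plt.show()
```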

With the measurement system in place, the gap realized, and some graphical representations of the data, it is time to brainstorm potential causes of the gap. Brainstorming is not the time to eliminate ideas; get everything written down first. There are a variety of tools that can be used to help in this process:

Fishbone Diagram

This classic tool helps determine root causes by separating the process into categories. The most common categories are People, Process/Procedure, Supplies, Equipment, Measurement, and Environment, but other categories or any combination can be used to fit the situation.

The fishbone diagram is a classic tool for performing root cause analysis.

The team can brainstorm each category and identify any causes that could play a role in the problem. Dive one step further with a 5-why analysis, a method that simply asks “why” until it cannot be answered any more to ensure the true root cause is uncovered.

Is gap analysis one of your digital transformation goals? Let our Digital Transformation Roadmap guide your way.

get the guide

SWOT Chart

This chart is made of four squares, with labeled sections: Strengths, Weaknesses, Opportunities, and Threats. This strategy is used to determine the internal and external factors that drive the effectiveness of the process. For potential root causes, focus on what appears under Weaknesses and see whether potential solutions find their way into Opportunities.

SWOT charts are another fundamental root cause analysis tool.

McKinsey 7S Framework

The McKinsey framework is made up of 7 elements, categorized as 3 “Hard,” controllable elements and 4 “Soft,” non-controllable elements. For each element, write the current and desired state. It is important that the elements are in alignment with one another; any misalignment could point to a root cause.

The McKinsey Framework.

4. Improve

Determine the best way to bridge the gap and implement the changes. A payoff matrix or efficiency impact trend can help pick the most effective, least costly options. Focus on quick wins. Items in the busy-work quadrant can be completed but are not a priority. For those in the major-projects quadrant, you must ask: is the price worth the impact? Anything that is low impact and high cost can be dropped.

A payoff matrix or efficiency impact trend can help you determine the best way to bridge gaps.

After the solutions are implemented, check the results by analyzing the data again and see if there was an improvement.

5. Control

Once the target is met it is important to keep it that way. Monthly reports can be used to keep track of the process gap and make sure it stays in the desired range.

Set up a dashboard or other visual to monitor the process in real time. By tracking the gap in real time, operations can see how changes to the process affect the bottom line immediately, rather than at the end of the month.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

An Example of Gap Analysis for Manufacturing

In this manufacturing example we are going to walk through a gap analysis to improve operational costs on a single paper machine.

Define

The company’s operational plan has a goal for monthly operational cost. To break this down into a manageable gap analysis, the focus will be looking at a single machine. This machine is not currently meeting the monthly operational cost goal on a regular basis.

Measure

Since this is an initiative from an operational plan, there is already a measurement system in place. The machine operational costs are broken down into five variables: Speed, Steam, Chemical, Furnish, and Basis Weight.

These variables are measured continuously, so data can be pulled in hourly, daily, and/or monthly averages. The variety of data views will help in the next stage. Each of these variables has a target, but some are missing upper and lower control limits.
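For illustration, assuming the continuous measurements are exported to a hypothetical CSV with one column per variable, the hourly, daily, and monthly views might be produced like this with pandas:

```python
import pandas as pd

# Hypothetical export of the continuously measured cost variables
# (speed, steam, chemical, furnish, basis weight), indexed by timestamp.
costs = pd.read_csv("machine_costs.csv", parse_dates=["timestamp"],
                    index_col="timestamp")

# Roll the continuous measurements up into the views used in the Analyze step.
hourly = costs.resample("1h").mean()
daily = costs.resample("1D").mean()
monthly = costs.resample("MS").mean()

print(daily.head())
```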

Analyze

First, the combined daily operational cost was compared against the target. There were days when the target was met, but not consistently.

Next, each of the five variables was compared with its target separately over the past several months. From this view, Chemical and Steam stood out as the two main factors driving up the operational cost. With that in mind, we moved on to the fishbone diagram and 5-why analysis.

Using the fishbone diagram we were able to determine that chemical and steam were the two main factors driving up our costs over the past several months.

Improve

From the fishbone diagram and 5-why analysis, we found that there were targets but no control limits set for the chemicals. Operators were adding the amount of chemical they felt would pass the quality tests, without trying to apply only the necessary amount.

Thinking about the cost/effectiveness diagram, it is cost-free to add control limits to each chemical additive. Engineers pulled chemical and quality data from multiple months, created a histogram to find the distribution, and set up control limits to give the operators a better gauge of how much chemical to apply and the typical range needed to satisfy the quality tests.
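A minimal sketch of that control-limit calculation, assuming a hypothetical export of historical dosage data; simple ±3-sigma limits are used here purely as an example:

```python
import pandas as pd

# Hypothetical export of historical dosage data for one chemical additive,
# restricted to months where the quality tests passed.
dose = pd.read_csv("chemical_dose_history.csv")["dose_kg_per_ton"]

mean, std = dose.mean(), dose.std()
lcl, ucl = mean - 3 * std, mean + 3 * std  # simple +/- 3-sigma limits
print(f"target={mean:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")

def check_dose(current: float) -> str:
    """Return an alarm message when a live dosage drifts outside the limits."""
    if current > ucl:
        return "ALARM: dosage above upper control limit"
    if current < lcl:
        return "ALARM: dosage below lower control limit"
    return "OK"
```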

For steam, there were a lot of potential root causes around the fiber mix and cook. The mill already has SOPs to deal with situations such as bad cooks. Another root cause that came up during the fishbone was steam leaks. Most leaks can be fixed while the machine is running, so over the next several weeks there was a push to find and close major leaks.

Control

In this case, since limits were created for chemical usage, alarms were also created to alert operations if they exceeded the control limit. Alerts are a great way to notify operations when processes are drifting out of control so quick corrections can be made.

After a few weeks of these changes, another analysis was completed. The operational costs were meeting the target, and it was time to move on to the next process. It is important not to forget about operational cost, however; it continues to be monitored monthly to ensure it does not exceed the target.

Conclusion

Performing routine gap analysis is an important step in LEAN Manufacturing and continuous improvement. By following the steps above, manufacturers can optimize their process by reducing waste and operational costs, improving quality, or going after other key metrics.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Process Manufacturing, Troubleshooting & Analysis

The case for implementing 5 whys root cause analysis problem solving at your plant is simple: the cause of the problem is often (we’d go so far as to say almost always) different than initially speculated. Implementing a lean strategy like 5 whys can save you time and headaches in the future.

Issues with and failures of assets are bound to happen in process manufacturing. Your team’s strategy for resolving problems that occur will determine your productivity in countless ways. Getting a process set up for resolving problems will not only help with the current issues and failures but will create a plan to resolve future problems with the goal of each resolution coming faster and easier.

Real-time process analytics software with integrated 5 Whys analysis tools.

Check out PARCview

What Exactly are the 5 Whys?

The 5 whys is a lean problem-solving strategy that is popular in many industries. It was developed by Sakichi Toyoda, a Japanese inventor and industrialist. The 5 whys focuses on root cause analysis (RCA), defined as a systematic process for identifying the origins of problems and determining an approach for responding to and solving them. It emphasizes prevention and being proactive rather than reactive.

The 5 whys strives to be analytical and strategic about getting to the bottom of asset failures and issues; it is a holistic approach, applied by stepping back and looking at both the process and the big picture. The essence of the 5 whys is revealed in the verse below, where something as small as a lost nail in a horse’s shoe was the root cause of a kingdom being lost:

“For want of the nail the shoe was lost
For want of the shoe the horse was lost
For want of a horse the warrior was lost
For want of a warrior the battle was lost
For want of a battle the kingdom was lost
All for the want of a nail.”

One of the key factors for successful implementation of the 5 whys technique is to make an informed decision. This means that the decision-making process should be based on an insightful understanding of what is happening on the plant floor. Hunches and guesses are not adequate as they are the equivalent of a band-aid solution.

The 5 whys can help you identify the root cause of process issues at your plant.

Below is an example of the 5 whys method being used for a problem seemingly as basic as a computer failure. If you look closely, you will conclude that the actual problem has nothing to do with a computer failure.

  • Why didn’t your computer perform the task? – Because the memory was not sufficient.
  • Why wasn’t your memory sufficient? – Because I did not ask for enough memory.
  • Why did you underestimate the amount of memory? – Because I did not know my programs would take so much space.
  • Why didn’t you know programs would take so much space? – Because I did not do my research on programs and memory required for my annual projects.
  • Why did you not do research on memory required? – Because I am short staffed and had to let some tasks slip to get other priorities accomplished.

As seen in the example above, the real problem was not in fact computer memory, but a shortage in human assets. Without performing this exercise, the person may never have gotten to the place where they knew they were short staffed and needed help.

This example can be used to illustrate problems in a plant as well. Maybe an asset is having repeated failures or lab data is not testing accurately. Rather than immediately concluding that the problem is entirely mechanical, use the 5 whys method and you may discover that your problem is not what you think.

Advantages and Disadvantages of the 5 Whys Method

Advantages are easily identified by outcomes such as being able to identify the root cause of your problem and not just the symptoms. It is simple and easy to use and implement, and perhaps the most attractive advantage is that it helps you avoid taking immediate action without first identifying the real root cause of the problem. Taking immediate action on a path that is not accurate is a waste of precious time and resources.

Disadvantages are that some people may disagree with the different answers that come up for the cause of the problem. It is also only as good as the collective knowledge of the people using it, and if time and diligence are not applied, you may not uncover and address the true root cause of the problem.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

5 Whys Root Cause Analysis Implementation at your Plant

Now that the problem in its essence has been revealed, what are the next steps?

Get Familiar with the 5 Whys Concept

The first step in implementing the 5 whys at your plant is to get familiar with the concept. You may research the 5 whys methodology online or listen to tutorials to gain a deeper understanding. In the information age, we have access to plenty of free information at the touch of our screens. Get acquainted with 5 whys. Even if it is just one video on YouTube and a few articles online, a better understanding means a better implementation.

Schedule a 5 Whys Meeting with Your Team

The second step would be to solve a problem at your plant using the 5 whys method. To do so, follow the steps below to schedule and hold a 5 whys RCA meeting.

These steps are simple to give you a basic understanding. To gain greater understanding in detail on how to implement 5 whys RCA, read on for in depth instructions.

  • Organize your meeting
  • Define your problem statement
  • Ask the first “why”
  • Ask why four more times
  • Determine countermeasures
  • Assign responsibilities
  • Monitor progress
  • Schedule a follow-up meeting

Whoever you invite, the root cause analysis process should include people with practical experience. Logically, they can give you the most valuable information regarding any problem that appears in their area of expertise.

What’s Next? What Happens Once I have Held my 5 Whys meeting?

Once your meeting has been held and you begin to implement the 5 whys method, it is essential to remember that some failures can cascade into other failures, creating a greater need for root cause analysis to fully understand the sequence of cause and failure events.

Root Cause Analysis using the 5 whys method typically has 3 goals:

  • Uncover the root cause
  • Fully understand how to address and learn from the problem
  • Apply the solution to this and future issues, creating a solid methodology to ensure the same success in the future

The Six Phases of 5 Whys Root Cause Analysis

Digging even deeper, when 5 whys root cause analysis is performed, there are six phases in one cycle. The components of asset failure may include environment, people, equipment, materials, and procedure. Before you carry out 5 whys RCA, you should decide which problems are immediate candidates for this analysis. Just a few examples of where root cause analysis is used include major accidents, everyday incidents, human errors, and manufacturing mistakes. Those that result in the highest costs to resolve, most downtime, or threats to safety will rise to the top of the list.

There are some software-based 5 whys analysis tools out there, like dataPARC’s PARCview, which automatically identify the top 5 potential culprits of a process issue and link to trend data for deeper root cause analysis.

Phase 1: Make an Exhaustive List of Every Possible Cause

The first thing to do in 5 whys is to list every potential cause leading up to a problem or event. At the same time, brainstorm everything that could possibly be related to the problem. In doing these steps you can create a history of what might have gone wrong and when.

You must remain neutral and focus only on the facts of the situation. Emotions and defensiveness must be minimized to create an effective starting list. Stay neutral and open. Talk with people and look at records, logs, and other fact-keeping resources. Try to replay and reconstruct what you think happened when the problem occurred.

Phase 2: Evidence, Fact and Data Seeking and Gathering

Phase 2 is the time when you get your hands on any data or files that can point to the possible causes of your problem. Sources for this data may be databases or digital, handwritten, or printed files. In this phase, the 5 whys list comes into play: each outcome or reason on your list needs supporting evidence.

Phase 3: Identify What Contributed to the Problem

In Phase 3 all contributions to the problem are identified. List changes and events in the asset’s history. Evidence around the changes can be very helpful, so gather this as you are able. Evidence can be broken down into four categories: paper, people, recording, and physical evidence. Examples include paperwork specific to an activity, broken parts of the asset, and video footage if you have it.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Phase 4: Collect Data and Analyze It

In Phase 4, you should analyze the collected data. Organize changes or events by how much or little you can have an impact on the outcome. Then decide if each event is unrelated, a correlating factor, a contributing factor, or a root cause. An unrelated event is one that has no impact or effect on the problem whatsoever. A correlating factor is one that is statistically related to the problem but may or may not have a direct impact on the problem.

A contributing factor is an event or condition that directly led to the problem, in full or in part. This should help you arrive at one or more root causes. When a root cause has been identified, more questions can be asked: why are you certain that this is the root cause instead of something else?

Phase 5: Preventing Future Breakdowns with Effective Countermeasures

The fifth phase of 5 whys root cause analysis is preventing future breakdowns by creating a custom plan of countermeasures that address each of the 5 whys you identified in your team meeting. Preventive actions should also be identified. Your actions should not only prevent the problem from happening again, but they should also not cause other problems. Ideally a solid solution is one that is repeatable and can be used on other problems.

One of the most important things to determine is how the root cause of the problem can be eliminated. Root causes will of course vary just as much as people and assets do. Examples of eliminating the root cause of an issue are changes to preventive maintenance, improved operator training, new signage or HMI controls, or a change of parts or part suppliers.

In addition, be sure to identify any costs associated with the plan: both how much was lost because of the problem and how much it is going to cost to implement the solution.

To avoid and predict the potential for future problems, you should ask the team a few questions.

  • What are the steps we must take to prevent the problem from reoccurring?
  • Who will implement and how will the solution be implemented?
  • Are any risks involved?

Phase 6 – Implementation of your Plan

If you make it to this step, you have successfully completed 5 whys root cause analysis and have a solid plan.

Depending on the type, severity, and complexity of the problem and the plan to prevent it from happening again, there are several factors the team needs to think about before implementation occurs. These can include the people in charge of the assets, asset condition and status, processes related to the maintenance of the assets, and any people or processes outside of asset maintenance that have an impact on the identified problem. You would be surprised how much is involved with just one asset when you exhaustively think about it and make a list of all people and actions involved during its useful life.

Implementing your plan should be well organized and orchestrated as well as documented. Follow up meetings with your team should be scheduled to talk about what went well, and what could be improved upon. With time, 5 whys can become an effective tool to both solving and preventing future problems at your plant.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

In the process industries, optimization is the key to efficiency. And efficiency is what leads to profit – allowing manufacturers to produce more and waste less. To optimize their processes, many manufacturers use a combination of time series data historian and data visualization software. dataPARC and PI are two of the leaders in this space, and in this article we’ll compare dataPARC vs PI and highlight some of the advantages dataPARC has over PI as a process information management system.

Check out dataPARC’s real-time process data analytics tools & see how better data can lead to better decisions.

Check out PARCview

dataPARC vs PI: Similarities

dataPARC and PI have existed for decades and have large installation bases represented by major manufacturers around the world.

Both dataPARC and PI:

  • Offer a real-time data historian
  • Use a binary, cluster-index, flat file to store history
  • Have an asset structure to address the complexities of large disparate data sources
  • Offer many of the expected analytics & visualization tools: trending, graphics, reports
  • Can connect to various control systems for collecting time-series data in real-time
  • Use a store & forward function in case data connectivity is lost
  • Can work with very large tag-count systems

Now, let’s dive more into their differences and see how dataPARC sets itself apart.

dataPARC vs PI: Differences

Cost

We might as well start with what will be one of the key considerations when evaluating these two data historian and process data visualization toolkits.

Long story short, dataPARC’s total cost of ownership is lower when compared to other “like” industry solutions. Both the initial cost and ongoing costs are considerably lower than the PI System.

Unlimited Users

A key reason for this is dataPARC’s unlimited license model, which makes it a great fit for organizations wishing to get production data in front of decision-makers at every level of the plant without worrying about having to purchase additional licenses.

PI uses a per-user pricing model. This tends to work for small organizations with only a few people needing to access the platform, but for larger organizations or enterprise implementations the cost adds up quickly.

With dataPARC, everyone who needs access to the data can have access at no additional cost – putting the power to make data-driven decisions in the hands of every employee.

Looking for an alternative to PI’s Data Historian? Get an enterprise plant data historian at a fraction of the cost. Check out dataPARC’s PARCserver historian.

User Experience

When customers are asked about dataPARC’s top 3 to 5 benefits, ease-of-use is always near the top of the list. The reduced complexity of the dataPARC system allows even the least “computer-savvy” person to begin building content and gaining value, and results in wide adoption of the tools within an organization. 

Though there are many features in dataPARC, a new user can learn how to search tags, trend, and navigate within minutes. From there, users quickly learn they can view trend statistics, manage alarm events, export data, create displays such as X/Y plots, histograms, or Pareto charts, and much more, all from the right-click menu.

dataPARC’s trending tools have long been recognized by customers as the number 1 trend solution in the industry. dataPARC’s trend capabilities are faster than other offerings and better suited to practical, day-to-day use.

No other package allows for a quicker build of a trend matrix, with quick drag & drop from both the tag browser and displays.

dataPARC makes finding and trending tag data super easy.

Many organizations that were set up with the PI Historian and ProcessBook have since chosen to get dataPARC to “sit on top” of their PI historian simply for PARCview; the visualization tools and ease of use speak for themselves.

Diagnostic Analytics

As mentioned earlier, dataPARC’s trending application is considered the best in industry. Not only for its ease of use and quick access to analysis tools but for its speed as well.

Trend

dataPARC uses a deliberate data speed strategy with multiple components including an embedded Performance Data Engine (PARCpde) to speed data to the user.  The goal is to meet and exceed the user’s “speed of thought.”  PARCpde is a foundational part of the entire dataPARC system. 

Speed tests comparing dataPARC vs PI and other contemporary historians have shown dataPARC to be anywhere from 10X to 50X faster in delivering large or long-term datasets back to the user. 

Several companies have switched to dataPARC in part because of the data speed.  dataPARC also utilizes an aggregate archive and rollup archive in its architecture which greatly reduces the amount of time wasted when solving problems or investigating opportunities. 

From the trend, users can launch a quick statistics grid or generate a new X/Y chart or histogram display. Each chart pulls in the tags from the trend, so users don’t have to search for them in the Tag Browser again.

The X/Y plot sets two tags up for comparison and a best fit line can be generated – linear, polynomial, etc. The formula generated from the fit can be pulled into a trend or other display. PI can also generate X/Y plots, but they are created from scratch and no best fit line is generated.
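For reference, the math behind an X/Y plot with a best-fit line is straightforward; here is a generic sketch with NumPy and matplotlib using invented sample values (not dataPARC’s or PI’s implementation):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented paired samples of two tags exported for comparison.
steam_flow = np.array([10.1, 11.4, 12.0, 13.2, 14.5, 15.1])
dryer_temp = np.array([88.0, 90.5, 91.2, 94.0, 96.8, 97.5])

# Fit a straight line (deg=1); a higher degree gives a polynomial fit instead.
coeffs = np.polyfit(steam_flow, dryer_temp, deg=1)
fit = np.poly1d(coeffs)

plt.scatter(steam_flow, dryer_temp, label="samples")
xs = np.linspace(steam_flow.min(), steam_flow.max(), 100)
plt.plot(xs, fit(xs), label=f"best fit: y = {coeffs[0]:.2f}x + {coeffs[1]:.2f}")
plt.xlabel("steam flow")
plt.ylabel("dryer temperature")
plt.legend()
plt.show()
```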

Excel Add-in

dataPARC’s Excel add-in was built with a high degree of ease-of-use and speed. 

PI and dataPARC both have in-cell functions that can pull data directly into Excel. The dataPARC add-in has multiple other functions.

There is a sheet that can pull multiple tags in the same time range without dealing with formulas. Users can import tag lists from already created dataPARC displays instead of searching for the tags again.

Besides the value gained in legacy Excel add-in tools, dataPARC’s is highlighted by the following:

  • Drag groups of tags/data into Excel from multiple data sources
  • Filter data based on multiple tags values
  • Cross Correlation/R2 matrix generation
  • CUSUM & MSR charting

Additionally, users can display time series-based data from Excel into PARCview trends and displays. This can be used to trend or compare data from outside the company right next to process data.

Evaluate the top alternatives to ProcessBook & PI Vision in our PI Server Data Visualization Tools Buyer’s Guide.

Get the Guide

Operations Management

Real-time operations management is necessary to keep a plant running at peak efficiency and to be able to respond quickly to process excursions that result in unplanned downtime or product loss.

This is facilitated by dataPARC in a variety of ways:

  • Graphical process displays
  • KPI and Lab data dashboards
  • Manual data entry (MDE) tools
  • Automated reporting
  • Process alarms & notifications
  • & more

When comparing dataPARC vs PI, both offer the creation of dynamic, information packed graphical dashboards, but only dataPARC has the Centerline display.

Centerline

Centerline is a powerful monitoring tool unique to dataPARC. It is a real-time display that reports run-based statistics for tags. The runs can be grade- or time-based, and the statistics include time average, standard deviation, Cpk, min, max, etc.

Centerline displays data for time periods or runs to ensure process conditions are the same run after run.

The purpose of a centerline display is to help determine the best operational settings for production, and to ensure those settings are normally being used during production.
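To make the idea concrete, here is a generic sketch of run-based statistics computed per grade with pandas, including a simple Cpk; the file name, column names, and spec limits are invented, and this is not dataPARC’s implementation:

```python
import pandas as pd

# Hypothetical per-sample data with a grade column marking each run.
df = pd.read_csv("machine_samples.csv")  # columns: grade, moisture_pct
LSL, USL = 4.0, 6.0                      # invented spec limits for the variable

def run_stats(values: pd.Series) -> pd.Series:
    mean, std = values.mean(), values.std()
    cpk = min(USL - mean, mean - LSL) / (3 * std) if std > 0 else float("nan")
    return pd.Series({"avg": mean, "std": std, "min": values.min(),
                      "max": values.max(), "cpk": cpk})

# One row of statistics per grade run, similar in spirit to a centerline view.
centerline = df.groupby("grade")["moisture_pct"].apply(run_stats).unstack()
print(centerline.round(3))
```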

Centerline is one of dataPARC’s powerful data analysis tools for which there is no PI equivalent.

Alarms and Notifications

dataPARC’s alarm and notification system can send emails, text notifications or trigger workflows when an alarm is detected or closed. Once an alarm is detected, an alarm event is created. These events can be viewed and acknowledged in a trend, centerline, graphic or alarm list. Users can acknowledge the event by assigning a reason from the reason tree and/or typing a comment to the event. Quick analysis can be done in dataPARC with the Pareto chart to determine the top reasons saved for an alarm or create a tabular report sorted by reason with all comments visible.

Similarly, PI can create event frames and send notifications. Once event frames are detected and a reason assigned, users can see this data as a table in PI Vision, but further analysis or reporting must take place in PI’s Excel add-in, DataLink. dataPARC’s Excel add-in also has features to pull in alarm event data.

More dataPARC Excel Add-in features are explored in the following section.

Manual Data Entry (MDE)

dataPARC’s MDE display is quick to configure and allows users to enter and save manual data to the database rather than on a piece of paper or in Excel.

Manually entered data is represented by tags, so it can be used in PARCview trends, dashboards, and displays like any other tag.

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

Calculations

When users don’t have the perfect tag to help manage a process, a calc tag or MDE is often used. dataPARC and PI are both able to perform simple calculations such as adding tags, If/Then statements, or unit conversions.

With PI Vision, PI no longer supports VB scripting. VB scripting opens the door for custom solutions, and dataPARC leverages it for applications such as database reads, file parsing, web service calls, and much more.

Predictive Analytics

dataPARC’s PARCmodel offers a degree of predictive analysis with PLS (Partial Least Squares) and PCA (Principal Component Analysis) modeling capabilities.

PLS

The PLS package has been described by one of the world’s top practical modeling engineers as “…bar-none, better than anything I’ve ever seen before.”  In the processing industry, one of the applications for PLS modeling is in building inferential property predictors (IPPs).

Control engineers in operating companies report that a PLS model generation for one IPP can take more than 8 hours to re-model (longer for the initial model) using multiple tools and off-line activity. dataPARC integrates it all into one tool and the re-model effort can be as little as 5 minutes.

This snappy model generation allows multiple solutions to be generated for comparison to find the best option. The speed of remodeling allows for wider application and benefit of PLS.  Practical engineering methods and even process “hunches” can now be backed with a quick validation by a PLS mathematical session in 2 to 5 minutes. 
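As a generic illustration of what an inferential property predictor involves mathematically (not PARCmodel itself), here is a small PLS sketch using scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: X holds adjacent process tags (temperatures, flows,
# pressures, ...), y is the lab-measured property the predictor should infer.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = 2.0 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=4)
pls.fit(X_train, y_train)
print("R^2 on held-out data:", round(pls.score(X_test, y_test), 3))

# In use, the fitted model would run continuously against live tag values to
# provide an inferred property value between lab tests.
latest_prediction = pls.predict(X_test[:1])
```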

dataPARC’s predictive modeling tools

dataPARC delivers huge time savings, better learning environment, better collaboration environment, more useful applications – these all accelerate value to the company’s key business drivers. 

PCA

PCA uses the same modeling advantages that dataPARC’s PLS offers, allowing for easy model generation.  The difference between the two modeling methods is that PLS seeks to model and mimic a single variable using adjacent variables as model inputs.  PCA doesn’t model a single variable but models a whole process. 

The value comes when comparing the current process with the modeled process.  PCA gives the user the ability to know when the current process is off (when compared to the modeled process) and identifies the “offending” process variable(s). 

PCA makes use of two parameters (available to the PLS model as well): DMODX (error from model) and HT2N (Hotelling T2 Normalized – off norm). The PCA model input variables are all graded and staff can see which variable(s) is/are causing the problem.  PCA can be used as an early warning system to help operations see a problem before it happens. 
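For readers who want a feel for these statistics, here is a generic PCA monitoring sketch using scikit-learn on synthetic data; the T2 and residual values computed below only approximate the ideas behind HT2N and DMODX and are not dataPARC’s implementation:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic "normal operation" training data: rows are time samples, columns
# are process tags.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 8))

pca = PCA(n_components=3).fit(X_train)

def monitor(sample: np.ndarray) -> tuple[float, float]:
    """Return (t2, residual) for one new sample.

    t2 is a Hotelling-T2-style distance within the model plane (roughly the
    idea behind HT2N); residual is the distance from the model plane (roughly
    the idea behind DMODX). Large values flag a sample that no longer matches
    the modeled process.
    """
    scores = pca.transform(sample.reshape(1, -1))[0]
    t2 = float(np.sum(scores**2 / pca.explained_variance_))
    reconstructed = pca.inverse_transform(scores.reshape(1, -1))[0]
    residual = float(np.linalg.norm(sample - reconstructed))
    return t2, residual

t2, dmodx = monitor(rng.normal(size=8))
print(round(t2, 2), round(dmodx, 2))
```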

PARCmodel is separately licensed but incorporated into PARCview and easily accessed in the trend right click menu. PI does not have similar analytics tools.

Looking to replace ProcessBook? See why PARCview is regarded as the #1 ProcessBook alternative.

Customer-Centric Development & Support

At dataPARC, above everything is the customer and their very real, timely, practical needs. dataPARC’s strategy involves a high attentiveness to the customer’s needs and solving problems quickly.

dataPARC employs many SMEs serving in key process engineering support roles for operating companies in the industry. Over the years, dataPARC’s user features and overall system architecture have been shaped by these SMEs and customers. dataPARC is built by end users for end users.

At dataPARC we sell more than software; we also offer services to help build trends, graphics, and other displays to get your system off the ground and running. Our engineers and support staff are available to help implement new projects and offer continual support.

With PI, to get the same displays created, customers would have to outsource to a 3rd party. dataPARC is a one-stop shop.

Conclusion

dataPARC and PI have a lot in common; however, dataPARC has the upper hand where it counts – user experience, speed of data, and cost. dataPARC is simple, fast, and effective.

The advantages of dataPARC over PI continue to grow with every new feature and update – features that are driven by users and customers.

pi processbook alternatives guide

Download the Guide

Discover top alternatives to PI’s ProcessBook and PI Vision analytics toolkits.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

“Where do we start in digitizing our manufacturing operations?” one may ask. While there is no easy answer, the solution lies in starting not from the top down but from the ground up, focusing on the digital transformation roles and responsibilities of the key people in your plant.

Digital transformation in process manufacturing is not only a priority, but now an essential step forward as the world encounters and adapts to a more digital world. To put it simply, if you do not adjust your processes to embrace digital change, your competitors will (and may already have) outproduce, outshine, and outsell you.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Transformation Teams

Digital change has been slow until now, though it has been steady. PLC and DCS systems were manufacturing’s digital beginnings, and thankfully there is much more available now to further digitize operations: to minimize downtime, improve your process, enhance data management, data sharing, and reporting, and increase profitability. A truly connected enterprise will be adaptable and agile, allowing it to keep abreast of changes in the operating environment.

Plant roles play an essential part in the digitization of process manufacturing, and all can contribute to a seamless digital transformation within your facility. Each role embraces digital change and transforms the process from the inside out. By focusing on these roles and the duties and responsibilities within each of them, plant digitization can lead to a well-oiled machine that every role depends on and benefits from.

As noted above, there is no easy answer, but the solution lies in starting not from the top down but from the ground up, with each role’s responsibilities and contributions enhancing the others, adding to and building on the next, for a comprehensive digital enterprise and solid, data-based reporting.

Integrating sources of plant data is a good place to start, along with the processes themselves becoming digitized for maximum outcomes. In this article we will focus on the various roles in the plant, their responsibilities and how each one can contribute to digital transformation.

Digital Transformation Roles & Responsibilities

The Operator

The Operator’s Role in Digital Transformation

Checking process conditions (temperatures, pressures, line speed, etc.) is an essential task for an operator. These process conditions could have readings directly on the machine, with valves or buttons to adjust as needed. With more and more digital transformation in manufacturing, these process variables are being set up with PLCs to create digital tags. These tags can be read through an OPC DA server and visualized throughout the plant on computers in offices, control rooms, and meeting rooms. They can also be set up with a DCS to control the process from the control room rather than having to walk the floor to adjust speeds or valves.

The process variables need to be monitored to produce quality products. There are ranges for each process variable and additive when making a product; if these get out of range, the final product could fall outside the final specification. Limits can be drawn on gauges, written into an SOP (Standard Operating Procedure), or set up as limits for alarming. These alarms can appear on the DCS or a data visualization screen to alert the operator that a variable needs attention.
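As a minimal sketch of limit-based alarming, independent of any particular DCS or visualization package, the logic might look like the following; read_tag() is a hypothetical stand-in for whatever OPC DA or historian client the site actually uses, and the tag names and limits are invented:

```python
# A minimal limit-alarming sketch with hypothetical tags and limits.
LIMITS = {"dryer_temp_C": (140.0, 165.0), "line_speed_mpm": (550.0, 620.0)}

def read_tag(name: str) -> float:
    # Hypothetical stand-in for the site's OPC DA or historian read call.
    raise NotImplementedError("replace with the site's tag read")

def check_alarms() -> list[str]:
    alarms = []
    for tag, (low, high) in LIMITS.items():
        value = read_tag(tag)
        if value < low or value > high:
            alarms.append(f"{tag}={value:.1f} outside [{low}, {high}]")
    return alarms
```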

To consistently make quality product, operators must communicate with the lab tech to verify the product is within spec. This communication between the lab and operators has traditionally been done verbally, via walkie-talkies, phone calls, etc. To digitize this process, the lab tech enters tested values into a data visualization program or a laboratory information management system (LIMS) database. These values can be displayed on a dashboard with the specifications next to them. The operator can then see when values are out of spec and adjust the process, or see when values are trending up or down and adjust the process to keep the product within specification before bad-quality product is made.

Operators are also responsible for keeping track of a product and lot being produced. This can be done manually with pen and paper or entered digitally into a database.

At the end of the shift, operators need to pass key information to the next shift. This can be done with a hand-off meeting to discuss verbally, a physical notebook to log key points, or a digitized version of a notebook. With digitized reports, there is an opportunity to relay information to multiple control rooms or locations across the company’s operations at once.

The Lab Technician

The Lab Technician’s Role in Digital Transformation

Lab quality testing is an essential part of process manufacturing. Thorough quality testing of each batch allows production of the scheduled product to continue. Because other roles, such as the process engineer and the operator, rely on the outcomes of lab testing, getting the lab quality data seamlessly disseminated is essential to smooth operations.

Manually testing multiple variables of the product, recording the results, and comparing the finished product to specifications are among the lab technician’s duties. If the lab tech is entering data into a digital system, limits can typically be saved for different products, speeding things up.

The lab tech would manually test the product and enter the results into a program, and the LIMS system would flag any result that is out of spec. Going further, a lab tech can set up the test, a machine conducts it, and the result is fed to the LIMS system, where the value is flagged if the test is out of spec. Performing these tasks digitally is a tremendous time and process saver.

In summary, lab techs are ultimately responsible for testing the final product and passing or failing it to be sold. Digitizing these tests and the corresponding data streamlines and accelerates the entire lab test process.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

The Process Engineer

The Process Engineer’s Role in Digital Transformation

Process engineers, often called by other titles including chemical engineer, have a range of duties including product development, process optimization, documentation of SOPs, setting up automatic controls/PLCs, ensuring equipment reliability, and communicating with superintendents, operators, lab techs, maintenance managers, and customers.

Process engineers monitor the entire manufacturing process on a daily, weekly, and monthly basis to identify improvement opportunities and evaluate the condition of the assets and processes.

Most sites have an existing system for maintenance requests. A physical system may exist where staff hand-write the issue, area, and other important information and hand-deliver it to the maintenance department. Alternatively, there could be a system set up to email the maintenance department with pictures attached. A program may also be used to submit maintenance requests; such a system would provide a unique ticket number, automated status updates, and other key information, and would allow engineers or the maintenance department to see the history and identify repetitive issues, such as a part repeatedly needing replacement. Digitizing maintenance can help create a preventative maintenance schedule, so a part is replaced before it stops performing and causes sub-par product quality.

Another way for engineers to monitor the process is through data visualization. When data is stored, the history can be viewed, and users can identify irregularities, trends, and cycles in the process to help identify root cause when upsets occur. Engineers might set up their own alarms, separate from operator alarms, to keep track of events and determine if an optimization project is possible.

Process optimization and product development are important tasks for process engineers. Engineers may develop and conduct trials to continually optimize the process and develop new products. They often use the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) method to do this. The Define step is typically completed by a stakeholder, a superintendent, or a plant manager. Once the project is defined, the engineer moves into the Measure step.

The measure step can take many forms, physically measuring, counting, or documenting a process. Collecting necessary data can be time-consuming. With more of the data being digitized, data collection is already done.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Engineers need to organize and collect data to analyze it. Once the data is collected, it can be put into Excel, Minitab, or other programs to be analyzed. By doing comparisons and statistical analysis, with the help of process knowledge, an improvement plan can be created.

Engineers will work with operators and lab techs to work through their improvement plan. Typically, the plans will include information that the operators and lab techs will have to record and give back to the engineer to determine if an improvement was made. The plans can be printed off and handed to those involved, and the necessary data collected on sheets of paper.

If a program/graphic/database is being used, the engineer can create an improvement plan within that program, and the operator/lab tech can enter the necessary values directly, making the data instantly accessible to the engineer. After the project is complete and an improvement has been made, an SOP is written and saved.

In this role, the engineer needs to communicate the change to all necessary personnel. The SOP could be saved locally on each computer, in a shared file, on SharePoint, or as a link within a program that has versioning so users can go back and see what changes were made and when. To alert others of the changes, an email can be sent to supervisors to communicate to their shifts, or if a digital notebook is available, a message can be sent to the necessary areas with a link to the newly updated SOP.

As mentioned above, engineers can be responsible for writing and maintaining SOPs. SOPs can be stored in binders in the control room, saved on control room computers, or kept in a shared folder. There are also programs that save versions of documents so users can see what changed and when. Operators and lab techs then use the SOPs when performing a task or test. It is important for operators to be notified of changes made to an SOP. This could be the engineer sending out an email, a program with a preset distribution list sending update emails, or a notification set up on the operators' computers.

The Plant Manager

The Plant Manager’s Role in Digital Transformation

Plant managers wear many hats and the hats they wear continue to multiply as plants face complexities and pressure to produce more with increased profitability.

Hiring good people is key to running a digital-forward organization: staff with people in mind. Good, productive people run plants with data, not hunches or best guesses. They make data-driven decisions that are best for the organization and identify root causes through careful anomaly detection and analysis.

Good leaders know that to truly digitize operations at a plant you must start from the bottom, that every role is an important component of the whole, and that every person's contribution matters.

Ron Baldus, CTO at dataPARC, advises that "clean data" is the key to successful digital operations. What exactly does clean data mean? Clean data is pure data, a single version of the truth that supports data-driven rather than hunch-driven decisions. With clean data, plant managers and those who work for them can continue to make data- and profit-driven decisions. A good data visualization software that connects all data sources is a good place to start. With this connected software, extensive reports pulling on many data sources can be run to give the plant manager a single report with the important information visible. If there is a problem in operations, this reporting allows the plant manager to identify the problem and task engineers and operators with getting to the source and making the necessary adjustments, all based on fact rather than best guesses.

Plant managers know that there are many important moving parts to a plant operation, and reliable data is the lifeblood of a successful, profitable operation. The more digital the plant becomes, the more cleanly data flows to all departments and roles, making troubleshooting, reporting, and forecasting increasingly seamless.

Another advantage of digitization at the plant manager level is the transfer of skill, information, and expertise at the subject matter expert (SME) level. Many SMEs are getting close to retirement, and with them a wealth of information, experience, and methodology is at risk of being lost. Through the digitization of reports and operations, these methods can be preserved and passed on to the next person assuming the role and responsibility, whether that is an operator, an engineer, or another essential role.

Looking Forward

Whether it is the operator, the engineer, the lab tech or the plant manager, all digital transformation roles and responsibilities in manufacturing contribute to the transformation of the plant. From the bottom up with effective communication and consistent data, downtime can be minimized, golden runs more common and seamless operations a daily reality.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Integrating manufacturing data in a plant is necessary for many reasons. Among the most important is getting relevant data to various departments quickly. In doing so, downtime is reduced, anomalies are identified and corrected, and quality is improved.

So often, integrations are delayed due to fears of losing data quality during the transition, or simply the difficulty of finding time in a 24/7 environment. There are pros and cons to each integration type. In this article we will walk you through the different integrations, what to look out for, and tips and best practices.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Integrating Historian & ERP Data

Enterprise Resource Planning (ERP) is software used by accounting, procurement, and other groups to track orders, supply chain logistics and accounting data. By adding historian data, ERP systems have a fuller picture of the comprehensive plant operations.

combining erp and historian data on a trend

Integrating your historian and ERP data can provide great insight into which processes are affecting quality.

ERP users gain access to more information about finished goods, such as the exact time of any major production step or whether there was an issue with production. For instance, if the texture of newsprint is slippery, out of spec, and cannot be cut properly on the news producer's rollers, the specific lot can be identified quickly; finding out which lot produced poor-quality paper is no longer a roadblock.

In a nutshell, the historian to ERP integration means departments outside of production get all the data they need without engaging another resource. The challenges include a time-consuming integration where erroneous values can have a wide-ranging impact, so double-checking values is essential.

Integrating Historian & MES Data

Connecting a historian to an MES (Manufacturing Execution System) expands the capabilities of the MES. Manufacturing execution systems are computerized systems used in manufacturing to track and document the transformation of raw materials into finished goods, an essential component of manufacturing data capture. The historian provides a historical log of all production data rather than only current or near-past values. Being able to pull large amounts of historical data along with data from an MES allows for projections that are not possible without this long-term perspective and additional data.

A relevant example of an MES to historian benefit is an ethanol plant that would like to examine seasonal (winter vs. summer) variability in fermentation rates. The historian has all of this data, and the MES allows the user to pull out data only for the relevant times.

integrating historian and mes data on a trend

Integrating MES data into a historian provides access to years and years of data and allows for long-term analysis.
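As a rough illustration, once the relevant fermentation-rate data has been exported (here to a hypothetical CSV with assumed column names), the seasonal comparison from the ethanol example above might look something like this in Python:

```python
# Illustrative sketch only: compare winter vs. summer fermentation rates after
# historian values have been exported to a CSV (file and column names are assumptions).
import pandas as pd

df = pd.read_csv("fermentation_rate.csv", parse_dates=["timestamp"])

# Label each record by season based on the month of the timestamp.
df["season"] = df["timestamp"].dt.month.map(
    lambda m: "winter" if m in (12, 1, 2) else "summer" if m in (6, 7, 8) else "other"
)

# Summarize fermentation rate for the two seasons of interest.
seasonal = df[df["season"].isin(["winter", "summer"])]
print(seasonal.groupby("season")["fermentation_rate"].describe())
```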

Using product definitions from the MES, users can pull the comprehensive history of production runs for a given product line or product type without manually filtering all historical data, which is key to fast troubleshooting with this integration type. Historian to MES integrations help reduce waste and decrease the time it takes to solve an issue. Like the historian to ERP solution, the historian to MES integration takes significant work and resources, but the benefits are immediately evident.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Integrating LIMS & ERP Data

A Laboratory Information Management System (LIMS) contains all testing and quality information from a plant's testing labs. Federal regulations and standards often dictate the test values or results upon which a batch's success and quality ultimately depend. Testing is done at various stages of production, with the final stage being the most important. Certificates of analysis are common documents that ensure the safety and quality of tested batches. LIMS to ERP integration is especially important for the food and beverage industry, which depends upon testing to ensure its products are safe for human consumption.

integrating LIMS and ERP data in a trend

By integrating LIMS and ERP data it’s easy to identify a specific out-of-spec batch or product run for root cause analysis.

Batch quality data from LIMS allows ERP users, such as accounting or procurement departments, to build documents and reports that share data certifying the quality of shipped product. This integrated data also gives customer reps immediate access to data about shipped product. The LIMS to ERP integration is very important, as many LIMS departments still rely on a paper trail, which can be a tremendous hold-up to production. As with the historian to ERP integration, the LIMS to ERP integration must have accurate data to provide site-wide value, so double-checking is necessary.

Integrating LIMS & Historian Data

Just like historical process data from assets, testing data is very useful when troubleshooting a production issue. The LIMS, as explained earlier, stores all of the testing data from the lab. Sending LIMS data to the historian lets users view production and lab values side by side, providing a fuller picture and better analysis of the issue.

integrating LIMS and historian data in a trend

Integrating LIMS and historian data is one of the most effective ways to analyze how a process affects product quality.

For example, when paper brightness is out of spec, lab data can shed light on and bring attention to the part of the process that needs adjustment. Alerts can be set up within the historian, giving engineers more time to adjust the process to meet quality. Past testing values are useful when comparing production runs and reveal patterns that production data alone may not. As with any integration, LIMS to historian requires planning, an engaged team, and milestones to check in on the progress and success of the integration.
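As a purely illustrative sketch, if LIMS results and historian data were exported to files, pairing each lab sample with the process conditions recorded just before it might look like this in Python. The file and column names are assumptions, not a prescribed schema:

```python
# Hypothetical sketch: align LIMS brightness tests with historian process data so
# each lab result can be compared against the process conditions just before it.
import pandas as pd

process = pd.read_csv("historian_process.csv", parse_dates=["timestamp"]).sort_values("timestamp")
lab = pd.read_csv("lims_brightness.csv", parse_dates=["sample_time"]).sort_values("sample_time")

# merge_asof pairs each lab sample with the most recent historian record before it.
combined = pd.merge_asof(
    lab, process,
    left_on="sample_time", right_on="timestamp",
    direction="backward",
)

# Flag out-of-spec brightness results and inspect the matching process conditions
# (column names here are placeholders for whatever tags matter at your site).
out_of_spec = combined[combined["brightness"] < combined["brightness_spec_min"]]
print(out_of_spec[["sample_time", "brightness", "bleach_flow", "steam_pressure"]])
```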

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Integrating CMMS (Computerized Maintenance Management Systems) & ERP Data

Maintenance is a large and necessary part of plant operations. Maintenance records and work order information are often stored in a Computerized Maintenance Management System (CMMS). The CMMS contains comprehensive information that, on its own, cannot be accessed by other departments that may need more detail about specific maintenance work.

By connecting the CMMS to an ERP system, ERP users will have access to more data about the finished product. Users can check to see if there were any maintenance issues around the time of the production. Facilities with lengthy scheduled shutdowns like an oil refinery will need to plan out how much gasoline or other fuel to keep in storage to meet their customer obligations.

integrating CMMS and ERP data in a trend

By integrating maintenance and ERP data we're able to investigate an out-of-spec product run and note that there was a maintenance event that likely caused the issue.

Knowing about shutdowns, both planned and unplanned, allows the user to better plan customer orders and shipping. Anticipating the schedule for planned repairs is also useful for financial planning and forecasting. Users with access to historical work order information can better understand any issues that come up and get a clearer picture of the physical repair and its associated costs and impact. Integrating these systems can prove to be some of the hardest integrations simply because the data types vary so much.

The key to a successful CMMS to ERP integration is getting the necessary leadership on board and having a detailed roadmap and plan with regular team check-ins so that obstacles can be addressed immediately.

Integrating Field Data Capture System & Historian Data

The remote nature of field data capture systems means that this data is often siloed and slow and difficult to access. Field data is just that: data captured in the field, often pieced together from manual entries on paper. Various roles collect this data, and it must be utilized collectively to have any value. Field data types such as temperature, quality, and speed must be entered consistently, and even more so when moved into a historian.

Though often cumbersome to collect, compile, and enter, field data in a historian can be enormously empowering to an engineer. For example, oil wells in the Canadian oil sands can be 50 to 200 miles from the nearest human operator. The more data the operator has about these wells, the less time they spend traveling to check on each well.

Integrating Field data and Historian Data

Integrating field data into a historian provides reliable access to long-term data from previously siloed wells.

Connecting field data to a historian also increases the amount of data an engineer can use during troubleshooting. The data in the field is vital to reducing downtime and managing product quality. Sharing that data with the historian gives the data a broader audience where comparison and analysis can be made, resulting in less downtime and greater productivity.

Looking Forward

When integrating manufacturing data, the overriding theme and result is digital data empowerment. When important plant data can flow seamlessly from one person, system, or department to another, better decisions can be made through better analysis, which ultimately leads to better operations, less downtime, and greater profitability. It is important to understand the full range of data management and connectivity options available and the pros and cons of each. Various brands of each solution are on today's market. Ideally, all sources of plant data can be connected and disseminated effectively for maximum efficiency and profitability.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The Digital Transformation – everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally, we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, certain notable scientists and inventors helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.


Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Deviation analysis is a routine form of troubleshooting performed at process manufacturing facilities around the world. When speed is imperative, a robust deviation detection system, along with a good process for analyzing the resulting data, is essential for solving problems quickly.

A properly configured deviation detection system allows nearly everyone involved in a manufacturing process to collaborate and quickly identify the root causes of unexpected production issues.

In a previous post we wrote about time series anomaly detection methods, and how to set up deviation detection for your process. In this article, we’re going to be focusing on how to actually analyze the data to pinpoint the source of a deviant process.

deviation detection webinar signup

Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Deviation Analysis: Reviewing the Data

So, if you read our other article about anomaly detection methods, we covered setting up deviation detection, including the following steps:

  1. Selecting tags
  2. Filtering downtime
  3. Identifying “good” operating data
  4. Identifying “bad” operating data

The fifth step is to actually analyze the data you’ve just produced, so you can identify where your problem is occurring.

But, before we get into analysis, let’s review the data we’ve produced.

The examples below show the data we’ve produced with dataPARC’s process data analytics software, but the analysis process would be similar if you were doing this in your own custom-built Excel workbook.

Selecting Tags

Here we have the tags we identified. In our case, we were able to just drag over the entire process area from our display graphic and they all ended up in our application here. We could have also added the tags manually or even exported the data from our historian and dumped it into a spreadsheet.

deviation analysis - getting the tags

We pulled data from 363 tags associated with our problematic process.

Good Data

Next, we have our “good” data. The data when our process was running efficiently. You’ll see that the values here are averages over a one-month period.

deviation analysis - good data example

Average data from a month when manufacturing processes were running smoothly.

Bad Data

This is our problem data. Narrowed down to a specific two-day period where we first recognized we had an issue.

deviation analysis - bad data example

Bad doggie! I mean… Bad data. Bad!

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Methods of Deviation Detection

Again, you can refer to our article on anomaly detection methods for more details, but in this next part we’ll be using 4 different methods of analysis to try and pinpoint the problem.

The four deviation detection methods we’ll be using are:

  1. Absolute Change (%Chg) – The simplest form of deviation detection. Comparing a value against the average.
  2. Variability (COVChg) – How much the data varies or how spread out the data is relative to the average.
  3. Standard Deviation (SDChg) – A standard for control charts. Measures how much the data varies over time.
  4. Multi-Parameter (DModX) – Advanced deviation detection metric showing the difference between expected values and real data, to evaluate the overall health of the process. The ranges are often rate-dependent.

In the image below you’ll see the deviation values for each method of calculation. Here red means a positive change, and blue means a negative change.

deviation analysis methods

Our four deviation detection methods. Red is positive change in values. Blue is negative value change.

So, if we’re looking for a trouble spot within our manufacturing process, the first thing we’re going to want to do is start to look at the deviation values.

By sorting by the different detection methods, we can begin to identify some patterns. And, we can really pare down our list of potential culprits. Just an initial sort by deviation values eliminates all but about a dozen of our tags as suspects.

So, let’s look at tags where the majority of the models show high deviation values. That gives us a place to begin troubleshooting.
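If you were doing this step by hand, say in a Python script rather than a dedicated application, the pare-down might look something like the sketch below. The file name, column names, and threshold are all assumptions for illustration:

```python
# Rough sketch of the pare-down step, assuming per-tag deviation scores
# (%Chg, COVChg, SDChg, DModX) have already been exported to a CSV.
import pandas as pd

scores = pd.read_csv("deviation_scores.csv", index_col="tag")  # hypothetical export

# Count how many of the four methods flag each tag as highly deviant.
threshold = 2.0  # hypothetical cutoff; tune for your own data
methods = ["pct_chg", "cov_chg", "sd_chg", "dmodx"]
scores["methods_flagged"] = (scores[methods].abs() > threshold).sum(axis=1)

# Sort so tags flagged by the most methods float to the top of the suspect list.
suspects = scores.sort_values(["methods_flagged"] + methods, ascending=False)
print(suspects.head(12))
```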

Applied Deviation Analysis

For instance, here we have our Cooling Water tag, and in three of the four models we’re seeing that it has a fairly high deviation value. It’s a prime suspect.

deviation analysis - cooling water data

So, let’s analyze that, and take a closer look.

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

Within our deviation detection application we can just select the tag and click the “trend” button to bring up the data trend for the Cooling Water tag.

Looking at the trend, it’s definitely going up, and deviating from the “good” operating conditions. But we also know our process. And we know that the cooling water comes from the river, and we know that the river temperature fluctuates with the seasons. So, we’ll add our River Temp tag to the trend, and sure enough – it looks like it’s just a seasonal change.

cooling water vs river temp image

Pairing our Cooling Water Tmp tag with our River Temp tag. Nope, that’s not it!
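For readers working outside a trending tool, the same sanity check can be approximated in a few lines of Python. The tag names and file are hypothetical; the idea is simply to confirm that the suspect tag tracks a known seasonal driver:

```python
# Hypothetical sketch: check whether the Cooling Water deviation is explained by
# a known seasonal driver (river temperature) before blaming the process.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("cooling_water_vs_river.csv", parse_dates=["timestamp"], index_col="timestamp")

# Resample to daily averages to smooth out noise, then correlate the two tags.
daily = df[["cooling_water_temp", "river_temp"]].resample("D").mean()
print("Correlation:", daily["cooling_water_temp"].corr(daily["river_temp"]).round(2))

# A quick overlay plot makes the seasonal relationship obvious.
daily.plot(title="Cooling Water Temp vs. River Temp")
plt.show()
```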

So, the Cooling Water isn’t our culprit. What can we look into next? This 6X dT tag looks like a problem, with multiple indications of high variation. This represents the temperature change across the sixth section of the extraction train.

deviation analysis - looking at the 6xt data

This looks like the source of our problem.

It’s likely that this is going to be our problem tag. Putting our heads together with the rest of the team, we can pretty quickly get anecdotal evidence to either confirm or deny that, say, maintenance was performed in this part of the process recently. If it’s still unclear, we can pull it up on a trend, like we did with our Cooling Water tag, and see if we are indeed seeing some erratic behavior with the values from this tag.

Looking Ahead

Really, this is routine troubleshooting that is done daily at process facilities around the world. But, when speed is imperative, and you need a quick answer for management when they’re asking why their machine is down or the product quality is out-of-spec, having a robust deviation detection system in place, and a good process for analyzing the resulting data, can really help make things clear quickly.

deviation detection webinar signup

Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

One of the problems in process manufacturing is that processes tend to drift over time. When they do, we encounter production issues. Immediately, management wants to know, “what’s changed, and how do we fix it?” Anomaly detection systems can help us provide some quick answers.

When a manufacturing process deviates from its expected range, there are several problems that arise. The plant experiences production issues, quality issues, environmental issues, cost issues, or safety issues.

One or more of these issues will present itself, and the question from management is always, “what changed?” Of course, they’d really like to know exactly what to do to go and fix it, but fundamentally, we need to know what changed to put us in this situation.

Usually the culprit is either the physical equipment – maybe maintenance that’s been performed recently that threw things off – or it’s in the way we’re operating the equipment.

From a process engineer or a process operator’s perspective, we need to quickly identify what changed. We’re possibly in a situation where the plant is losing money every minute we’re operating like this, so operators, engineers, supervisors… everyone is under pressure to fix the problem as soon as possible.

In order to do this, we need to understand how the value has changed, and the frequency of those changes. Or rather, how big are the swings and how often are they occurring?

deviation detection webinar signup

Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Time Series Anomaly Detection Methods

Let’s begin by looking at some time series anomaly detection (or deviation detection) methods that are commonly used to troubleshoot and identify process issues in plants around the world.

Absolute Change

time series anomaly detection - absolute change

This is the simplest form of deviation detection. For Absolute Change, we get a baseline average where things are running well, and when we’re down the road, sometime in the future, and things aren’t running so hot, we look back and see how much things have changed from the average.

Absolute change is used to see if there was a shift in the process that has made the operating conditions less than ideal. This is commonly used as a first pass when troubleshooting issues at process facilities.
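Here is a minimal sketch of absolute change in Python, assuming the "good" and "bad" periods have been exported to CSV files with one column per tag (file and column names are placeholders):

```python
# Minimal sketch of Absolute Change between a baseline (good) period and a bad period.
import pandas as pd

good = pd.read_csv("good_period.csv", index_col="timestamp", parse_dates=True)
bad = pd.read_csv("bad_period.csv", index_col="timestamp", parse_dates=True)

baseline = good.mean()                          # average of each tag while running well
pct_chg = (bad.mean() - baseline) / baseline * 100

# The tags with the largest absolute shifts from baseline are the first suspects.
print(pct_chg.abs().sort_values(ascending=False).head(10))
```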

Variability

time series anomaly detection - variability

Here we want to know if the variability has changed in some way. In this case, we'll show the COV change between a good period and a bad period. COV (coefficient of variation) is basically a way to normalize variation by the average value, so high-value tags don't automatically show more variation than low-value tags simply because their numbers are bigger.

Variability charts are commonly used to identify less consistent operating conditions and perhaps more variations in quality, energy usage, etc.
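A similarly minimal sketch of the variability check, again using hypothetical good/bad period exports. COV here is just the standard deviation divided by the mean:

```python
# Sketch of the Variability check: change in coefficient of variation (std / mean)
# between the good period and the bad period.
import pandas as pd

good = pd.read_csv("good_period.csv", index_col="timestamp", parse_dates=True)
bad = pd.read_csv("bad_period.csv", index_col="timestamp", parse_dates=True)

cov_chg = (bad.std() / bad.mean()) - (good.std() / good.mean())

# Tags whose variability jumped the most point to less consistent operation.
print(cov_chg.sort_values(ascending=False).head(10))
```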

Standard Deviations

time series anomaly detection - standard deviation

Anyone who’s done control charts in the past 30 years will be familiar with standard deviations. Here we take a period of data, get the average, calculate the standard deviation, and put limits up (+/- 3 standard deviations is pretty typical). Then, you evaluate where you’re out based on that.

Standard deviation is probably the most common way to identify how well the process is being controlled, and is used to define the operating limits.
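Here's what a simple +/- 3 standard deviation check might look like for a single (hypothetical) tag, with the limits derived from the good period:

```python
# Sketch of a +/- 3 standard deviation check for one tag, limits taken from the
# good period and applied to the bad period (tag and file names are made up).
import pandas as pd

good = pd.read_csv("good_period.csv", index_col="timestamp", parse_dates=True)
bad = pd.read_csv("bad_period.csv", index_col="timestamp", parse_dates=True)

tag = "reactor_temp"                      # hypothetical tag name
center = good[tag].mean()
sigma = good[tag].std()
upper, lower = center + 3 * sigma, center - 3 * sigma

# Fraction of the bad period spent outside the control limits.
outside = ((bad[tag] > upper) | (bad[tag] < lower)).mean()
print(f"{tag}: {outside:.1%} of the bad period outside {lower:.1f}..{upper:.1f}")
```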

Multi-Parameter

time series anomaly detection - multi-parameter

This is a more advanced method of deviation detection that we at dataPARC refer to as PCA Modelling. Here we take all the variables, put them together, and model them against each other to narrow the range. Instead of flat ranges, the resulting limits are often rate-dependent.

The benefit of PCA Modelling over the other anomaly detection methods, is that it gives us the ability to narrow the window and get an operating range that is specific to the rate and other current operating conditions.
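For completeness, here is a generic sketch in the spirit of multi-parameter modelling, not dataPARC's DModX implementation itself: fit a PCA model on the good period and score new data by its reconstruction error. The component count, file names, and columns are assumptions:

```python
# Generic multi-parameter sketch: fit PCA on the good period, then score data
# by how poorly the model reconstructs it (a rough stand-in for a DModX-style metric).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

good = pd.read_csv("good_period.csv", index_col="timestamp", parse_dates=True)
bad = pd.read_csv("bad_period.csv", index_col="timestamp", parse_dates=True)

scaler = StandardScaler().fit(good)
pca = PCA(n_components=5).fit(scaler.transform(good))   # component count is arbitrary here

def residual(frame):
    z = scaler.transform(frame)
    recon = pca.inverse_transform(pca.transform(z))
    return np.sqrt(((z - recon) ** 2).mean(axis=1))     # per-row reconstruction error

print("typical good residual:", residual(good).mean().round(3))
print("typical bad residual:", residual(bad).mean().round(3))
```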

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Setting up Anomaly Detection

Now that we have a basic understanding of some methods for detecting anomalies in our manufacturing process, we can begin setting up our detection system. The steps below outline the process we usually take when setting anomaly detection up for our customers, and we typically advise them to take a similar approach when doing it themselves.

1. Select Your Tags

Simple enough. For any particular process area you’re going to have at least a handful of tags that you’re going to want to review to see if you can spot the problem. Find them, and, using your favorite time series data trending application (if you have one), or Excel (if you don’t), gather a fairly large set of data. Maybe a month or so.

At dataPARC, we’ve been performing time series anomaly detection for customers for years, so we actually built a deviation detection application to simplify a lot of these routine steps.

For instance, if we want, we can grab an entire process unit from a display graphic and drag it into our app without having to take the time to hunt for the individual tags themselves. Pretty cool, right?

If we just pull up the process graphic for this part of the plant…

…we can quickly compile all the tags we want to review.

2. Filter out Downtime

This is a CRITICAL step, and should be applied before you even identify your good and bad periods. In order to accurately detect anomalies in your process data, you need to make sure to filter out any downs you may have had at your plant that will skew your numbers.

anomaly detection - filter downtime

Downtime.

dataPARC’s PARCview application allows you to define thresholds to automatically identify and filter out downtime, so if you’re using a process analytics toolkit like PARCview, that’ll save you some time. If your analytics tools or your historian doesn’t have this capability, you can also just filter out the downs by hand in Excel. Regardless of how you do it, it’s a critical step.
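If you're doing it by hand, a threshold-based downtime filter might look like the sketch below; the rate tag name and cutoff are made up for illustration:

```python
# Hand-rolled downtime filter, assuming a production-rate tag exists in the export.
import pandas as pd

df = pd.read_csv("raw_month_of_data.csv", index_col="timestamp", parse_dates=True)

RATE_TAG = "production_rate"   # hypothetical tag
MIN_RATE = 50.0                # below this, the line is considered down

running = df[df[RATE_TAG] >= MIN_RATE]
print(f"kept {len(running)} of {len(df)} rows after removing downtime")
running.to_csv("filtered_month_of_data.csv")
```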

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

3. Identify Good Period

Now you’re going to want to review your data. Look back over the month or so of data you pulled and identify a period of time that everyone agrees the process was running “good”. This could be a week, two weeks… whatever makes sense for your process.

anomaly detection - good time series data

Things are running well here.

4. Identify Bad Period

Now that we have the base built, we need to find our "bad" period, whether that means waiting for a bad period to occur or proactively looking for bad periods as time goes on.

anomaly detection - bad time series data

Here we’re having some trouble.
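If you're working in a script rather than a dedicated tool, carving out the two windows might look something like this; the date ranges and file names are placeholders:

```python
# Sketch of slicing the agreed-upon good and bad windows from the filtered data.
import pandas as pd

df = pd.read_csv("filtered_month_of_data.csv", index_col="timestamp", parse_dates=True)

good = df.loc["2024-03-01":"2024-03-14"]   # period everyone agrees ran well
bad = df.loc["2024-03-27":"2024-03-29"]    # the days where trouble showed up

good.to_csv("good_period.csv")
bad.to_csv("bad_period.csv")
```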

5. Analyze the Data

Yes, it's important to understand the different anomaly detection methods, and yes, we've discussed the steps we need to take to build our very own time series anomaly detection system, but perhaps the most critical part of this whole process is analyzing the data after we've become aware of the deviations. This is how we pinpoint which tags – which part of our process – are giving us problems.

Deviation Analysis is a pretty big topic that we’ve covered extensively in another post.

Looking Ahead

Anomaly detection systems are great for quickly identifying key process changes, and really the system should be available to people at nearly every level of your operation. For effective troubleshooting and analysis, everyone from the operator to the process engineer, maintenance, and management needs visibility into this data and the ability to provide input.

Properly configured, you should be able to identify roughly what your problem is, within 5 tags of the problem, in 5 minutes.

So, when management asks “what’s changed, and how do we fix it?”, just tell them to give you 5 minutes.

deviation detection webinar signup

Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast