
IT/OT convergence and its attendant buzzwords are everywhere. If you are in manufacturing, chances are you have seen the ads and read articles like this one on the necessity of keeping up with an ever-changing digital landscape.

In a plant, chances are you have many sources of IT and OT data that contribute to seamless operations. Whether yours is an oil & gas, food, chemical, or mineral process, on the ideal days your operation runs like a well-oiled machine. That said, how easily and consistently can the entire plant or enterprise operate efficiently, solve problems, and reduce bottlenecks?

Accurate, fast data access and intuitive data visualization and analytics are needed to make decisions that affect production on a 24/7 basis. Resources are also at risk when changes are made, even changes for the better: ripping and replacing data systems and equipment is expensive and involves a lot of risk.

No matter what the source, important data needs to get to the right people at the right time, so where does one start to make small, gradual IT/OT changes?

Integrating IT & OT data at your plant? Let our Digital Transformation Roadmap guide your way.


Utilize Tools You Already Have While Adding New Tools That Drive Change

Firstly, you can leverage the data assets you already have. Risk is reduced considerably when you can add products that enhance the data sources you already use. For example, if you have a data historian that works well for your needs, you can add a data visualization and analytics solution on top of it and connect other sources that were previously siloed. Not replacing the historian saves a headache and allows important production to continue without interruption. Using what you already have means saving valuable financial resources for other needs, a faster ROI, and reduced risk.

How Energy Transfer Empowered Their Operations

An example of a company that utilized current tools and enhanced their operations with new ones is Energy Transfer, a midstream energy company headquartered in Houston, Texas. Energy Transfer had an enormous amount of data spread across 477 sites and 90,000 miles of pipeline. The company had acquired assets over the years, resulting in multiple vendor data systems across the enterprise, with no good way to share data between them. Engineers and management alike needed data from all sites but couldn't make operational decisions before the window of opportunity had passed. They also knew future acquisitions might bring yet more data systems, so flexibility was critical. Energy Transfer purchased a reliable data visualization and analytics solution and was then able to combine data from all sites in a single program.

Not only was the data easily available, but the tools let it be tailored to the way everyone, from operator to engineer to management, wanted to see it. Sites could customize and access the data they specifically needed, while corporate headquarters could do the same with the data they cared about. By joining their OT and IT worlds in a future-proof manner, Energy Transfer improved efficiency and saved time and money, to the tune of $10M in annualized savings.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Start Small and Involve Your Team in the Integration of IT/OT Data

Another step you can take is to get your people on the same page about digitizing as many processes and as much data as you can at your plant. This means small steps such as sharing LIMS data and process data digitally rather than manually, eliminating time-consuming methods such as handwritten reports and hand-entered Excel spreadsheets. Manual processes create delays and perpetuate data silos, because the data is only available to those who receive it through arduous, dated channels. A reliable data visualization and analytics platform can help you share that data seamlessly. The small things add up to the big things in IT/OT convergence, and getting the team into a digital mindset is an important step.
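To make the idea concrete, here is a minimal sketch (illustrative tag names and made-up values, not any particular vendor's API) of joining siloed LIMS lab results to historian process data by nearest timestamp, the kind of merge that replaces hand-built spreadsheets:

```python
from bisect import bisect_left

# Illustrative data only: (timestamp in seconds, value) pairs.
process = [(0, 70.1), (60, 70.8), (120, 71.5)]   # historian temperature
lims = [(65, 0.93), (118, 0.91)]                 # lab purity results

def nearest_process_value(t, series):
    """Find the process value recorded closest in time to t."""
    times = [point[0] for point in series]
    i = bisect_left(times, t)
    candidates = series[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda point: abs(point[0] - t))[1]

# One digital record per lab sample: (time, purity, temperature then).
merged = [(t, purity, nearest_process_value(t, process)) for t, purity in lims]
```

A real system would pull both series from live connectors, but the join itself is this simple once the data is digital.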

Starting with a team to talk about IT/OT convergence is a wise first step. Formulating a plan with goals, milestones, and deadlines makes the process more bite-sized and manageable. Defining roles, assigning tasks, and meeting regularly about progress are effective ways of incorporating IT/OT data into your manufacturing process and reporting.

W.R. Grace Connected Their Data Sources Enterprise-Wide

An example of a company that started small and then widely adopted IT/OT convergence is W.R. Grace, a $2 billion specialty chemical company headquartered in Columbia, Maryland. Grace manufactures a wide array of chemicals, is the world's leading supplier of FCC catalysts and additives, operates manufacturing locations on three continents, and sells to over 60 countries. Grace's IT and OT worlds were very disconnected. The company had robust OT and IT data but was limited in how it could use that data. Most Grace staff had to interact with process data through the operator control system because the existing historian tools were ineffective, slow, and unintuitive, resulting in little buy-in, especially from operations staff.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Grace implemented a data visualization solution across several of its chemical facilities, including its largest, Curtis Bay, with over 600 employees. Grace Curtis Bay unified its key sources of data with a robust data visualization and analytics product. Intuitive, fast, and effective dashboards united operations and information technologies, increasing process management effectiveness, decreasing downtime, and increasing profitability. Because the dashboards were easy to use, they were adopted even by staff who had been hesitant to engage with the previous system. Grace saw an ROI from the data visualization software in less than six months. The visualization and analytics tools also allowed Grace to make predictive, proactive changes in their operations, resulting in increased first-pass quality.

If you lack high-performing data visualization and analytics across your plant or enterprise, achieving IT/OT convergence in manufacturing is relatively simple when you employ the right resources and have a plan that allows for gradual changes that add up to significant results. Small changes, such as connecting existing systems, adopting effective software tools, and building a mindset of proactive digital awareness in your team, can significantly impact the overall success of your operations. Everyone, from the operator to the engineer all the way to corporate management, will experience the newfound ease in daily tasks and reporting that effective IT/OT convergence brings. Small changes become major impacts when completed strategically, as part of an effective plan for positive digital progress.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.



Now that PI ProcessBook is headed toward extinction, many engineers & operators in the process industries are faced with the challenge of replacing the tools they’ve depended on for years. But how to go about replacing critical trends and displays? In this article we’ll give you some tips for evaluating ProcessBook alternatives, and present some of the best ProcessBook alternatives available today.

Looking to replace ProcessBook? Learn how you can export your existing displays without issue.


Why You Need to Replace ProcessBook

ProcessBook End of Life

In late 2020, OSIsoft announced the retirement of ProcessBook, PI’s venerable data visualization toolkit that debuted all the way back in 1994.

Because of its tight integration with PI’s industry-leading historian, ProcessBook was widely adopted by process engineers to build trends, dashboards, and process graphics from their PI Server time-series data.

Over the years, as competitors have popped up with newer, more powerful analytics tools, many engineers have stuck with ProcessBook, often simply because of their familiarity with the toolkit, or because of the difficulty of migrating critical graphics and displays to a new platform.

OSIsoft’s announcement will likely force their hands.

While current users will continue to be able to use ProcessBook indefinitely, by discontinuing support, OSIsoft is clearly encouraging customers to make plans to transition to alternative platforms. Security updates for ProcessBook are scheduled to end in 2022, and support for the platform will end entirely in December of 2024.

So, if you’ve been on the fence about replacing ProcessBook at your facility, now’s the time to begin looking into some alternatives.

How to Evaluate ProcessBook Alternatives

The industrial analytics marketplace has become quite crowded. There are dozens, maybe hundreds of companies out there making analytics products for everything from broad industrial applications to niche manufacturing processes. So, where do you start in your search for an alternative to ProcessBook? What factors should you consider to determine which of these solutions is the best fit for you? At a high level, you should be thinking about the following:

1. Ease of Integration

Chances are, if you're using ProcessBook, you're also using PI Server as a data historian. So the most critical question you'll have to answer is: how well does the alternative integrate with PI Server? The answer will mean the difference between a quick, 10-minute connection to PI Server and a costly, time-intensive investment in custom development and new OT infrastructure.
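For a sense of what a "quick connection" can look like, here is a hedged sketch that builds request URLs for OSIsoft's PI Web API, the REST interface many third-party tools use to reach PI Server. The host name is hypothetical, and a real deployment also needs authentication:

```python
import urllib.parse

# Hypothetical PI Web API host -- substitute your own server's address.
BASE = "https://my-pi-server/piwebapi"

def point_lookup_url(pi_server: str, tag: str) -> str:
    """Build the URL that resolves a tag path to the point's WebId."""
    path = f"\\\\{pi_server}\\{tag}"  # e.g. \\PISRV01\Sinusoid
    return f"{BASE}/points?path={urllib.parse.quote(path)}"

def recorded_values_url(web_id: str, start: str, end: str) -> str:
    """Build the URL for a stream's recorded (archived) values."""
    return f"{BASE}/streams/{web_id}/recorded?startTime={start}&endTime={end}"
```

Two HTTP GETs against URLs like these (tag lookup, then recorded values) are the whole integration in the "10-minute" case; the costly case is when no such interface exists and custom connectors must be written.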

Migration is also a key consideration. Your organization has invested much time and money into building the ProcessBook displays that you depend on to run your operations efficiently. Some ProcessBook alternatives provide tools to simply migrate your existing ProcessBook displays to your new platform. Others… don’t.

2. Diagnostic Analytics Capabilities

Diagnostic, or “exploratory” analytics tools are used for root cause investigation and troubleshooting of downtime events or product quality issues.

Trends and trending capabilities are at the core of diagnostic analytics. Effective root cause analysis depends on rapid ad-hoc analysis and the ability to quickly overlay historical data from various process areas to determine correlation and causation.

Trending is likely to be one of the top two capabilities ProcessBook users are looking to address with a replacement system.

In addition to trends, diagnostic analytics are often supported by other visualization tools, such as histograms, X/Y charts, and Pareto charts. When presented with difficult process questions, the more ways you can slice and dice your data the easier it will be to arrive at the correct answer.
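As a concrete illustration of the "slice and dice" idea, this small self-contained sketch (with made-up numbers) computes a Pearson correlation between two tags, the math behind a trend overlay comparison, and a Pareto ranking of downtime causes:

```python
from collections import Counter
from math import sqrt

# Made-up example data standing in for two historian tags.
temperature = [71.2, 73.5, 74.1, 76.8, 78.0, 79.4]
reject_rate = [0.8, 1.1, 1.3, 1.9, 2.4, 2.7]

def pearson(xs, ys):
    """Pearson correlation coefficient: does one tag track the other?"""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

r = pearson(temperature, reject_rate)  # near 1.0 here: a strong lead

# Pareto ranking: which downtime causes make up the "vital few"?
events = ["jam", "jam", "sensor", "jam", "changeover", "sensor", "jam"]
pareto = Counter(events).most_common()
```

Correlation alone never proves causation, of course; it only tells the engineer which overlay deserves a closer look on the trend.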

3. Operations Management Capabilities

“Operations Management” is broadly defined here as the capabilities that allow for:

  • Production tracking
  • Process monitoring
  • OEE tracking
  • Quality management
  • Process Alarms & Notifications
  • Reporting
  • Manual Data Entry

That’s a lot of functionality, and most of it comes from dashboarding and process graphics-building tools that leverage process data for real-time monitoring. Basic analytics solutions typically only allow for monitoring at the site level, but more sophisticated offerings allow enterprise-wide tracking of production KPIs across multiple sites.

ProcessBook users have probably gotten the most mileage out of the platform's dynamic, interactive graphics, and it's not uncommon for displays built with ProcessBook to see a decade or more of continued use. When looking to replace ProcessBook's operations management capabilities, you have a few options. You could look for point solutions designed for a single capability, like SSRS for reporting. You could find a highly customizable product with coding capabilities like ProcessBook had. Or you could find a broad solution with the building blocks necessary to solve multiple business needs.

4. Advanced Analytics Capabilities

Advanced analytics is another loaded term that we’ll define here for the purpose of this post. Often used in relation to leading-edge manufacturing concepts, like machine learning and industrial AI, advanced analytics in ProcessBook replacement tools will typically take two forms: predictive analytics and prescriptive analytics.

Predictive analytics tools promise to prevent downtime and improve OEE by building models from recorded data to anticipate and alert users to potential productivity loss. Prescriptive analytics take the next logical step and tell you which actions need to be taken to address predicted production issues.

Together, and in conjunction with process automation tools, predictive and prescriptive analytics form a sort of elementary Artificial Intelligence used to maximize plant performance.
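A predictive-analytics "model" can be as simple as a trend line fitted to recorded data. The sketch below (hypothetical vibration readings and an assumed alarm threshold, not any vendor's algorithm) fits a least-squares line and extrapolates to estimate when the alarm will trip:

```python
# Hypothetical bearing-vibration history pulled from a historian:
hours = [0, 24, 48, 72, 96, 120]            # time since last overhaul
vibration = [2.0, 2.3, 2.7, 3.1, 3.4, 3.8]  # mm/s RMS

# Fit a least-squares trend line: the simplest model built from
# recorded data.
n = len(hours)
mx, my = sum(hours) / n, sum(vibration) / n
slope = sum((x - mx) * (y - my) for x, y in zip(hours, vibration)) / \
        sum((x - mx) ** 2 for x in hours)
intercept = my - slope * mx

ALARM_LEVEL = 6.0  # mm/s -- assumed maintenance threshold

# The predictive step: extrapolate to estimate when the alarm trips.
hours_to_alarm = (ALARM_LEVEL - intercept) / slope
```

Commercial tools replace the straight line with far richer models, but the pattern is the same: learn from history, then alert before the threshold is reached. The prescriptive step would attach a recommended action (here, schedule lubrication or bearing replacement) to the prediction.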

5. Cost/Pricing

Although it'd certainly be nice if it weren't the case, cost will likely be a factor as you evaluate ProcessBook alternatives. Pricing for these solutions is usually determined by features and the scope of implementation, and most providers don't publicly list their pricing, so providing even ballpark figures is difficult. However, there's one key factor you should be aware of when evaluating pricing.

The pricing model

Pricing models vary between process manufacturing analytics providers, ranging from flat-rate to usage-based, tiered, and per-user pricing. These days, many manufacturing analytics solutions use per-user pricing, with the licensing cost rising with the number of individuals using the tools at a facility. The upside of per-user pricing is that for small facilities, or for organizations with few people monitoring and analyzing process data, it can be relatively cost-effective. The flipside, obviously, is that for data-driven companies who believe in giving every operator, engineer, and SME the ability to contribute to improving plant performance, per-user pricing can get very expensive very fast.
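The per-user vs. flat-rate tradeoff is simple arithmetic. With purely illustrative figures (not any vendor's actual prices), the break-even point looks like this:

```python
# Illustrative figures only -- real vendor pricing varies widely and is
# rarely public.
PER_USER_ANNUAL = 1_000    # assumed cost per named user per year
FLAT_SITE_ANNUAL = 25_000  # assumed unlimited-user site license per year

def cheaper_model(user_count: int) -> str:
    """Which pricing model wins for a given number of users?"""
    per_user_total = user_count * PER_USER_ANNUAL
    return "per-user" if per_user_total < FLAT_SITE_ANNUAL else "flat"

break_even = FLAT_SITE_ANNUAL // PER_USER_ANNUAL  # 25 users at these rates
```

At these assumed rates, a ten-seat control room comes out ahead on per-user licensing, while a plant that wants every operator and engineer in the tool crosses the break-even point quickly.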

Seamlessly integrate with PI Server for real-time process monitoring and rapid root cause analysis.


The Best ProcessBook Alternatives

dataPARC’s PARCview

A real-time data analysis and visualization toolkit developed by the end user, for the end user, dataPARC’s PARCview application has long co-existed with OSIsoft’s PI Server in process facilities around the world. In fact, over 50% of all dataPARC installations are built on top of PI historians.

If you value ProcessBook primarily for its trending and interactive process graphics, dataPARC is a superb alternative, featuring diagnostic analytics and operations management capabilities that are significant upgrades from what you’ve been accustomed to with ProcessBook.

Ease of Integration

dataPARC's integration with PI Server is extremely simple: native integration lets users connect and begin visualizing PI historian data in a matter of minutes. Because it utilizes the latest PI SDK technology and other performance-focused features, most ProcessBook users will also see improved performance.

Likewise, dataPARC’s ProcessBook conversion utility allows users to bulk import their existing ProcessBook displays without losing any functionality.

Diagnostic Analytics Capabilities

Widely considered the best time-series trending application available for analyzing process data, dataPARC’s greatest strength is in its ability to connect and analyze data from various sources within a facility.

In addition to its powerful real-time trending toolkit, dataPARC is loaded with features that support root-cause analysis and freeform process data exploration, including:

  • Histograms
  • X-Y plots
  • Pareto charts
  • 5-why analysis tool
  • Excel plugin

dataPARC is also likely the fastest ProcessBook alternative you'll find on this list. dataPARC's Performance Data Engine (PDE) provides access to both aggregated and lossless datasets, allowing the best of both worlds: super-fast access to long-term datasets and extremely high-resolution short-term data.
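The aggregated-plus-lossless idea can be sketched in a few lines. This illustrates the general technique (bucketed min/max/avg summaries), not dataPARC's actual engine:

```python
# General technique only: summarize raw samples into fixed-size buckets,
# keeping (min, max, avg) so long trends load fast without hiding spikes.
def downsample(samples, bucket_size):
    out = []
    for i in range(0, len(samples), bucket_size):
        bucket = samples[i:i + bucket_size]
        out.append((min(bucket), max(bucket), sum(bucket) / len(bucket)))
    return out

raw = [10, 12, 11, 45, 13, 12, 11, 10]  # note the spike at index 3
summary = downsample(raw, 4)            # two buckets of four samples
# The spike survives in the first bucket's max even after aggregation.
```

Serving the summary for long time ranges and the raw samples for short ones is what makes a trend feel fast at both zoom levels.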

Operations Management Capabilities

dataPARC offers a complete set of tools for operations management. dataPARC's display design tool offers the ability to create custom KPI dashboards and real-time process displays using pre-built pumps, tanks, and other industry-standard objects. dataPARC even allows you to import existing graphics or entire ProcessBook displays.

All the standard reporting features are included here, along with smart notifications that can be configured to trigger email or text alerts for downtime events or other process excursions. dataPARC’s Centerline tool is one of the platform’s most powerful features, providing operators with an intuitive multivariate control chart with early fault detection and process deviation warnings, so operators can eliminate quality or equipment issues before they occur.
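The core idea behind a control chart with deviation warnings can be sketched with classic mean ± 3σ limits. dataPARC's Centerline is certainly more sophisticated than this, but the principle looks like:

```python
from statistics import mean, stdev

def control_limits(history, k=3.0):
    """Classic mean +/- k*sigma limits computed from in-control history."""
    m, s = mean(history), stdev(history)
    return m - k * s, m + k * s

def deviations(latest, baselines):
    """Names of variables whose latest value falls outside its limits."""
    flags = []
    for tag, value in latest.items():
        lo, hi = control_limits(baselines[tag])
        if not lo <= value <= hi:
            flags.append(tag)
    return flags

# Made-up baseline history and a current snapshot for two variables.
baselines = {"pressure": [100, 101, 99, 100, 102, 98, 100, 101],
             "flow": [50, 51, 49, 50, 50, 51, 49, 50]}
latest = {"pressure": 100.5, "flow": 58.0}
flags = deviations(latest, baselines)  # flow is out of control here
```

Checking every monitored variable against limits learned from good operation is what lets a control chart warn on a deviation before it becomes a quality or equipment problem.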

Additional operations management capabilities offered by dataPARC include a robust module for data entry (manual or electronic), notifications, an advanced calculation engine, and a task scheduling engine.

Advanced Analytics Capabilities

dataPARC doesn’t make claims to artificial intelligence or machine learning, but the platform provides a solid interface for advanced analytics, offering a data modeling module that uses PLS & PCA to power predictive analytics for maintenance, operations & quality applications.
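To illustrate the kind of math behind a PCA-based module, here is a self-contained sketch that finds the variance explained by the first principal component of a toy two-variable dataset, using the closed form for a 2x2 covariance matrix. This is an illustration of PCA itself, not dataPARC's implementation:

```python
from math import sqrt

# Toy two-variable dataset (say, a temperature and a quality measure).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1]

n = len(x) - 1  # sample (co)variance denominator
mx, my = sum(x) / len(x), sum(y) / len(y)
a = sum((xi - mx) ** 2 for xi in x) / n                     # var(x)
c = sum((yi - my) ** 2 for yi in y) / n                     # var(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n  # cov(x, y)

# Leading eigenvalue of the 2x2 covariance matrix: the variance captured
# by the first principal component.
lam = (a + c) / 2 + sqrt(((a - c) / 2) ** 2 + b ** 2)
explained = lam / (a + c)  # fraction of total variance explained
```

When one component explains nearly all the variance, as here, the two variables move together, which is exactly the kind of structure a PLS/PCA module exploits to predict quality from process measurements.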


Pricing

dataPARC provides an unlimited user license for PARCview, which makes it a good fit for organizations wishing to get production data in front of decision-makers at every level of the plant.

Evaluate the top alternatives to ProcessBook & PI Vision in our PI Server Data Visualization Tools Buyer's Guide.


PI Vision

OSIsoft's ProcessBook successor, PI Vision is branded as the "fastest, easiest way to visualize PI Server data." PI Vision is a web-based application that runs in the browser, which can be a significant change for ProcessBook users accustomed to a locally installed desktop app.

Ease of Integration

PI Vision likely offers the most straightforward integration with PI Server, as it’s part of OSIsoft’s PI System.

Migration of existing ProcessBook screens to PI Vision is supported by OSIsoft's PI ProcessBook to PI Vision migration utility; however, many users have reported difficulty retaining the full functionality of custom displays and graphics after moving them into PI Vision.

Diagnostic Analytics Capabilities

Many users report that PI Vision’s trending tools provide less firepower than ProcessBook and competitors for root cause analysis and ad-hoc diagnostics, but it is perfectly capable of performing the basic trending functions of plotting time-series and other data against time on a graph.

OSIsoft also offers their PI DataLink Excel plugin, which is often used for more advanced diagnostic analytics efforts.

Operations Management Capabilities

Process displays are the heart and soul of the PI System, and if you're moving from ProcessBook you'll likely feel at home working in PI Vision.

Although it's not a 100% feature-for-feature replacement of ProcessBook (the SQC module, for instance, isn't available in PI Vision), some of the ProcessBook features lacking in early versions of PI Vision are being added via periodic updates. In addition, there are some new capabilities in PI Vision that don't exist in ProcessBook.

PI Vision displays use HTML5 and are integrated with PI's Asset Framework (AF), which makes display building fairly intuitive. Basic reporting is available as well, but as with much of the PI system, the data must be extracted to Excel via PI DataLink.

Advanced Analytics Capabilities

PI Vision doesn’t include any built-in data modeling tools or other advanced analytics components. PI Server data can be brought into 3rd party analytics apps via PI Integrator for advanced analysis.


Pricing

PI Vision uses a per-user pricing model, which is great for small organizations with only a few people accessing the platform. For larger manufacturers, or enterprise implementations with teams of operators, process engineers, and data scientists accessing the product, PI Vision can become quite expensive.

Proficy CSense

GE acquired CSense back in 2011 to provide better data visualization and analytics tools for use with their own Proficy Historian. CSense is billed as industrial analytics software that improves asset and process performance. Trending and diagnostic analytics is the focus here, with less emphasis on robust process displays.

Ease of Integration

CSense is optimized for integration with Proficy, GE's own time-series data historian. An OSIsoft PI OLEDB provider is required to integrate with OSIsoft's PI Server, and this extra layer may mean reduced performance compared with the native PI integration some users are accustomed to.

Diagnostic Analytics Capabilities

CSense’s trending and diagnostic capabilities likely exceed what you’ve experienced with ProcessBook.

Dedicated modules for troubleshooting (CSense Troubleshooter) and process optimization (CSense Architect) provide modern trends, charts, and other visualization tools to analyze continuous, discrete, or batch process performance.

Operations Management Capabilities

GE takes a modular approach to their data visualization products. Much of the operations management functionality provided in a single product like ProcessBook is spread over many separate products within the Proficy suite.

There’s a fair amount of overlap in these products, but somewhere among GE’s iFIX, CIMPLICITY, Proficy Operations Hub, Proficy Plant Applications, and Proficy Workflow products you’ll find operations management capabilities that greatly exceed those of ProcessBook.

Advanced Analytics Capabilities

CSense marketing materials are filled with mentions of industrial advanced analytics, digital twins, machine learning, and predictive analytics.

Like PARCview’s approach, this advanced functionality is enabled by the insights mined from the powerful data visualization tools of the core platform, though models can be developed in CSense to help predict product quality and asset failure.


Pricing

CSense licensing is offered in three editions (Runtime, Developer, and Troubleshooter), each with a different combination of components and data connectors.

Canary Axiom

Like CSense and PI Vision, Canary’s Axiom was designed as the visualization component of a larger system. Axiom was built to support analysis of data stored in Canary Historian.

Ease of Integration

Canary’s Data Collectors can connect to process data sources via OPC DA and OPC UA, but they don’t have a dedicated module to connect to PI Server like some of the other options on this list.

Diagnostic Analytics Capabilities

Axiom is a browser-based trending application that, while not as powerful as some other ProcessBook replacements, is easy to use and capably performs basic diagnostic analysis. It lacks the more powerful features of products like dataPARC and CSense, but Canary does provide an Excel add-in for more advanced analysis.

If you were perfectly fine with the trending and diagnostic capabilities of ProcessBook, you’ll likely be satisfied with what Axiom provides.

Operations Management Capabilities

Axiom offers dashboarding and reporting capabilities alongside their trending tools, but ProcessBook users may be disappointed by the lack of emphasis on display building provided here. Simply put, if you’re looking to replace your dynamic, interactive ProcessBook displays, you’ll want to look elsewhere.

Canary also lacks support for migrating existing ProcessBook displays, which is a feature that both dataPARC and PI Vision have.

Advanced Analytics Capabilities

Canary avoids making flimsy claims to current buzzwords like Industrial AI or Machine Learning. Axiom’s focus is on nuts-and-bolts trending & analysis of time-series data, though their Excel Add-in does certainly open the door to more advanced analytics applications.


Pricing

Canary helpfully posts its pricing on its website: Axiom fetches a per-client fee in the form of both a one-time and a monthly charge. This pricing assumes you'll be using the Canary Historian, so the information on the website won't be much help if you're looking to connect Axiom to PI Server data only.



TrendMiner

Offering “self-service industrial analytics”, TrendMiner provides a complete suite of web-based time-series data analysis tools.

Ease of Integration

Like dataPARC, TrendMiner offers native integration with PI Server. You can expect connecting to your PI historian to be very easy, although TrendMiner doesn’t appear to support the transfer of existing ProcessBook displays.

Diagnostic Analytics Capabilities

With a name like TrendMiner, you’d assume that trending and diagnostic analysis is key for the Software AG brand. And you’d be correct – tag browsing, trend overlays, and data filtering are all favorite features of TrendMiner customers.

Root Cause Analysis is a core use case for TrendMiner, but some users have mentioned difficulty learning to use the system to identify process issues. Reports of slow performance also make this a riskier choice when considering ProcessBook alternatives.

Operations Management Capabilities

Monitoring is at the core of TrendMiner’s operations management capabilities. The platform provides a number of tools that support smart alarming & notifications of process excursions, but visualization of processes seems limited to trends. ProcessBook users who depend on interactive process displays to manage and monitor plant performance will need to look to other options on this list to replace that functionality.

Advanced Analytics Capabilities

TrendMiner’s monitoring features extend the platform into “predictive” analytics territory. Like dataPARC and some of the other ProcessBook alternatives on this list, predictive performance is enabled via models based on historical data.


Pricing

TrendMiner pricing depends on your particular use case, and they don't list pricing on their website; however, you can expect them to be on the higher end of the ProcessBook alternatives listed here.


Seeq

Founded in 2012, Seeq offers advanced analytics for process manufacturing data. Analysis is the focus with Seeq, and their products are heavy on diagnostic and predictive analytics, with less support for operations management than some of the other ProcessBook alternatives on this list.

Ease of Integration

Like dataPARC, Seeq is optimized to connect to various different data sources within a plant, including OSIsoft’s PI Server. You should expect a pretty seamless integration with your PI historian data.

Seeq does not support importing existing ProcessBook displays.

Diagnostic Analytics Capabilities

Seeq’s browser-based Workbench application powers the platform’s diagnostic analytics, offering a powerful set of trending and visualization tools.

Advanced trending, bar charts, tables, scatterplots, and treemaps can all be employed to perform rapid root-cause analysis, and represent a significant upgrade in diagnostic capabilities from ProcessBook.

Operations Management Capabilities

Seeq offers the ability to configure alarms for process monitoring, and their Seeq Organizer application allows users to build scorecards and dashboards for KPI monitoring, but they don’t provide anything approaching the display-building capabilities that ProcessBook provides.

Engineers looking for a ProcessBook replacement, and an option for migrating or even replicating their existing displays, will likely want to look at other alternatives.

Advanced Analytics Capabilities

Advanced Analytics is an area where Seeq shines. Claiming predictive analytics, machine learning, pattern recognition, and scalable calculation capabilities, it’s clear that Seeq intends to be your solution for sophisticated process data analysis. Most of Seeq’s advanced analytics capabilities come from their Seeq Data Lab application, which provides access to Python libraries for custom data processing.


Pricing

Seeq's pricing isn't listed on their website, but they reportedly use a per-user pricing model starting at $1,000 per user per year.

Looking to replace ProcessBook? See why PARCview is regarded as the #1 ProcessBook alternative.


Ignition

Inductive Automation's Ignition is a popular SCADA platform with a broad set of tools for building industrial analytics solutions.

Ease of Integration

Ignition offers the ability to connect to virtually any data source in your plant. While it lacks the easy native integration of some of the other ProcessBook alternatives on this list, Ignition supports various methods of connecting to your PI historian, including JDBC and OPC.

Diagnostic Analytics Capabilities

While Ignition shines as an HMI designer and MES, it ranks quite a bit lower than others on this list for diagnostic analytics capabilities. Then again, it wasn't designed for root cause investigations and ad-hoc data analysis. Trending in Ignition is basic and integrated into displays, similar to what users may be familiar with in ProcessBook.

Operations Management Capabilities

Ignition’s Designer, on the other hand, is an extremely capable application for building process graphics and HMIs for real-time process monitoring and KPI tracking.

While perhaps lacking the interactivity that ProcessBook users are familiar with, Ignition modules are available to help manage SPC, OEE, material tracking, batch processing, and more.

Advanced Analytics Capabilities

Again, predictive analytics, prescriptive analytics, machine learning, industrial AI: these aren't Ignition's focus and don't factor into its feature set.


Pricing

Inductive Automation offers unlimited users with a single server license of Ignition, with pricing based per feature, starting at around $12,500 and going up from there. Ignition's price very much reflects its standing as one of the industry's most popular SCADA platforms.

Looking for a ProcessBook Alternative?

Well, you have a lot to consider. Presented with the prospect of discontinued support for ProcessBook, your challenge is twofold: first, figure out how to replace the displays, trends, and other features that made ProcessBook valuable to you, and second, evaluate potential replacement candidates for capabilities like predictive and prescriptive analytics that didn’t exist when you got started with ProcessBook.

All of the ProcessBook replacement options listed above feature different toolsets, and it's up to you to identify your needs and determine which solution is right for your organization. Hopefully this post set you off in the right direction in your search.


Download the Guide

Discover top alternatives to PI’s ProcessBook and PI Vision analytics toolkits.



One of the easiest ways to explain why you should implement 5 whys root cause analysis at your plant is this: the cause of a problem is often (we'd go so far as to say almost always) different than initially suspected. Implementing a lean strategy like the 5 whys can save you time and headaches down the road.

Issues with and failures of assets are bound to happen in process manufacturing. Your team’s strategy for resolving problems that occur will determine your productivity in countless ways. Getting a process set up for resolving problems will not only help with the current issues and failures but will create a plan to resolve future problems with the goal of each resolution coming faster and easier.

Real-time process analytics software with integrated 5 Whys analysis tools.

Check out PARCview

What Exactly are the 5 Whys?

The 5 whys is a lean problem-solving strategy that is popular in many industries. Developed by Sakichi Toyoda, the Japanese inventor and industrialist, the 5 whys focuses on root cause analysis (RCA), defined as a systematic process for identifying the origins of problems and determining an approach for responding to and solving them. The 5 whys emphasizes prevention, being proactive rather than reactive.

The 5 whys strives to be analytical and strategic about getting to the bottom of asset failures and issues. It is a holistic approach, used by stepping back and looking at both the process and the big picture. The essence of the 5 whys is revealed in the quote below, where something as small as a lost nail in a horse’s shoe was the root cause of a lost war:

“For want of the nail the shoe was lost
For want of the shoe the horse was lost
For want of a horse the warrior was lost
For want of a warrior the battle was lost
For want of a battle the kingdom was lost
All for the want of a nail.”

One of the key factors for successful implementation of the 5 whys technique is to make an informed decision. This means that the decision-making process should be based on an insightful understanding of what is happening on the plant floor. Hunches and guesses are not adequate as they are the equivalent of a band-aid solution.

The 5 whys can help you identify the root cause of process issues at your plant.

Below is an example of the 5 whys method being used on a problem seemingly as basic as a computer failure. If you look closely, you will conclude that the actual problem has nothing to do with a computer failure.

  • Why didn’t your computer perform the task? – Because the memory was not sufficient.
  • Why wasn’t your memory sufficient? – Because I did not ask for enough memory.
  • Why did you underestimate the amount of memory? – Because I did not know my programs would take so much space.
  • Why didn’t you know programs would take so much space? – Because I did not do my research on programs and memory required for my annual projects.
  • Why did you not do research on memory required? – Because I am short staffed and had to let some tasks slip to get other priorities accomplished.

As seen in the example above, the real problem was not in fact computer memory, but a shortage of staff. Without performing this exercise, the person might never have discovered that they were short staffed and needed help.

This example can be used to illustrate problems in a plant as well. Maybe an asset is having repeated failures or lab data is not testing accurately. Rather than immediately concluding that the problem is entirely mechanical, use the 5 whys method and you may discover that your problem is not what you think.
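For teams that want to capture these exercises digitally, the questioning chain above can be recorded as simple structured data. This is a minimal, hypothetical sketch using the computer-failure example; the function and field names are our own, not part of any particular tool:

```python
# Minimal sketch: recording a 5 whys exercise as structured data.
# Field names ("problem", "chain", "root_cause") are illustrative.

def five_whys(problem, answers):
    """Record a chain of 'why' answers; the last answer is the candidate root cause."""
    return {"problem": problem, "chain": answers, "root_cause": answers[-1]}

result = five_whys(
    "Computer did not perform the task",
    [
        "Memory was not sufficient",
        "Not enough memory was requested",
        "Memory needs were underestimated",
        "No research was done on program memory requirements",
        "Team is short staffed, so the research task slipped",
    ],
)
print(result["root_cause"])
```

Storing the whole chain rather than just the conclusion preserves the reasoning for follow-up meetings.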

Advantages and Disadvantages of the 5 Whys Method

Advantages include being able to identify the root cause of your problem and not just its symptoms. The method is simple to use and implement, and perhaps its most attractive advantage is that it helps you avoid taking immediate action without first identifying the real root cause of the problem. Taking immediate action down an inaccurate path is a waste of precious time and resources.

Disadvantages are that some people may disagree with the different answers that come up for the cause of the problem. The method is also only as good as the collective knowledge of the people using it, and if time and diligence are not applied, you may not uncover and address the true root cause of the problem.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

5 Whys Root Cause Analysis Implementation at your Plant

Now that the problem in its essence has been revealed, what are the next steps?

Get Familiar with the 5 Whys Concept

The first step in implementing the 5 whys at your plant is to get familiar with the concept. You may research the 5 whys methodology online or listen to tutorials to gain a deeper understanding. In the information age, we have access to plenty of free information at the touch of our screens. Get acquainted with 5 whys. Even if it is just one video on YouTube and a few articles online, a better understanding means a better implementation.

Schedule a 5 Whys Meeting with Your Team

The second step would be to solve a problem at your plant using the 5 whys method. To do so, follow the steps below to schedule and hold a 5 whys RCA meeting.

These steps are simplified to give you a basic understanding. To gain a more detailed understanding of how to implement 5 whys RCA, read on for in-depth instructions.

  • Organize your meeting
  • Define your problem statement
  • Ask the first “why”
  • Ask why four more times
  • Determine countermeasures
  • Assign responsibilities
  • Monitor progress
  • Schedule a follow-up meeting

Whatever form the meeting takes, the root cause analysis process should include people with practical experience. Logically, they can give you the most valuable information regarding any problem that appears in their area of expertise.

What’s Next? What Happens Once I have Held my 5 Whys meeting?

Once your meeting has been held and you begin to implement the 5 whys method, it is essential to remember that some failures can cascade into other failures, creating a greater need for root cause analysis to fully understand the sequence of cause and failure events.

Root Cause Analysis using the 5 whys method typically has 3 goals:

  • Uncover the root cause
  • Fully understand how to address and learn from the problem
  • Apply the solution to this and future issues, creating a solid methodology to ensure the same success in the future

The Six Phases of 5 Whys Root Cause Analysis

Digging even deeper, when 5 whys root cause analysis is performed, there are six phases in one cycle. The components of asset failure may include environment, people, equipment, materials, and procedure. Before you carry out 5 whys RCA, you should decide which problems are immediate candidates for this analysis. Just a few examples of where root cause analysis is used include major accidents, everyday incidents, human errors, and manufacturing mistakes. Those that result in the highest costs to resolve, most downtime, or threats to safety will rise to the top of the list.

There are some software-based 5 whys analysis tools out there, like dataPARC’s PARCview, which automatically identifies the top 5 potential culprits of a process issue and links to trend data for deeper root cause analysis.

Phase 1: Make an Exhaustive List of Every Possible Cause

The first thing to do in 5 whys is to list every potential cause leading up to a problem or event. At the same time, brainstorm everything that could possibly be related to the problem. In doing these steps you can create a history of what might have gone wrong and when.

You must remain neutral and focus only on the facts of the situation. Emotions and defensiveness must be minimized to produce an effective starting list. Stay neutral and open. Talk with people and look at records, logs, and other fact-keeping resources. Try to replay and reconstruct what you think happened when the problem occurred.

Phase 2: Evidence, Fact and Data Seeking and Gathering

Phase 2 is when you get your hands on any data or files that can point to the possible causes of your problem. Sources for this data may be databases or digital, handwritten, or printed files. In this phase your 5 whys list comes into play: each reason or outcome on the list needs supporting evidence.

Phase 3: Identify What Contributed to the Problem

In Phase 3, all contributions to the problem are identified. List changes and events in the asset’s history. Evidence around the changes can be very helpful, so gather it as you are able. Evidence can be broken down into four categories: paper, people, recordings, and physical evidence. Examples include paperwork specific to an activity, broken parts of the assets, and video footage if you have it.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Phase 4: Collect Data and Analyze It

In Phase 4, you should analyze the collected data. Organize changes or events by how much or little you can have an impact on the outcome. Then decide if each event is unrelated, a correlating factor, a contributing factor, or a root cause. An unrelated event is one that has no impact or effect on the problem whatsoever. A correlating factor is one that is statistically related to the problem but may or may not have a direct impact on the problem.

A contributing factor is an event or condition that directly led to the problem, in full or in part. This should help you arrive at one or more root causes. When the root cause has been identified, more questions can be asked: why are you certain that this is the root cause instead of something else?

Phase 5: Preventing Future Breakdowns with Effective Countermeasures

The fifth phase of 5 whys root cause analysis is to prevent future breakdowns by creating a custom plan that includes countermeasures, which essentially address each of the 5 whys you identified in your team meeting. Preventive actions should also be identified. Your actions should not only prevent the problem from happening again, but should also avoid causing other problems. Ideally, a solid solution is one that is repeatable and can be applied to other problems.

One of the most important things to determine is how the root cause of the problem can be eliminated. Root causes will of course vary just as much as people and assets do. Examples of eliminating the root cause of an issue include changes to the preventive maintenance schedule, improved operator training, new signage or HMI controls, or a change of parts or part suppliers.

In addition, be sure to identify any costs associated with the plan: how much was lost because of the problem, and how much will it cost to implement the solution?

To avoid and predict the potential for future problems, you should ask the team a few questions.

  • What are the steps we must take to prevent the problem from reoccurring?
  • Who will implement the solution, and how?
  • Are any risks involved?

Phase 6 – Implementation of your Plan

If you make it to this step, you have successfully completed 5 whys root cause analysis and have a solid plan.

Depending on the type, severity, and complexity of the problem and the plan to prevent it from happening again, there are several factors the team needs to think about before implementation occurs. These can include the people in charge of the assets, asset condition and status, processes related to the maintenance of the assets, and any people or processes outside of asset maintenance that have an impact on the identified problem. You would be surprised how much is involved with just one asset when you exhaustively think about it and make a list of all people and actions involved during its useful life.

Implementing your plan should be well organized, orchestrated, and documented. Follow-up meetings with your team should be scheduled to discuss what went well and what could be improved. With time, the 5 whys can become an effective tool for both solving and preventing problems at your plant.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

“Where do we start in digitizing our manufacturing operations?” one may ask. While there is no easy answer, the solution lies in starting not from the top down, but from the ground up, focusing on the digital transformation roles and responsibilities of the key people in your plant.

Digital transformation in process manufacturing is not only a priority, but now an essential step forward as the world encounters and adapts to a more digital reality. To put it simply, if you do not adjust your processes to embrace digital change, your competitors will (and may already have) outproduce, outshine, and outsell you.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Transformation Teams

Digital change has been slow, though steady, until now. PLC and DCS systems were manufacturing’s digital beginnings, and thankfully there is much more available now to further digitize operations: tools to minimize downtime, improve your process, enhance data management, data sharing, and reporting, and increase profitability. A truly connected enterprise will be adaptable and agile, allowing it to keep abreast of changes in the operating environment.

Plant roles play an essential part in the digitization of process manufacturing, and all can contribute to a seamless digital transformation within your facility. Each role embraces digital change and transforms the process from the inside out. By focusing on these roles and the duties and responsibilities within each of them, plant digitization can turn the operation into a well-oiled machine whose outcomes everyone depends on and benefits from.

“Where do we start in digitizing our operations?” one may ask. While there is no easy answer, the solution does lie in starting not from the top down, but from the ground up, with each role’s responsibilities and contributions enhancing the others, adding to and building on the next, for a comprehensive digital enterprise and solid, data-based reporting.

Integrating sources of plant data is a good place to start, along with the processes themselves becoming digitized for maximum outcomes. In this article we will focus on the various roles in the plant, their responsibilities and how each one can contribute to digital transformation.

Digital Transformation Roles & Responsibilities

The Operator

The Operator’s Role in Digital Transformation

Checking process conditions (temperatures, pressures, line speed, etc.) is an essential task for an operator. These process conditions could have readings directly on the machine, with valves or buttons to adjust as needed. With more and more digital transformation in manufacturing, these process variables are being set up with PLCs to create digital tags. A tag can be read through an OPC DA server and visualized throughout the plant on computers in offices, control rooms, and meeting rooms. Tags can also be set up with a DCS to control the process from the control room rather than having to walk the floor to adjust speeds or valves.

The process variables need to be monitored to produce quality products. There are ranges for each process variable and additive when making a product; if these get out of range, the final product could fall outside the final specification. Limits can be drawn on gauges, written into an SOP (Standard Operating Procedure), or set up as alarm limits. These alarms could appear either on the DCS or on a data visualization screen to alert the operator that a variable needs attention.
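As a rough illustration of the alarm limits just described, a limit check can be as simple as comparing each tag’s latest value against a stored (low, high) pair. The tag names and limit values here are invented for the example:

```python
# Hedged sketch: checking process variables against alarm limits.
# Tag names and (low, high) limit pairs are hypothetical.

LIMITS = {
    "reactor_temp_C": (180.0, 210.0),
    "line_speed_mpm": (540.0, 600.0),
}

def check_alarms(readings):
    """Return a list of (tag, value, reason) for readings outside their limits."""
    alarms = []
    for tag, value in readings.items():
        low, high = LIMITS[tag]
        if value < low:
            alarms.append((tag, value, "below low limit"))
        elif value > high:
            alarms.append((tag, value, "above high limit"))
    return alarms

print(check_alarms({"reactor_temp_C": 215.2, "line_speed_mpm": 580.0}))
```

In practice these limits would live in the DCS or the visualization tool’s alarm configuration rather than in code, but the logic is the same.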

To consistently make quality product, operators must communicate with the lab tech to verify the product is within spec. This communication between the lab and operators has traditionally been done verbally: walkie-talkies, phone calls, etc. To digitize this process, the lab tech enters tested values into a data visualization program or a laboratory information management system (LIMS) database. These values can be displayed on a dashboard with the specifications next to them. The operator can then see when values are out of spec and adjust the process, or see when values are trending up or down and adjust the process to keep the product within specification before bad product is made.

Operators are also responsible for keeping track of a product and lot being produced. This can be done manually with pen and paper or entered digitally into a database.

At the end of the shift, operators need to pass key information to the next shift. This can be done with a hand-off meeting to discuss verbally, a physical notebook to log key points, or a digital notebook. With digital reports there is an opportunity to relay information to multiple control rooms or company locations at once.

The Lab Technician

The Lab Technician’s Role in Digital Transformation

Lab quality testing is an essential part of process manufacturing. Thorough quality testing of each batch allows production of the scheduled product to proceed. Because other roles such as process engineer and operator rely on the outcomes of lab testing, getting lab quality data seamlessly disseminated is essential to smooth operations.

Manually testing the product, recording the results, and comparing multiple variables of the finished product to specifications are among the lab technician’s duties. If the lab tech is entering data into a digital system, limits can typically be saved for different products, speeding things up.

The lab tech would manually test the product and enter the results in a program, and the LIMS system would flag the result if it were out of spec. Going further, a lab tech can set up the test, a machine conducts it, and the result is fed to the LIMS system, where the value is flagged if the test is out of spec. Performing these tasks digitally is a tremendous time and process saver.

In summary, lab techs are ultimately responsible for testing the final product and passing or failing it to be sold. Digitizing these tests and the corresponding data streamlines and accelerates the entire lab test process.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

The Process Engineer

The Process Engineer’s Role in Digital Transformation

Process engineers, often called by other titles including chemical engineers, have a range of duties: product development, process optimization, documentation of SOPs, setting up automatic controls/PLCs, ensuring equipment reliability, and communicating with superintendents, operators, lab techs, maintenance managers, and customers.

Process engineers monitor the entire manufacturing process on a daily, weekly, and monthly basis to identify improvement opportunities and evaluate the condition of the assets and processes.

Most sites have an existing system for maintenance requests. A physical system may exist where staff hand-write the issue, area, and other important information and hand-deliver it to the maintenance department. Alternatively, there could be a system set up to email the maintenance department with pictures attached, or a program may be used to submit maintenance requests. Such a program would provide a unique ticket number, automated status updates, and other key information, and would allow engineers or the maintenance department to see history, making it possible to identify repetitive issues such as a part needing replacement. Digitizing maintenance can help create a preventive maintenance schedule, so a part is replaced before its performance degrades and results in sub-par product quality.

Another way for engineers to monitor the process is through data visualization. When data is stored, the history can be viewed, and users can identify irregularities, trends, and cycles in the process to help identify root cause when upsets occur. Engineers might set up their own alarms, separate from operator alarms, to keep track of events and determine if an optimization project is possible.

Process optimization and product development are important tasks for process engineers. Engineers may develop and conduct trials to continually optimize the process and develop new products. They often use the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) method to do this. The Define step is typically completed by a stakeholder, such as a superintendent or plant manager. Once the project is defined, the engineer moves into the Measure step.

The Measure step can take many forms: physically measuring, counting, or documenting a process. Collecting the necessary data can be time-consuming. With more of the data digitized, much of the collection is already done.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Engineers need to organize and collect data to analyze it. Once the data is collected, it can be put into Excel, Minitab, or other programs to be analyzed. By doing comparisons and statistical analysis, with the help of process knowledge, an improvement plan can be created.
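As a sketch of the kind of before/after comparison an engineer might run during the Analyze step, here is a minimal example using Python’s standard library. The quality numbers are invented for illustration; a real analysis would pull them from the historian:

```python
# Illustrative sketch of the Analyze step: comparing process data from
# before and after a trial. Values below are invented example data
# (e.g. first-pass quality, %), not real plant measurements.
from statistics import mean, stdev

before = [94.1, 93.8, 94.5, 93.9, 94.2]
after = [95.0, 95.3, 94.8, 95.1, 95.4]

# The mean shift indicates whether the trial moved the process; the
# standard deviations indicate whether variability changed as well.
shift = mean(after) - mean(before)
print(f"mean before {mean(before):.2f}, after {mean(after):.2f}, shift {shift:+.2f}")
print(f"stdev before {stdev(before):.2f}, after {stdev(after):.2f}")
```

A dedicated statistics package would add significance testing on top of this, but even a simple mean-and-spread comparison helps separate real improvements from noise.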

Engineers will work with operators and lab techs to carry out the improvement plan. Typically, the plan will include information that the operators and lab techs will have to record and give back to the engineer to determine whether an improvement was made. The plans can be printed off and handed to those involved, and the necessary data collected on sheets of paper.

If a program, graphic, or database is being used, the engineer can create an improvement plan within that program, and the operator or lab tech can enter the necessary values directly, making the data instantly accessible to the engineer. After the project is complete and an improvement has been made, an SOP is written and saved.

In this role, the engineer needs to communicate such changes to all necessary personnel. The SOP could be saved locally on each computer, in a shared file, on SharePoint, or as a link within a program with versioning so users can go back and see what changes were made and when. To alert others of the changes, an email can be sent out to supervisors to communicate to their shifts, or, if a digital notebook is available, a message can be sent to the necessary areas with a link to the newly updated SOP.

As mentioned above, engineers can be responsible for writing and maintaining SOPs. SOPs can be stored in binders in the control room, saved on control room computers, or a shared folder. There are also programs that can save versions of documents so users can see what changed and when. Operators and lab techs would then use the SOPs when performing a task or testing. It is important for operators to be notified of changes made to the SOP. This could be the engineer sending out an email, or a program with a preset list sending updates to emails. Engineers could also have a notification set up on the operator’s computer.

The Plant Manager

The Plant Manager’s Role in Digital Transformation

Plant managers wear many hats and the hats they wear continue to multiply as plants face complexities and pressure to produce more with increased profitability.

Hiring good people is the key to running a digital-forward organization: staff with people in mind. Good, productive people run plants with data, not hunches or best guesses. They make data-driven decisions that are best for the organization and identify root causes through careful anomaly detection and analysis.

Good leaders know that to truly digitize operations at a plant you must start from the bottom and that every role is an important component to the whole and every person’s contribution important.

Ron Baldus, CTO at dataPARC, advises that “clean data” is the key to successful digital operations. What exactly does clean data mean, one might ask? Clean data is pure, fact-based data rather than hunch-driven data: one version of the truth. With clean data, plant managers and those who work for them can continue to make data- and profit-driven decisions. A good data visualization software that connects all data sources is a good place to start. With this connected software, extensive reports pulling from many data sources can be run to give the plant manager a key report with important information visible. If there is a problem in operations, this reporting allows the plant manager to identify the problem and task engineers and operators with getting to the source and making the necessary adjustments, all based on fact rather than best guesses.

Plant managers know that there are many important moving parts to a plant operation and getting reliable data is the lifeblood of a successful, profitable operation. The more digital the plant becomes, the cleaner data flows to all departments and roles and allows troubleshooting, reporting, and forecasting to be more and more seamless.

Another advantage of digitization at the plant manager level is the transfer of skill, information, and expertise at the subject matter expert (SME) level. Many SMEs are getting close to retirement, and with them a wealth of information, experience, and methodology is at risk of being lost. Through the digitization of reports and operations, these methods can be preserved and passed on to the next person assuming the role and responsibility, whether that is an operator, an engineer, or another essential role.

Looking Forward

Whether it is the operator, the engineer, the lab tech or the plant manager, all digital transformation roles and responsibilities in manufacturing contribute to the transformation of the plant. From the bottom up with effective communication and consistent data, downtime can be minimized, golden runs more common and seamless operations a daily reality.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Integrating manufacturing data in a plant is necessary for many reasons. Among the most important is getting relevant data to various departments quickly. In doing so, downtime is reduced, anomalies are identified and corrected, and quality is improved.

Integrations are often delayed due to fears of losing data quality during the process, or simply due to the difficulty of finding the time in a 24/7 environment. There are pros and cons to each integration type. In this article we will walk you through the different integrations, what to look out for, and tips and best practices.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Integrating Historian & ERP Data

Enterprise Resource Planning (ERP) is software used by accounting, procurement, and other groups to track orders, supply chain logistics and accounting data. By adding historian data, ERP systems have a fuller picture of the comprehensive plant operations.

combining erp and historian data on a trend

Integrating your historian and ERP data can provide great insight into which processes are affecting quality.

ERP users gain access to more information about finished goods, such as the exact time of any major production step or whether there was an issue with production. For instance, if the texture of a newsprint lot is slippery, not up to spec, and as a result cannot be cut properly on the news producer’s rollers, the specific lot can be identified, and the challenge of finding out which lot produced poor-quality paper is no longer a roadblock.

In a nutshell, the historian-to-ERP integration means departments outside of production get all the data they need without engaging another resource. The challenges include a time-consuming integration in which erroneous values can have a wide-ranging impact, so double-checking values is essential.
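The lot-traceability scenario above boils down to a time-window join between ERP lot records and historian readings. Here is a minimal sketch of that join; lot IDs, tag names, timestamps, and values are all hypothetical:

```python
# Hedged sketch: linking an out-of-spec ERP lot back to historian process
# data by its production time window. All records below are invented.
from datetime import datetime

erp_lots = [
    {"lot": "A-1041", "start": datetime(2021, 3, 2, 6, 0), "end": datetime(2021, 3, 2, 14, 0)},
    {"lot": "A-1042", "start": datetime(2021, 3, 2, 14, 0), "end": datetime(2021, 3, 2, 22, 0)},
]

historian = [
    {"time": datetime(2021, 3, 2, 9, 30), "tag": "calender_pressure", "value": 88.2},
    {"time": datetime(2021, 3, 2, 16, 15), "tag": "calender_pressure", "value": 71.4},
]

def readings_for_lot(lot_id):
    """Return historian readings recorded during the lot's production window."""
    lot = next(l for l in erp_lots if l["lot"] == lot_id)
    return [r for r in historian if lot["start"] <= r["time"] < lot["end"]]

print(readings_for_lot("A-1042"))
```

A production integration would do this join inside the historian or ERP tooling rather than in a script, but the core operation, matching process data to a lot by time window, is the same.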

Integrating Historian & MES Data

Connecting a historian to an MES (Manufacturing Execution System) expands the capabilities of the MES. Manufacturing execution systems are computerized systems used in manufacturing to track and document the transformation of raw materials into finished goods, an essential component of manufacturing data capture. The historian provides a historical log of all production data, rather than only current or near-past values. Being able to pull large amounts of historical data, along with data from an MES, allows for projections that are not possible without this long-term perspective and additional data.

A relevant example of an MES-to-historian benefit is an ethanol plant that would like to examine seasonal (winter vs. summer) variability in fermentation rates. The historian has all of this data, and the MES allows the user to pull out data only for the relevant times.
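Once MES batch records (which carry the season label) and historian fermentation rates are combined, the seasonal comparison might look like the sketch below. All batch IDs and rate values are invented for illustration:

```python
# Illustrative sketch: comparing fermentation rates by season after joining
# MES batch records with historian data. All numbers are made up.
batches = [
    {"batch": "F-201", "season": "winter", "rate": 0.82},
    {"batch": "F-202", "season": "winter", "rate": 0.79},
    {"batch": "F-203", "season": "summer", "rate": 0.91},
    {"batch": "F-204", "season": "summer", "rate": 0.95},
]

def mean_rate(season):
    """Average fermentation rate across the batches run in a given season."""
    rates = [b["rate"] for b in batches if b["season"] == season]
    return sum(rates) / len(rates)

print(f"winter {mean_rate('winter'):.3f} vs summer {mean_rate('summer'):.3f}")
```

With years of historian data behind it, the same grouping extends to any product definition or time slice the MES can name.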

integrating historian and mes data on a trend

Integrating MES data into a historian provides access to years and years of data and allows for long-term analysis.

Using product definitions from the MES to pull the comprehensive history of production runs for a given product line or product type, without manually filtering all historical data, is the key to fast troubleshooting with this integration type. Historian-to-MES integrations help reduce waste and decrease the time it takes to solve an issue. Like the historian-to-ERP solution, the historian-to-MES integration takes significant work and resources, but the benefits are immediately evident and realized.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Integrating LIMS & ERP Data

A Laboratory Information Management System (LIMS) contains all testing and quality information from a plant’s testing labs. Plants often have labs for quality testing, and federal regulations and standards often dictate the test values or results on which a batch’s success and quality ultimately depend. Testing is done at various stages of production, including the final and most important stage. Certificates of analysis are common documents that ensure the safety and quality of tested batches. LIMS-to-ERP integration is especially important for the food and beverage industry, which depends upon testing to ensure its product is safe for human consumption.

integrating LIMS and ERP data in a trend

By integrating LIMS and ERP data it’s easy to identify a specific out-of-spec batch or product run for root cause analysis.

Batch quality data from LIMS systems allows ERP users, such as accounting or procurement departments, to build documents and reports that share data certifying the quality of shipped product. This integrated data also gives customer reps immediate access to data about shipped product. The LIMS-to-ERP integration is very important, as many LIMS departments still rely on a paper trail, which can be a tremendous hold-up to production. As with the historian-to-ERP integration, the LIMS-to-ERP integration must have accurate data to provide site-wide value, so double-checking is necessary.

Integrating LIMS & Historian Data

Just like historical process data from assets, testing data is very useful when troubleshooting a production issue. The LIMS system, as explained earlier, stores all of the testing data from the lab. Sending LIMS data to the historian gives users a greater understanding of the production process and lab values, providing a fuller picture and deeper analysis of the issue.
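A common first step in that kind of analysis is pairing each lab result with the historian reading taken closest in time. A small sketch, with made-up tag names, timestamps (minutes since shift start), and values:

```python
# Hedged sketch: pairing each LIMS test result with the nearest-in-time
# historian reading. Tags, times, and values are invented for illustration.

lab_tests = [{"t": 20, "brightness": 84.1}, {"t": 100, "brightness": 79.6}]
historian = [
    {"t": 0, "bleach_flow": 12.0},
    {"t": 60, "bleach_flow": 9.5},
    {"t": 120, "bleach_flow": 9.4},
]

def nearest_reading(t):
    """Return the historian sample closest in time to t."""
    return min(historian, key=lambda r: abs(r["t"] - t))

# Pair each brightness test with the bleach flow at the nearest sample time.
paired = [(test["brightness"], nearest_reading(test["t"])["bleach_flow"]) for test in lab_tests]
print(paired)
```

Historian trending tools do this alignment automatically, but the pairing idea is what lets lab values and process values appear on the same trend.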

integrating LIMS and historian data in a trend

Integrating LIMS and historian data is one of the most effective ways to analyze how a process affects product quality.

For example, when paper brightness is out of spec, lab data can bring attention to the part of the process that needs adjustment. Alerts can be set up within the historian, giving engineers more time to adjust the process to meet quality targets. Past testing values are useful when comparing production runs and can reveal patterns that production data alone may not. As with any integration, LIMS to historian integration requires planning, an engaged team, and milestones to check in on the progress and success of the effort.
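The alerting idea can be sketched in a few lines. This is an illustrative Python example with made-up readings and a made-up spec limit, not how any particular historian configures its alerts:

```python
# Hypothetical lab brightness readings, as they might arrive from a LIMS feed
readings = [
    ("2023-05-01 08:00", 86.5),
    ("2023-05-01 12:00", 84.9),   # below spec
    ("2023-05-01 16:00", 86.1),
]
SPEC_MIN = 85.0  # illustrative brightness spec limit

# Flag readings that should raise an alert for the process engineer
alerts = [(ts, value) for ts, value in readings if value < SPEC_MIN]
for ts, value in alerts:
    print(f"ALERT {ts}: brightness {value} below spec {SPEC_MIN}")
```

In a real historian this comparison runs continuously against incoming tag values, but the logic is the same: compare each lab result against its limit and notify as soon as one falls outside.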

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Integrating CMMS (Computerized Maintenance Management Systems) & ERP Data

Maintenance is a large and necessary part of plant operations. Maintenance records and work order information are often stored in a Computerized Maintenance Management System (CMMS). The CMMS holds comprehensive information that, on its own, cannot be accessed by other departments that may need to learn more about the specifics of the maintenance performed.

By connecting the CMMS to an ERP system, ERP users gain access to more data about the finished product. Users can check to see if there were any maintenance issues around the time of production. Facilities with lengthy scheduled shutdowns, like oil refineries, need to plan how much gasoline or other fuel to keep in storage to meet their customer obligations.

integrating CMMS and ERP data in a trend

By integrating maintenance and ERP data we’re able to investigate an out-of-spec product run and note that there was a maintenance event that likely caused the issue.
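In code, associating a production run with the maintenance event that preceded it is often a time-window join. A minimal sketch with pandas (the tables, column names, and timestamps are all made up):

```python
import pandas as pd

# Hypothetical production runs from the ERP side
runs = pd.DataFrame({
    "run_start": pd.to_datetime(["2023-06-01 06:00", "2023-06-02 06:00"]),
    "run_id": ["R-1", "R-2"],
    "in_spec": [True, False],
})

# Hypothetical CMMS work orders with completion times
work_orders = pd.DataFrame({
    "completed": pd.to_datetime(["2023-05-30 14:00", "2023-06-02 03:00"]),
    "work_order": ["WO-88", "WO-91"],
})

# For each run, find the most recent maintenance event that preceded it
matched = pd.merge_asof(
    runs.sort_values("run_start"),
    work_orders.sort_values("completed"),
    left_on="run_start",
    right_on="completed",
    direction="backward",
)
print(matched[["run_id", "in_spec", "work_order"]])
```

Here the out-of-spec run R-2 lines up with work order WO-91 completed a few hours earlier, which is exactly the kind of lead an investigator is looking for.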

Knowing about shutdowns, both planned and unplanned, allows the user to better plan customer orders and shipping. Anticipating the schedule for planned repairs is also useful for financial planning and forecasting. Users with access to historical work order information can better understand any issues that come up, with greater insight into the physical repair and its associated costs and impact. These can prove to be some of the hardest integrations simply because the data types vary so much.
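The storage planning mentioned above often reduces to simple arithmetic once shutdown schedules are visible alongside ERP demand data. A toy sketch, with entirely made-up numbers:

```python
# Illustrative shutdown-planning arithmetic (all numbers are made up):
# how much fuel to hold in storage so customer obligations are met
# while a refinery unit is down for planned maintenance
daily_demand_bbl = 12_000        # barrels/day committed to customers
shutdown_days = 10               # planned outage length
production_during_shutdown = 0   # barrels/day while the unit is down

required_storage_bbl = shutdown_days * (daily_demand_bbl - production_during_shutdown)
print(required_storage_bbl)  # 120000
```

The arithmetic is trivial; the hard part is having the shutdown dates and demand commitments in one connected system so the calculation can be made at all.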

The key to a successful CMMS to ERP integration is getting the necessary leadership on board and having a detailed roadmap and plan with regular team check-ins so that obstacles can be addressed immediately.

Integrating Field Data Capture System & Historian Data

The remote nature of field data capture systems means that this data is often siloed and very slow and difficult to access. Field data is just that: captured in the field, often pieced together from manual entries, frequently on paper. Various roles collect this data, and it must be utilized collectively to have any value. Field data types such as temperature, quality and speed must be entered consistently, and even more so when moved into a historian.

Though often cumbersome to collect, compile and enter, field data in a historian can be enormously empowering to an engineer. For example, oil wells in the Canadian oil sands can be 50 to 200 miles from the nearest human operator. The more data the operator has about these wells, the less time they spend traveling to check on each one.

Integrating Field data and Historian Data

Integrating field data into a historian provides reliable access to long-term data from previously siloed wells.

Connecting field data to a historian also increases the amount of data an engineer can use during troubleshooting. The data in the field is vital to reducing downtime and managing product quality. Sharing that data with the historian gives the data a broader audience where comparison and analysis can be made, resulting in less downtime and greater productivity.
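As a sketch of the consistency problem described above, manual field entries might be normalized into historian-style tag/timestamp/value rows before loading. Everything here (the tag naming scheme, the units, the field names) is hypothetical:

```python
# Hypothetical manual field entries, as keyed in from paper forms
raw_entries = [
    {"well": "W-12", "time": "2023-07-01 09:30", "temp_f": 141.0},
    {"well": "W-12", "time": "2023-07-01 21:30", "temp_f": 139.5},
]

def to_historian_rows(entries):
    """Normalize entries into historian-style (tag, timestamp, value) rows,
    converting to one consistent unit (Celsius) on the way in."""
    rows = []
    for e in entries:
        tag = f"{e['well']}.TEMP_C"  # illustrative tag naming scheme
        value = round((e["temp_f"] - 32) * 5 / 9, 2)
        rows.append((tag, e["time"], value))
    return rows

rows = to_historian_rows(raw_entries)
print(rows)
```

Doing the unit conversion and tag naming in one place, at load time, is what keeps remote-well data comparable once it lands in the historian.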

Looking Forward

When integrating manufacturing data, the overriding theme and result is digital data empowerment. When important plant data can flow seamlessly from one person, system or department to another, better decisions can be made through better analysis, which ultimately leads to better operations, less downtime and greater profitability. It is important to understand the full range of data management and connectivity options available and the pros and cons of each; various brands of each solution are on today’s market. Ideally, all sources of plant data can be connected and disseminated effectively for maximum efficiency and profitability.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The Digital Transformation – everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally that we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, certain notable scientists and inventors helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.


Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Deviation analysis is a routine form of troubleshooting performed at process manufacturing facilities around the world. When speed is imperative, a robust deviation detection system, along with a good process for analyzing the resulting data, is essential for solving problems quickly.

A properly configured deviation detection system allows nearly everyone involved in a manufacturing process to collaborate and quickly identify the root causes of unexpected production issues.

In a previous post we wrote about time series anomaly detection methods, and how to set up deviation detection for your process. In this article, we’re going to be focusing on how to actually analyze the data to pinpoint the source of a deviant process.

deviation detection webinar signup

Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Deviation Analysis: Reviewing the Data

So, in our other article about anomaly detection methods, we covered setting up deviation detection, including the following steps:

  1. Selecting tags
  2. Filtering downtime
  3. Identifying “good” operating data
  4. Identifying “bad” operating data

The fifth step is to actually analyze the data you’ve just produced, so you can identify where your problem is occurring.

But, before we get into analysis, let’s review the data we’ve produced.

The examples below show the data we’ve produced with dataPARC’s process data analytics software, but the analysis process would be similar if you were doing this in your own custom-built Excel workbook.

Selecting Tags

Here we have the tags we identified. In our case, we were able to just drag over the entire process area from our display graphic and they all ended up in our application here. We could have also added the tags manually or even exported the data from our historian and dumped it into a spreadsheet.

deviation analysis - getting the tags

We pulled data from 363 tags associated with our problematic process.

Good Data

Next, we have our “good” data. The data when our process was running efficiently. You’ll see that the values here are averages over a one-month period.

deviation analysis - good data example

Average data from a month when manufacturing processes were running smoothly.

Bad Data

This is our problem data. Narrowed down to a specific two-day period where we first recognized we had an issue.

deviation analysis - bad data example

Bad doggie! I mean… Bad data. Bad!

Methods of Deviation Detection

Again, you can refer to our article on anomaly detection methods for more details, but in this next part we’ll be using 4 different methods of analysis to try and pinpoint the problem.

The four deviation detection methods we’ll be using are:

  1. Absolute Change (%Chg) – The simplest form of deviation detection. Comparing a value against the average.
  2. Variability (COVChg) – How much the data varies or how spread out the data is relative to the average.
  3. Standard Deviation (SDChg) – A standard for control charts. Measures how much the data varies over time.
  4. Multi-Parameter (DModX) – Advanced deviation detection metric showing the difference between expected values and real data, to evaluate the overall health of the process. The ranges are often rate-dependent.
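To make the first three metrics concrete, here is a minimal sketch of how they might be computed for a single tag, comparing a “good” baseline period against a “bad” period. The numbers and the exact formulas are illustrative; dataPARC’s actual calculations may differ:

```python
import numpy as np

# Illustrative samples for one tag: a "good" baseline and a "bad" period
good = np.array([10.1, 9.9, 10.0, 10.2, 9.8])
bad = np.array([11.5, 12.0, 11.8, 12.2, 11.9])

def cov(x):
    # Coefficient of variation: spread normalized by the average
    return x.std(ddof=1) / x.mean()

# Absolute Change (%Chg): shift of the bad-period average vs. the baseline
pct_chg = 100 * (bad.mean() - good.mean()) / good.mean()

# Variability (COVChg): change in the coefficient of variation
cov_chg = cov(bad) - cov(good)

# Standard Deviation (SDChg): distance of the bad average from the baseline,
# in units of the baseline's standard deviation
sd_chg = (bad.mean() - good.mean()) / good.std(ddof=1)

print(f"%Chg={pct_chg:.1f}  COVChg={cov_chg:+.4f}  SDChg={sd_chg:.1f}")
```

Run per tag across all 363 tags, scores like these are what let you sort and pare down the suspect list, as described below.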

In the image below you’ll see the deviation values for each method of calculation. Here red means a positive change, and blue means a negative change.

deviation analysis methods

Our four deviation detection methods. Red is positive change in values. Blue is negative value change.

So, if we’re looking for a trouble spot within our manufacturing process, the first thing we’re going to want to do is start to look at the deviation values.

By sorting by the different detection methods, we can begin to identify some patterns. And, we can really pare down our list of potential culprits. Just an initial sort by deviation values eliminates all but about a dozen of our tags as suspects.

So, let’s look at tags where the majority of the models show high deviation values. That gives us a place to begin troubleshooting.

Applied Deviation Analysis

For instance, here we have our Cooling Water tag, and in three of the four models we’re seeing that it has a fairly high deviation value. It’s a prime suspect.

deviation analysis - cooling water data

So, let’s analyze that, and take a closer look.

Within our deviation detection application we can just select the tag and click the “trend” button to bring up the data trend for the Cooling Water tag.

Looking at the trend, it’s definitely going up, and deviating from the “good” operating conditions. But we also know our process. And we know that the cooling water comes from the river, and we know that the river temperature fluctuates with the seasons. So, we’ll add our River Temp tag to the trend, and sure enough – it looks like it’s just a seasonal change.

cooling water vs river temp image

Pairing our Cooling Water Tmp tag with our River Temp tag. Nope, that’s not it!

So, the Cooling Water isn’t our culprit. What can we look into next? This 6X dT tag looks like a problem, with multiple indications of high variation. This represents the temperature change across the sixth section of the extraction train.

deviation analysis - looking at the 6xt data

This looks like the source of our problem.

It’s likely that this is going to be our problem tag. Putting our heads together with the rest of the team, we can pretty quickly get anecdotal evidence to either confirm or deny that, say, maintenance was performed in this part of the process recently. If it’s still unclear, we can pull it up on a trend, like we did with our Cooling Water tag, and see if we are indeed seeing some erratic behavior with the values from this tag.

Looking Ahead

Really, this is routine troubleshooting that is done daily at process facilities around the world. But when speed is imperative, and you need a quick answer for management asking why the machine is down or the product quality is out of spec, having a robust deviation detection system in place, along with a good process for analyzing the resulting data, can really help make things clear quickly.

deviation detection webinar signup

Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

One of the problems in process manufacturing is that processes tend to drift over time. When they do, we encounter production issues. Immediately, management wants to know, “what’s changed, and how do we fix it?” Anomaly detection systems can help us provide some quick answers.

When a manufacturing process deviates from its expected range, there are several problems that arise. The plant experiences production issues, quality issues, environmental issues, cost issues, or safety issues.

One or more of these issues will present itself, and the question from management is always, “what changed?” Of course, they’d really like to know exactly what to do to go and fix it, but fundamentally, we need to know what changed to put us in this situation.

Usually the culprit is either the physical equipment – maybe maintenance performed recently threw things off – or the way we’re operating the equipment.

From a process engineer or a process operator’s perspective, we need to quickly identify what changed. We’re possibly in a situation where the plant is losing money every minute we’re operating like this, so operators, engineers, supervisors… everyone is under pressure to fix the problem as soon as possible.

In order to do this, we need to understand how the value has changed, and the frequency of those changes. Or rather, how big are the swings and how often are they occurring?

deviation detection webinar signup

Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Time Series Anomaly Detection Methods

Let’s begin by looking at some time series anomaly detection (or deviation detection) methods that are commonly used to troubleshoot and identify process issues in plants around the world.

Absolute Change

time series anomaly detection - absolute change

This is the simplest form of deviation detection. For Absolute Change, we get a baseline average from a period when things are running well. Then, down the road when things aren’t running so hot, we look back and see how much they’ve changed from that average.

Absolute change is used to see if there was a shift in the process that has made the operating conditions less than ideal. This is commonly used as a first pass when troubleshooting issues at process facilities.


Variability

time series anomaly detection - variability

Here we want to know if the variability has changed in some way. In this case, we’ll show the COV change between a good period and a bad period. COV is basically a way to take variations and normalize them based on the value. So high values don’t necessarily get a higher standard deviation than low values because they’re normalized.

Variability charts are commonly used to identify less consistent operating conditions and perhaps more variations in quality, energy usage, etc.

Standard Deviations

time series anomaly detection - standard deviation

Anyone who’s done control charts in the past 30 years will be familiar with standard deviations. Here we take a period of data, get the average, calculate the standard deviation, and put limits up (+/- 3 standard deviations is pretty typical). Then, you evaluate where you’re out based on that.

Standard deviation is probably the most common way to identify how well the process is being controlled, and is used to define the operating limits.
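The control-limit recipe above is easy to sketch in code. An illustrative example with made-up numbers:

```python
import numpy as np

# Illustrative baseline period used to set the control limits
baseline = np.array([50.2, 49.8, 50.1, 49.9, 50.0, 50.3, 49.7])
mean, sd = baseline.mean(), baseline.std(ddof=1)
upper, lower = mean + 3 * sd, mean - 3 * sd  # the usual +/- 3 sigma limits

# New observations evaluated against those limits
new_values = np.array([50.1, 50.2, 51.5, 49.9])
out_of_limits = (new_values > upper) | (new_values < lower)
print(f"limits=({lower:.2f}, {upper:.2f})  flagged={new_values[out_of_limits]}")
```

Everything inside the limits is considered normal variation; only points beyond three standard deviations get flagged, which is what keeps this method from crying wolf on ordinary noise.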


Multi-Parameter

time series anomaly detection - multi-parameter

This is a more advanced method of deviation detection that we at dataPARC refer to as PCA Modelling. Here we take all the variables and put them together and model them against each other to narrow the range. Instead of having flat ranges, they’re often rate-dependent.

The benefit of PCA Modelling over the other anomaly detection methods, is that it gives us the ability to narrow the window and get an operating range that is specific to the rate and other current operating conditions.
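The core idea can be sketched with a toy principal component model: learn the directions the variables normally move in together, then score new samples by how far they sit off that pattern. This is a simplified illustration of reconstruction-error scoring, not dataPARC’s actual implementation, and the data and variable names are made up:

```python
import numpy as np

# Illustrative training data: two process variables that move together
# under normal operation (e.g. feed rate and a temperature)
rng = np.random.default_rng(0)
rate = rng.normal(100, 5, 200)
temp = 0.5 * rate + rng.normal(0, 0.5, 200)
X = np.column_stack([rate, temp])

# Fit a one-component PCA by hand with an SVD of the centered data
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
pc = vt[0]  # dominant direction of normal variation

def score(sample):
    # Residual distance from the model: how far the sample sits
    # off the learned pattern of normal operation
    d = sample - mean
    return np.linalg.norm(d - (d @ pc) * pc)

normal_point = np.array([105.0, 52.5])  # follows the rate/temp relationship
anomaly = np.array([105.0, 45.0])       # temp far too low for this rate
print(score(normal_point), score(anomaly))
```

Note that both points have individually reasonable values; only the model of how the variables relate flags the second one. That is the advantage over flat, single-variable limits.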

Setting up Anomaly Detection

Now that we have a basic understanding of some methods for detecting anomalies in our manufacturing process, we can begin setting up our detection system. The steps below outline the process we usually take when setting anomaly detection up for our customers, and we typically advise them to take a similar approach when doing it themselves.

1. Select Your Tags

Simple enough. For any particular process area you’re going to have at least a handful of tags that you’re going to want to review to see if you can spot the problem. Find them, and, using your favorite time series data trending application (if you have one), or Excel (if you don’t), gather a fairly large set of data. Maybe a month or so.

At dataPARC, we’ve been performing time series anomaly detection for customers for years, so we actually built a deviation detection application to simplify a lot of these routine steps.

For instance, if we want, we can grab an entire process unit from a display graphic and drag it into our app without having to take the time to hunt for the individual tags themselves. Pretty cool, right?

If we just pull up the process graphic for this part of the plant…

…we can quickly compile all the tags we want to review.

2. Filter out Downtime

This is a CRITICAL step, and should be applied before you even identify your good and bad periods. In order to accurately detect anomalies in your process data, you need to make sure to filter out any downs you may have had at your plant that will skew your numbers.

anomaly detection - filter downtime


dataPARC’s PARCview application allows you to define thresholds to automatically identify and filter out downtime, so if you’re using a process analytics toolkit like PARCview, that’ll save you some time. If your analytics tools or your historian doesn’t have this capability, you can also just filter out the downs by hand in Excel. Regardless of how you do it, it’s a critical step.
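If you do end up filtering downs by hand, the logic is usually a simple threshold on a rate or throughput tag. An illustrative sketch with pandas (the tag names and the threshold are made up):

```python
import pandas as pd

# Illustrative one-minute process data; rate near zero means the line is down
data = pd.DataFrame({
    "rate_tph": [52.0, 51.5, 0.3, 0.0, 0.1, 50.8, 52.2],
    "temp_c": [180.0, 181.0, 90.0, 25.0, 60.0, 179.0, 182.0],
})

DOWNTIME_THRESHOLD = 5.0  # made-up cutoff: below this rate, the line is down
running = data[data["rate_tph"] >= DOWNTIME_THRESHOLD]

# Averages over running periods only; leaving the down rows in
# would badly skew any "good period" baseline
print(running.mean())
```

The three down rows would have dragged the average temperature far below anything the process ever saw while running, which is exactly the skew this step exists to prevent.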

3. Identify Good Period

Now you’re going to want to review your data. Look back over the month or so of data you pulled and identify a period of time that everyone agrees the process was running “good”. This could be a week, two weeks… whatever makes sense for your process.

anomaly detection - good time series data

Things are running well here.

4. Identify Bad Period

Now that we have the baseline built, we need to find our “bad” period. This might mean waiting for a bad period to occur, or proactively looking for bad periods as time goes on.

anomaly detection - bad time series data

Here we’re having some trouble.

5. Analyze the Data

Yes, it’s important to understand the different anomaly detection methods, and yes, we’ve discussed the steps we need to take to build our very own time series anomaly detection system, but perhaps the most critical part of this whole process is analyzing the data after we’ve become aware of the deviations. This is how we pinpoint which tags – which part of our process – is giving us problems.

Deviation Analysis is a pretty big topic that we’ve covered extensively in another post.

Looking Ahead

Anomaly detection systems are great for quickly identifying key process changes, and really the system should be available to people at nearly every level of your operation. For effective troubleshooting and analysis, everyone from the operator to the process engineer, maintenance, and management needs visibility into this data and the ability to provide input.

Properly configured, you should be able to identify roughly what your problem is, within 5 tags of the problem, in 5 minutes.

So, when management asks “what’s changed, and how do we fix it?”, just tell them to give you 5 minutes.

deviation detection webinar signup

Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast

Dashboards & Displays, Data Visualization, Process Manufacturing

Most modern manufacturing processes are controlled and monitored by computer-based control and data acquisition systems. This means that one of the primary ways an operator interacts with a process is through computer display screens. These screens may simply display information passively, or they may be interactive, allowing an operator to select an object and make a change which will then be relayed to the actual process. This interface, where a person interacts with a display and consequently the process, is called a Human-Machine Interface, or HMI.


Process Manufacturing

Overall Equipment Effectiveness, or OEE, has several benefits over simple one-dimensional metrics like machine efficiency. If you are not meeting demand and have a low OEE (equipment is underperforming) then you know you have an equipment effectiveness problem. If equipment is operating at a high OEE but not meeting customer demand, you know you have a capacity problem. Also, OEE lets you understand if you have spare capacity to keep up with changes in demand.