
In this article we will explore common types of process engineering software and highlight some powerful software applications that every process engineer should know.

Process Engineers! Solve challenging process & product quality issues with these process data visualization tools.

Learn more

Types of Process Engineering Software

Process Engineers use software every day for a range of reasons.

There is software that allows users to visualize data, perform calculations, and conduct data analysis to help with process optimization and root cause analysis.

Software can help mitigate risk with simulations and modeling. For instance, engineers can test how a process may react to changes without introducing hazards or wasting material.

Today and for the foreseeable future, process engineering software allows us to communicate problems, ideas, and solutions.

Graphical Dashboards

Engineers often create graphical dashboards to help display real-time data in a more consumable way.

Graphical dashboard tools are often combined with trend visualization software such as PARCView.

When creating a dashboard, it is important to know the audience and the information to be conveyed. Will the dashboard be a process diagram, displaying a single view of how the process is running? Or a quality page, showing recent lab data and specifications?

Flashing values or pop-ups can be used in displays to alert operations when a parameter goes out of specification. When adding alerts or colors, be aware of colorblindness and how it can affect interpretation of the display.

Reporting Software

Reports can range from detailed documents to concise information with embedded charts and images. The type of report will dictate which software to use.

There are multiple reporting tools available. Some, like dataPARC’s production monitoring software, use simple coding to pull in data and automate the report, while others may require more manual effort.

Microsoft’s Excel is a foundational tool in data reporting. It is powerful, and there are many internet resources that can help users at any level build reports in Excel with VBA.

SSRS (SQL Server Reporting Services) is another standard reporting tool for those who have data in SQL. SSRS can be used to create reports and send them on a schedule.

Microsoft Word is often used to create reports. It is best suited for reports where automation and direct access to data are not needed.

Centerlining Software

Centerlining, or the use of operational envelopes, ensures process parameters are set up consistently from run to run. This helps produce good-quality product more quickly.

Engineers build and review operational envelopes as guides. These ranges need to be set up for every product produced, which can be time-consuming and error-prone.

With PARCview’s Centerline tool, an engineer does not have to analyze and create the operational ranges manually. By adding variables to the Centerline tool, it can identify whether any variables are running outside of normal operational ranges.

dataPARC’s Centerline
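
The logic behind a centerline check is easy to sketch. The following is a minimal illustration only, not dataPARC’s implementation: it derives an operating envelope for each variable from historical run data (mean plus or minus 3 standard deviations) and flags current values outside it. The tag names and values are hypothetical.

    import statistics

    def build_envelope(history, k=3):
        """Operating envelope from historical run data: mean +/- k standard deviations."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return (mean - k * stdev, mean + k * stdev)

    # Hypothetical historical values per variable for one product.
    history = {
        "reel_moisture_pct": [5.1, 5.3, 5.0, 5.2, 5.4, 5.1],
        "headbox_pressure_kpa": [31.0, 30.6, 31.2, 30.9, 31.1, 30.8],
    }
    current = {"reel_moisture_pct": 5.9, "headbox_pressure_kpa": 30.9}

    for tag, values in history.items():
        lo, hi = build_envelope(values)
        if not lo <= current[tag] <= hi:
            print(f"{tag}: {current[tag]} is outside the envelope ({lo:.2f}, {hi:.2f})")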

LIMS Software

Process engineers can manage and use LIMS (Laboratory Information Management Systems). LIMS provide a way to record, organize, and apply limits to lab data and processes.

LIMS can be independent systems, such as LabWare or SAP. Others can be integrated with data visualization and process software, allowing users to view lab and operations data in the same program.

LIMS are important to process engineers because they track testing and quality information which can be key when working with customers or providing a look into plant operations.

ERP Software

ERP software is used for product tracking and scheduling. It can be used to allocate items to projects for billing purposes or for tracking project hours.

ERP software helps keep the process organized and allows engineers and operations to know what is coming up on the production schedule.

SAP and Oracle are two common ERP applications. They help keep everyone on the same page with a single source of truth.

Looking for new process engineering software? Check out dataPARC’s process engineering toolkit. Tackle root cause analysis, process monitoring, predictive modeling, & more.

Root Cause Analysis Software

Software to assist in root cause analysis can help reduce downtime. When a process goes down, the cause may not be known right away.

SOLOGIC is a program dedicated to root cause analysis; its tools include cause-and-effect diagrams, fishbone diagrams, and incident timelines.

By looking at process trends and centerlines from data visualization software, engineers can identify if any variables were in upset conditions prior to the down.

By recording downtime root causes, an engineer can review that information in a Pareto chart. This will identify the most common or most time-consuming reasons for lost time. With this information, the engineer can work on minimizing the occurrence or duration of a specific cause.

Pareto charts can help identify common lost-time reasons

Data visualization software with built in downtime tracking capabilities can help expedite root cause analysis.

Check out our real-time process analytics tools & see how you can reduce downtime & product loss.

Check out PARCview

Process Optimization Software

A key role for many process engineers is process optimization. This can include in-depth Six Sigma projects requiring large amounts of data, testing, and implementation of solutions.

Throughout such projects, statistical software such as Minitab or JMP is used.


Process optimization software can help identify variables that are statistically significant to the process, pointing to the parameters that will have the greatest impact.

Software is used to help reach conclusions; however, subject matter experts can identify which statistically significant findings are also practically significant and worth pursuing.

Quality Management Software

Process engineers work with quality to ensure products are made to the production specification and that customers are receiving a consistent product, order after order.

Histograms of variables, along with 3-sigma values and Cpk, can be used to help create or review specification and control limits.

Many data visualization applications can produce histograms and simple charts to help with quality data analysis. Excel can also be used to create such charts.
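
For reference, Cpk measures how far the process mean sits from the nearest specification limit, in units of three standard deviations. A minimal sketch, using made-up lab results and specification limits:

    import statistics

    def cpk(samples, lsl, usl):
        """Process capability: distance from mean to nearest spec limit, in 3-sigma units."""
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

    # Hypothetical lab results against specification limits of 4.5-5.5.
    samples = [5.02, 4.98, 5.10, 4.95, 5.05, 5.00, 4.97, 5.08]
    print(f"Cpk = {cpk(samples, lsl=4.5, usl=5.5):.2f}")  # >= 1.33 is a common capability benchmark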

Process Calculation Software

Process engineers are often responsible for calculating Overall Equipment Effectiveness (OEE), producing process metrics, and performing other process calculations.

Software will reduce errors in these calculations and produce results quickly.
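
OEE itself is the product of three ratios (availability, performance, and quality); the real value of software here is pulling reliable inputs automatically rather than doing the arithmetic. A sketch with hypothetical shift figures:

    def oee(runtime_h, planned_h, actual_units, ideal_units, good_units, total_units):
        """Overall Equipment Effectiveness = availability * performance * quality."""
        availability = runtime_h / planned_h      # time actually running vs. planned
        performance = actual_units / ideal_units  # output vs. ideal output while running
        quality = good_units / total_units        # saleable units vs. total produced
        return availability * performance * quality

    # Hypothetical shift: 7.2 of 8 planned hours, 850 units vs. an ideal 900, 820 good of 850.
    print(f"OEE = {oee(7.2, 8, 850, 900, 820, 850):.1%}")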

In the case of process calculations, there are times when making a change to the process requires calculating different volumes and ratios of material to prevent a reaction. Using software instead of manual calculations can prevent errors that may result in downtime or safety incidents.

Whether the value is needed once or must be calculated regularly can narrow down which software should be used to create the calculation.

Excel can be used; however, data would need to be pulled into the program. Integrated data visualization software can perform calculations and historize those values, so they are always ready.

Simulation & Modeling Software

Simulation, modeling, and sizing software is used when designing a new process or making changes to an existing process.

AutoPIPE is used for pipe stress analysis, Pipe-Flo can perform fluid flow calculations, and AFT Arrow is a gas flow simulator that can be used for insulation sizing. AutoCAD is used for a range of 3D modeling tasks, including drafting and design.

Such modeling and simulation programs allow engineers to test and see how the process may perform before it is built, mitigating risk and saving material.

Other Useful Software

When asked what process engineering software they use daily, each engineer we spoke to had different programs for the areas outlined in the previous sections, but some came up repeatedly.

Whether you are starting your career as a Process Engineer or have many years under your belt, utilizing some of this software can promote career development and help streamline daily tasks.

Microsoft Office

Microsoft has several programs that are used on a regular basis. Even if your company doesn’t have these specific ones, it likely has something similar.

https://www.microsoft.com/en-us/


Excel

As mentioned in many of the sections above, Excel can be used for a variety of purposes and is a pivotal tool to have in the toolbox. Excel can be used to generate reports, create charts and graphs, complete calculations, and analyze data.

Word

Word is a simple tool for creating reports, Standard Operating Procedures (SOPs) and other types of documentation.

OneNote

OneNote is a newer product that can be used as a digital notebook. It helps organize notes, and one could even build a personal quick-reference guide. Similar to other Microsoft products, OneNote notebooks or sheets can be shared with multiple people through OneDrive.

PowerPoint

There are many options for presentation software, and PowerPoint is still widely used to lead presentations and meetings. PowerPoint is a simple way to present information.

Teams

Teams is not the only online meeting and communication tool, but it is a very common one. Teams is used for internal company chat, virtual meetings, and document sharing. Within Teams you can create groups (teams) that are connected to SharePoint to share documents, chat, and post. Emails can even be sent directly to a group so other people can comment on them as a thread.

Outlook

Outlook is the primary hub for email and meeting scheduling. Internally, you can view others’ calendars to find free time and schedule a meeting without emailing back and forth.

Project

Project is used to build Gantt charts for project management. It is often used for scheduled downtime and other large projects. These charts can be very detailed and helpful for staying organized, though there is a bit of a learning curve.

Snagit / Snipping Tool

Screenshots and images are helpful to include in emails, reports, and SOPs to enhance communication.

The original way to take a screenshot was the Print Screen button on the keyboard. Now, with multiple monitors, this is impractical because the desired image must be cropped out of the full capture.

The Snipping Tool is standard on Windows computers, but it has limited editing options and does not auto-save images. It does allow you to capture a screenshot of a certain area rather than the entire screen.

A step up from the Snipping Tool is Snagit. It is paid software, but it allows multiple screenshots and saves them to a library to come back to later. Its editing capabilities are far superior, allowing annotation, arrows, blurring, etc.

https://www.techsmith.com/screen-capture.html

These are just a few examples of the countless screen capture programs out there. In short, screen capture is an essential tool for all business types.

Notepad++

Viewing and editing code without having to run it in production is valuable. Notepad++ is a versatile program that allows users to view code with highlighting for its programming language.

It has add-ins, and one can compare code sets against one another; it will highlight the differences, which can help find bugs, typos, etc.

Notepad++ is free. It keeps tabs saved, so you can close and re-open the program without having to save files to a specific location.

https://notepad-plus-plus.org/


Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Reducing downtime increases productivity, lowers costs, and decreases accidents. Downtime tracking software can be utilized to help reduce downtime. Knowing why the process is going down is key to reducing it.

Monitor, report, & analyze production loss from unplanned downtime, poor quality, and performance issues.

Learn more

What is Downtime?

Downtime is any period in which a process is not running. However, not all downtime is created equal. There are two types of downtime: planned and unplanned.

Downtime events represented visually in dataPARC’s process trending software.

What is Planned Downtime?

Planned downtime is when production schedules a time to take the process down. Planned downtime is a necessity to maintain machinery by conducting inspections, cleaning, and replacing parts.

Planned downtime allows operations to organize, schedule and prepare for the downtime. They can coordinate with contractors, order parts and plan tasks to complete while the process is down. Planned downs can be organized so personnel have tasks to accomplish and the necessary tools on hand.

What is Unplanned Downtime?

Unplanned downtime is when the machine or process is down for any unscheduled event. This can be due to a part break, lack of material, power outage, etc. Unplanned downtime is unpredictable and should be targeted when aiming to reduce overall downtime.

Importance of Reducing Unplanned Downtime

Unplanned downtime is significantly more costly and dangerous than planned downtime. Since unplanned downtime is unpredictable and the process could go down for numerous reasons, it is impossible to be prepared for every situation.

Waiting on parts or the necessary personnel to fix an issue takes time and could mean the machine stays down longer. Longer downtime means less time making product, directly affecting the bottom line.

Another cost attributed to unplanned downs is unsellable product and wasted material. The periods right before the down, during the down, and during startup tend to produce off-quality product.

Unplanned downtime can also contribute to near-misses or accidents. During unplanned downtime, the goal is to get the machine or process up and running again as soon as possible. This pressure can create a stressful, chaotic environment, resulting in people reacting rather than stopping to think about the best plan forward.

Reducing unplanned downtime can help lower overall operating costs. It also reduces the times when employees are put in unpredictable situations, decreasing the likelihood of an accident occurring.

How to Reduce Downtime

There are numerous reasons for process downtime and multiple approaches may need to be implemented in the effort to reduce it.

1. Track Downtime

Before jumping into the steps of reducing downtime, it is critical to track it. Tracking downtime lets you see why the process is going down and provides a metric for whether it is improving.

The data collected in tracking downtime will be used to help reduce it. Consider collecting the following data for each down occurrence (a sketch of such a record follows the list):

  • Duration
  • Reason/Cause
  • Product at time of down
  • Process Area
  • Shift or Crew
  • Operator Comments
  • Other attributes such as environmental occurrences due to downtime, waste collected over the duration, safety concerns, etc.
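
As a sketch of what such a record might look like when captured programmatically (the field names here are illustrative, not any particular product’s schema):

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class DowntimeEvent:
        """One record per down occurrence, capturing the attributes listed above."""
        start: datetime
        end: datetime
        reason: str            # from the predefined reason tree
        product: str           # product at time of down
        process_area: str
        crew: str
        comments: str = ""
        extra: dict = field(default_factory=dict)  # waste, safety concerns, etc.

        @property
        def duration_min(self) -> float:
            return (self.end - self.start).total_seconds() / 60

    event = DowntimeEvent(datetime(2022, 3, 1, 8, 15), datetime(2022, 3, 1, 9, 40),
                          reason="Sheet break", product="Grade A",
                          process_area="Dry end", crew="B",
                          comments="Break at the size press")
    print(f"{event.reason}: {event.duration_min:.0f} min")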

This data can be collected manually; however, an automated system will ensure the data is collected for every event. More consistent data will help reduce the downtime.

Downtime tracking software can automate data collection and help organize downtime events. Some considerations when researching downtime tracking software:

  • Ease of use
  • Automatic capture of downtime events
  • Recording of downtime causes and other data
  • Analysis of data and events
  • Integration with process data

There are many options for downtime tracking software on the market. Some are dedicated downtime tracking applications, while others, like dataPARC’s PARCview, offer a suite of manufacturing analytics tools that include a downtime tracking module. The right choice is the one that will be used consistently.

Looking to reduce downtime? dataPARC’s real-time production monitoring software uses smart alarms to automatically alert operators & maintenance crews to unplanned downtime events.

2. Monitor Production

Having a system to monitor production can also help reduce downtime.

Visible process trends at operator stations give a view of how the process is running over time and whether variables are drifting or staying consistent.

Real-time production dashboards can be used to display quality data, relaying information directly from the lab to operations. This ensures product is continuously on quality.

Alarms can be used independently or in conjunction with trends and dashboards to warn operators when upset conditions are occurring. This can allow them to react more quickly, potentially preventing a down from happening.

3. Create a Preventative Maintenance Schedule

Preventative maintenance happens during planned downtime or while the process is running. Part replacement during planned downtime allows the site to order the necessary parts and make sure the proper personnel are on site to perform the tasks, saving time and money.

Regular maintenance while the process is running, such as adding or changing lubricating oils and cleaning, can help increase the lifetime of parts.

Once a schedule is created, it can be tracked to ensure tasks are being accomplished. MDE (PARCview’s Manual Data Entry) can be configured on a time schedule and integrated with alerts. If a task is skipped, a reminder message can be sent to the operator or escalated to a supervisor.

Maintenance data can be captured and digitized to help predict downtime events for the development of preventative maintenance schedules.

Recording preventative maintenance data allows sites to analyze it alongside downtime and process data. Correlations can appear and help drive necessary maintenance and reduce downtime.

4. Provide Operator Decision Support

Unplanned down events are inevitable and cannot be eliminated completely, so reducing the duration of each event should be a priority alongside preventing them.

Creating tools and troubleshooting guides for operators to use in the event of a down will help get the process back up more quickly.

To get the process running, operators need to know why it went down in the first place. Providing operators with the necessary resources to find the root cause is key to resolving the issue quickly.

Process dashboards, trends, 5-Why analyses, and workflows can help determine the root cause.

Trends, dashboards, and centerlines can draw attention to significant changes in the process. dataPARC’s Centerline display is a tabular report with run-based statistics. This format helps ensure the process is consistent and can point to variables running outside of past operating conditions or limits.

Centerlines provide early fault detection and process deviation warnings, so operators can respond quickly to reduce unplanned downtime events.

A workflow or preconfigured 5-Why analysis can also help point to the root cause and a suggested solution.

Check out our real-time process analytics tools & see how you can reduce downtime & product loss.

Check out PARCview

5. Perform DMAIC Analysis

The above suggestions are starting points for reducing downtime. If those are in place, the DMAIC process (Define, Measure, Analyze, Improve, Control) can be used. It is a fundamental Lean Six Sigma tool and can be used to help reduce downtime.

Define

First, define the process, the conditions under which it is considered down, and a list of potential downtime reasons.

For each process, determine how it is identified as running or not running.

Much downtime tracking software needs a tag/variable to indicate when the process is considered down. If a specific tag does not exist, consider a utility feeding the process, such as steam, water, or pressure. As long as there is a clear value indicating whether the process is running, that variable can be used.
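
As an illustration, detecting down events from such a tag can be as simple as thresholding the time series. The cutoff and sample data below are hypothetical; a real system would also debounce short dips.

    def detect_downs(samples, cutoff):
        """Group consecutive below-cutoff samples into (start, end) down events.
        samples: list of (timestamp, value) pairs in time order."""
        events, start = [], None
        for ts, value in samples:
            if value < cutoff and start is None:
                start = ts                        # process just went down
            elif value >= cutoff and start is not None:
                events.append((start, ts))        # process came back up
                start = None
        if start is not None:
            events.append((start, samples[-1][0]))  # still down at end of data
        return events

    # Hypothetical steam flow readings; below 1.0 means the process is down.
    samples = [(0, 12.1), (1, 11.8), (2, 0.4), (3, 0.2), (4, 0.3), (5, 12.0)]
    print(detect_downs(samples, cutoff=1.0))  # [(2, 5)]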

Brainstorming a list of potential downtime reasons is also needed prior to tracking events. This reason list or tree can be shared across areas or unique to each process area.

Assigning reasons to downtime events provides data that can be used to reduce downtime in the future.

These reasons need to include both planned and unplanned causes. During the Analyze phase, the planned reasons can be filtered out to focus on unplanned downtime. For more information on creating a reason tree, see 5 steps to harness your data’s potential.

Measure

Measuring the downtime and assigning a reason to it are critical steps in being able to reduce it. Robust downtime tracking software will make measuring downtime easier. Make sure to capture the who, what, when, where, and why of each downtime event.

Once the downtime tracking software records the downtime event, a reason can be assigned.

Some systems can automatically assign reasons based on an error code from the machine. Users can verify the reason or select it from the predefined reason tree.

Additional information can be helpful to capture for the Analyze phase. You may consider allowing users to type in free-form comments in addition to the predefined reason to further explain why a downtime event occurred. If using PARCview, the evidence field can be configured to capture other important process data over the duration of the event.

Analyze

Now that the downtime is recorded and categorized, it can be analyzed. Pareto charts are useful when analyzing downtime events. Data can be charted on a Pareto by duration or by count of events.

Pareto charts can help you analyze downtime events and learn the most significant causes of downtime.
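
If your tracking software does not chart Paretos directly, one can be built from the recorded events in a few lines. This sketch assumes pandas and matplotlib are available and uses hypothetical reasons and durations:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical downtime events: reason and duration in minutes.
    events = pd.DataFrame({
        "reason": ["Sheet break", "Felt change", "Sheet break", "Power dip",
                   "Sheet break", "Felt change"],
        "minutes": [45, 120, 30, 15, 60, 90],
    })

    by_reason = events.groupby("reason")["minutes"].sum().sort_values(ascending=False)
    cum_pct = by_reason.cumsum() / by_reason.sum() * 100

    fig, ax1 = plt.subplots()
    ax1.bar(by_reason.index, by_reason.values)             # minutes lost per reason
    ax1.set_ylabel("Downtime (min)")
    ax2 = ax1.twinx()
    ax2.plot(by_reason.index, cum_pct.values, marker="o")  # cumulative percentage line
    ax2.set_ylabel("Cumulative %")
    plt.title("Downtime Pareto by Reason")
    plt.show()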

Take into consideration other key process data, such as safety concerns, environmental risks, or material wasted, in addition to the duration of events, to help determine which downtime cause will be most beneficial to target and reduce.

It is not always the reason with the most total minutes down that should be the target.

Take, for instance, an event that caused 15 hours of downtime but was due to a weak part and is unlikely to happen again, versus a cause that happens monthly but results in only about 75 minutes of downtime each time. Over a year, the recurring cause adds up to the same 15 hours (12 × 75 minutes) and will keep accruing, so it is the more beneficial event to improve.

Improve

When looking for ways to reduce a downtime cause, look both at how to prevent the event from occurring in the first place and at how to get the process back up when the event does occur. Both approaches are needed to reduce downtime.

Think about the frequency of inspections and cleanings, how long parts last, and whether parts can be put on a schedule to be replaced rather than waiting for them to fail while the process is running. Refer to the preventative maintenance schedule and update it as needed.

Determine the best way to reduce the most impactful downtime causes or reduce the effect. A payoff matrix can help point to the most impactful, least costly solutions.

Control

Continue to measure and analyze the downtime to ensure causes that have been reduced do not start popping back up. Repeat the cycle and target another reason. Workflows and SOPs (Standard Operating Procedures) can be created to help stay in control.

Conclusion

Reducing unplanned downtime requires multiple approaches; finding the right tools and software for tracking and monitoring is key. Data is needed to drive improvement, both in preventing future events from occurring and in reducing the duration when the process does go down.

Downtime tracking software can help save, organize, and review downtime events, allowing you to more effectively reduce downtime in your manufacturing process. dataPARC’s PARCview integrates downtime tracking and process monitoring in one user-friendly program.


Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Go beyond a typical gap analysis with a real-time gap tracking dashboard. Minimize manufacturing gaps such as operational cost or waste by creating gap tracking systems, such as dashboards, that calculate the gap in real time rather than at the end of the month. In this article we outline why in-line gap tracking is beneficial and walk through the steps to create real-time calculations and dashboards that track manufacturing gaps as they happen, allowing operations to make data-driven decisions.

Implement real-time gap tracking with dataPARC to help you optimize & control your processes

View Process Optimization Solutions

Why Gap Tracking?

Gap tracking goes a step beyond gap analysis. Gap analysis is the comparison of actual operating conditions against targets; it is typically done monthly and is an important tool in the continuous improvement process. However, gap analysis has some shortcomings. The feedback loop is drawn out: by the time you collect the data and compare values, it can be days or months after the fact. This can prevent actionable solutions and cause lost opportunity.

To resolve these shortcomings, operations needs real-time information correlated to the levers they can pull on the machine. They need the power to make decisions and adjustments based on this information. The real-time information needs to be quick and easy to view and understand, and progress needs to be measured and visible in real time. A dashboard can provide this.

Dashboards help visualize live data, and updates or changes to a graphic can happen relatively quickly. Operations can use a real-time gap tracking dashboard to alert them to what is causing the gap and get the process back on track in the moment, rather than realizing the problem days, weeks, or months later.

How to Create a Gap Tracking Dashboard

Follow along as we demonstrate how we built this gap tracking dashboard.

Conduct a Gap Analysis

The first step in creating a Real-Time Gap Tracking dashboard is to complete a Gap Analysis:

  • Define the area of focus and targets
  • Measure the variables
  • Analyze the targets against the current values
  • Improve the process with quick wins to minimize large gaps
  • Control with routine gap analysis; this can include creating a gap tracking dashboard.

Gap Tracking Requirements

Regardless of the process and gap being tracked, the same general information is needed to build a gap tracking dashboard. Many of the following requirements will be pulled from the Gap Analysis.

Adequate measurements

As with gap analysis, adequate measurements are required for gap tracking; however, measurements may need to be taken more frequently for a responsive gap calculation. This can be a challenging step, but the more variables that can be measured close to real time, the more accurate the gap tracking calculations will be.

If variables have only 3-4 data points per day, it can be difficult to see how changes affect the gap in real time. Variables without adequate measurement may be removed from the dashboard, with more emphasis placed on those with more data points.

Process baselines

Process baselines are a great way to determine targets if they are not already outlined. Overall process targets typically come from upper management or operational plans. It is necessary to break these overall targets into their individual inputs. Those inputs could be broken down even further. Depending on the process, there could be targets for different products.

One way to determine a baseline is to find periods of good quality and production, identify the operating conditions during those periods, and determine how they can be replicated.

Once a list of individual variables is created, a target should be assigned to each. By meeting each individual target, the overall target should be met. If the individual variables do not have targets, the process baselines can be used instead.

dataPARC’s Centerline display is a smart aggregation tool which can be used to help establish operational baselines.

Custom calculations

Another key step in building a gap tracking system is standardizing measurements and units. All variables should be converted to a per-unit basis; common options include dollars per ton, dollars per hour, off-quality tons per ton, or waste tons per ton.

Once all the input variables are converted to the same unit, they can be combined to create the overall process gap.
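
To make the standardization concrete, here is a minimal sketch that converts each input to dollars per hour and sums the results into an overall gap. The tags, targets, and unit costs are hypothetical:

    # Hypothetical inputs: (actual usage per hour, target usage per hour, cost per unit).
    inputs = {
        "steam_klb":       (52.0, 48.0, 6.50),    # klb/hr at $/klb
        "starch_lb":       (410.0, 395.0, 0.22),  # lb/hr at $/lb
        "off_quality_ton": (0.8, 0.5, 310.0),     # ton/hr at $/ton
    }

    def gap_per_hour(inputs):
        """Gap in $/hr for each input: (actual - target) * unit cost."""
        return {name: (actual - target) * cost
                for name, (actual, target, cost) in inputs.items()}

    gaps = gap_per_hour(inputs)
    for name, g in gaps.items():
        print(f"{name}: {g:+.2f} $/hr")
    print(f"overall gap: {sum(gaps.values()):+.2f} $/hr")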

Value opportunities

Involving those with a high degree of process knowledge is critical. They will be able to help identify all the process inputs, then narrow the list to variables that can provide the most value opportunities.

These value opportunities are then tied into the gap tracking dashboard as an operator workflow. The workflow focuses on the variables that operators are able to control and that have the greatest effect on the gap. This is where the calculated gap gets connected to process levers that operators can manipulate to get things back on track.

With insight from a process expert, one or two variables may stand out as having the most room for value-added opportunities.

These variables should be the focus of the graphic’s layout. As most people read from left to right and top to bottom, the most important information should appear in the upper left of the dashboard (or oriented closest to the operator if the monitor will be off to the side or high up). This helps ensure that the variables with the most opportunity to close the gap are looked at first.

Operator buy-in

Ultimately, operators are the ones who will use the dashboard to make data-driven decisions in real time. Involve those who will end up using the dashboard in the design and implementation to help build ownership and operator buy-in. Without operator buy-in, the dashboard is a waste.

Software to visualize the dashboard and perform the calculations

Find the right visualization tool for your site. A process data visualization tool should be able to display trends, grids with the ability to change colors or provide alerts, and links to other displays or trends for quick data interpretation.

In this example dataPARC’s Graphic Designer was used to build the dashboard.

Depending on the process and the number of inputs, these calculations can get rather large and take a while to process on the fly, and even longer when looking at past data.

dataPARC’s Calc Server allows calculations to be historized, making it a great tool for fast calculations and viewing history.

Considerations When Building & Using a Gap Tracking Dashboard

Gap tracking dashboards are going to look different from machine to machine and site to site. Here is a list of suggestions to keep in mind while building your own gap tracking dashboard.

  • Use grids and rolled-up data to convey the current gap. Colored or patterned backgrounds can help draw attention.
  • When a gap occurs, what levers can operations pull to make a change? These variables should be a focus of the dashboard. This can be done by adding trends, focused metrics, or a link to another display to “zoom into” that lever and see whether a change can be made and how it affects the process.
  • Work with operators to determine those levers and key pieces of information; ask what would be helpful for them to see on the dashboard. The goal is to get all the important information in one place.
  • Monitor progress to determine whether the overall gap is decreasing; if not, determine why. Make changes to the dashboard as needed.
  • The dashboard should provide information without setting requirements on how to run, as not all variables are always within the operators’ control.

Manufacturing Gap Tracking Example

Let’s look at a Gap Tracking dashboard for a paper machine.

1. Summary Trend

The first element is a large trend showing the overall paper machine gap in dollars per day. The blue line represents the real-time gap and the yellow line the target.

2. Category Trend

The second trend, in the bottom left, shows the individual calculated gaps by category, also on a dollars-per-day basis. This view allows the user to quickly identify if a category is trending in the wrong direction.

3. Gap Tracking Table

The table in the bottom right of the screen shows the current category gaps on a dollars-per-ton and dollars-per-hour basis. Values are highlighted red for over expected cost and green for under. At a glance, users can see the current status of each category.

4. Chemical Usage

Off to the right is a chemical usage button that pulls up another display. This button was added because chemical usage was found to have multiple inputs and levers the operator can pull to close the gap.

Watch the video below to see this gap tracking dashboard in action.

Manufacturing analytics software like dataPARC’s PARCview offers tools to help manufacturing companies perform real-time gap tracking after a gap analysis.

Conclusion

A gap tracking dashboard can provide operators with a clearer picture of the gap in real-time, allowing them to make data-driven decisions. The alerts and real-time calculations bring awareness, letting operators know when something isn’t running optimally.

It is a way to drive process savings by tying analytics to actionable changes.

Instead of waiting until the end of the month to find there was a gap, it is found in real time. Troubleshooting moves into the present: changes can be made immediately to reduce or prevent larger process losses.

Although gap tracking dashboards are powerful tools, they do not replace regular gap analysis. To drive continuous improvement, gap analysis should be done regularly, and targets on the gap tracking dashboard should be adjusted to reflect any process changes.


Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Both established operational data historians and newer open-source platforms continue to evolve and add new value to business, but the significant domain expertise now embedded within data historian platforms should not be overlooked.

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (operational, or data historians), and newer open source time-series databases.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

Data Historian vs. Time Series Database

Functionally, at a high level, both classes of time-series databases perform the same task of capturing and serving up machine and operational data. The differences revolve around types of data, features, capabilities, and relative ease of use.

Time-series databases and data historians, like dataPARC’s PARCserver Historian, capture and return time series data for trending and analysis.

Benefits of a Data Historian

Most established data historian solutions can be integrated into operations relatively quickly. The industrial world’s versions of commercial off-the-shelf (COTS) software, such as established data historian platforms, are designed to make it easier to access, store, and share real-time operational data securely within a company or across an ecosystem.

While, in the past, industrial data was primarily consumed by engineers and maintenance crews, this data is increasingly being used by IT as companies accelerate their IT/OT convergence initiatives, as well as by financial departments, insurance companies, downstream and upstream suppliers, equipment providers selling add-on monitoring services, and others. While the associated security mechanisms were already relatively sophisticated, they are evolving to become even more secure.

Another major strength of established data historians is that they were purpose-built and have evolved to be able to efficiently store and manage time-series data from industrial operations. As a result, they are better equipped to optimize production, reduce energy consumption, implement predictive maintenance strategies to prevent unscheduled downtime, and enhance safety. The shift from using the term “data historian” to “data infrastructure” is intended to convey the value of compatibility and ease-of-use.

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

What about Time Series Databases?

In contrast, flexibility and a lower upfront purchase cost are the strong suits for the newer open source products. Not surprisingly, these newer tools were initially adopted by financial companies (which often have sophisticated in-house development teams) or for specific projects where scalability, ease-of-use, and the ability to handle real-time data are not as critical.

Since these new systems were somewhat less proven in terms of performance, security, and applications, users were likely to experiment with them for tasks in which safety, lost production, or quality are less critical.

While some of the newer open source time series databases are starting to build the kind of data management capabilities already typically available in a mature operational historian, they are not likely to completely replace operational data infrastructures in the foreseeable future.

Industrial organizations should use caution before leaping into newer open source technologies. They should carefully evaluate the potential consequences in terms of development time for applications, security, costs to maintain and update, and their ability to align, integrate or co-exist with other technologies. It is important to understand operational processes and the domain expertise and applications that are already built-into an established operational data infrastructure.

Why use a Data Historian?

Typical connection management and config area from an enterprise data historian.

When choosing between data historians and open source time-series databases, many issues need to be considered and carefully evaluated within a company’s overall digital transformation process. These include type of data, speed of data, industry- and application-specific requirements, legacy systems, and potential compatibility with newly emerging technologies.

According to the process industry consulting organization ARC Advisory Group, modern data historians and data infrastructures will be key enablers for the digital transformation of industry. Industrial organizations should give serious consideration when investing in modern operational historians and data platforms designed for industrial processes.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

11 Things to Consider When Selecting a Data Historian for Manufacturing Operations:


1. Data Quality

The ability to ingest, cleanse, and validate data. For example, are you really obtaining a true average? If someone calibrates a sensor, will the average include the calibration data? If an operator or maintenance worker puts a controller in manual, has a failed instrument, or is overriding alarms, does the historian or database still record the data? Will the average include the manual calibration setpoint?
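
The kind of filtering implied here can be sketched as excluding samples whose quality flag marks them as calibration or manual-mode data before averaging. The flag names are illustrative, not a specific historian’s quality codes:

    EXCLUDE = {"CALIBRATION", "MANUAL", "BAD"}

    def clean_average(samples):
        """Average only the samples whose quality flag marks them as good process data."""
        good = [value for value, flag in samples if flag not in EXCLUDE]
        return sum(good) / len(good) if good else None

    samples = [(101.2, "GOOD"), (0.0, "CALIBRATION"), (100.8, "GOOD"),
               (50.0, "MANUAL"), (101.0, "GOOD")]
    print(clean_average(samples))  # 101.0, not skewed by calibration or manual values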

2. Contextualized Data

When dealing with asset and process models based on years of experience integrating, storing, and accessing industrial process data and its metadata, it’s important to be able to contextualize data easily. A key attribute is the ability to combine different data types from different data sources. Can the historian combine data from spreadsheets and different databases or data sources, precisely synchronize time stamps, and make sense of it all?

3. High Frequency/High Volume Data

It’s also important to be able to manage high-frequency, high-volume data based on the process requirements, and expand and scale as needed. Increasingly, this includes edge and cloud capabilities.

4. Real-Time Accessibility

Data must be accessible in real time so the information can be used immediately to run the process better or to prevent abnormal behavior. This alone can bring enormous insights and value to organizations.

5. Data Compression

Deep compression based on specialized algorithms that compress data while still enabling users to reproduce a trend if needed.
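
One simple scheme of this kind is deadband (exception) compression: a sample is stored only when it moves more than a tolerance from the last stored value, preserving the shape of a trend at a fraction of the storage. The sketch below is a generic illustration, not any vendor’s algorithm:

    def deadband_compress(samples, tolerance):
        """Keep a sample only if it moves more than `tolerance` from the last kept value."""
        if not samples:
            return []
        kept = [samples[0]]
        for ts, value in samples[1:]:
            if abs(value - kept[-1][1]) > tolerance:
                kept.append((ts, value))
        return kept

    raw = [(0, 100.0), (1, 100.1), (2, 100.05), (3, 101.5), (4, 101.6), (5, 99.0)]
    print(deadband_compress(raw, tolerance=0.5))
    # [(0, 100.0), (3, 101.5), (5, 99.0)] -- trend shape kept with half the points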

6. Sequence of Events

SOE capability enables users to reproduce precisely what happened in operations or a production process.

7. Statistical Analytics

Built-in analytics capabilities for statistical spreadsheet-like calculations as well as more complex regression analysis. Additionally, time series systems should be able to stream data to third-party applications for advanced analytics, machine learning (ML), or artificial intelligence (AI).

8. Visualization

The ability to easily design and customize digital dashboards with situational awareness that enable workers to easily visualize and understand what is going on.

9. Connectability

Ability to connect to data sources from operational and plant equipment, instruments, etc. While often time-consuming to build, special connectors can help. OPC is a good standard but may not work for all applications.

10. Time Stamp Synchronization

Ability to synchronize time stamps based on the time the instrument is read wherever the data is stored – on-premises, in the cloud, etc. These time stamps align with the data and metadata associated with the application.

11. Partner Ecosphere

Ability to easily layer purpose-built vertical applications onto the infrastructure for added value.

Looking Ahead

Rather than compete head on, it’s likely that the established historian/data infrastructures and open-source time-series databases will continue to co-exist in the coming years. As the open-source time series database companies progressively add distinguishing features to their products over time, it will be interesting to observe whether they lose some of their open-source characteristics. To a certain extent, we previously saw this dynamic play out in the Linux world.


Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Manufacturers use a variety of tools and systems every day to manage their process from start to finish. It is critical that these systems provide a “single pane of glass” and a “single version of truth” along the way, meaning data can be viewed from any device, location, or system and will be synchronized across all platforms. Manufacturing Operations Management systems make this possible.

Real-time manufacturing operations management and industrial analytics tools

Check out PARCview

What is Manufacturing Operations Management?

Manufacturing Operations Management (MOM) is a form of LEAN manufacturing in which a collection of systems is used to manage a process from start to finish. The key to MOM is ensuring data is consistent across all systems being used, from scheduling and production to shipment and delivery.

MOM includes software tools designed for the management of people, business processes, technology, and capital assets to meet customer demand while creating shareholder value. Tying in LEAN manufacturing, processes must be performed efficiently and resources managed productively. These are the prerequisites for successful operations management.

Key Applications of Manufacturing Operations Management


Supply chain & resource management

MOM systems include tools for planning, procuring, and receiving raw materials and components, especially as it relates to obtaining, storing, and moving necessary materials/components in a timely manner and of suitable quality to support efficient production, something that is certainly critical in these times of supply chain disruptions.

To deal with today’s dynamic business environment, with challenges ranging from pandemics, shutdowns, and geo-political conflicts to supply chain disruptions, organizations need to be sustainable, operationally resilient, conformant to ESG goals, equipped with the latest cybersecurity tools, and able to connect their workforces from any location.

Process & production management

Once all the resources are gathered, MOM tools need to be established for implementing product designs to specifications, developing the formulations or recipes for manufacturing the desired products, and manufacturing products that conform to specifications and comply with regulations.

Organizations must monitor and adjust their processes quickly and automatically, to efficiently evaluate the situation when an inevitable glitch occurs. This is a prime opportunity for digital transformation through MOM systems.

Distribution & customer satisfaction management

The final stage of MOM relates to the distribution to the customers, particularly as it relates to sequencing and in-house logistics, as well as supporting products through their end-of-life cycles.

Organizations must react in real-time to changing market conditions and customer expectations. They will have to innovate with new business processes that reach throughout the organization, into the design and supply chain.

Driving innovation & transformation

Successfully innovating at this level involves managing people, processes, systems, and information. When disruptive technologies are in the mix, the first challenge is often tied up in the interplay of people and technology.

Only when the people involved begin to understand what the new MOM technologies are capable of and have the tools to visualize the data and real-time manufacturing analytics software to convert this data into actionable information can they begin to take steps towards achieving the innovation.

One output of manufacturing operations management systems is a production dashboard, like this one built with dataPARC’s PARCview, which creates a shared view of current operating conditions and critical KPIs.

Manufacturing Operations Management Systems

Today’s MOM systems can play a role in achieving the next levels of operations performance because they marshal many or all the needed services in one place and can provide a development and runtime environment for small or large applications.

Common MOM Tools

In addition to leveraging the latest AI, ML, AR/VR, APM, digital twin, edge, and Cloud technologies, MOM systems often consist of one or more of the following:

  • Manufacturing Execution Systems (MES)
  • Enterprise Asset Management (EAM)
  • Human-Machine Interface (HMI)
  • Laboratory Information Systems (LIMS)
  • Plant Asset Management (PAM)
  • Product Lifecycle Management (PLM)
  • Real-time Process Optimization (RPO)
  • Warehouse Management Systems (WMS)

MOM systems integrate with business systems, engineering systems, and maintenance systems both within and across multiple plants and enterprises.

An example of a multi-site operation with unique manufacturing applications at each site. Some tools, like dataPARC’s PARCview, enable manufacturers to integrate data across sites for more effective manufacturing operations management.

Supply Chain Management (SCM), Supplier Resource Management (SRM), and Transportation Management Systems (TMS) are commonly used to manage the supply chain.

Plant automation systems, such as Distributed Control Systems (DCS) and Programmable Logic Controllers/Programmable Automation Controllers (PLCs/PACs) are key technologies driving manufacturing production.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Data Visualization and Real-Time Analytics for MOM

Maybe the most important manufacturing operations management tools for managing production are the data visualization and real-time manufacturing analytics software platforms, like dataPARC, which provide integrated operations intelligence and time-series data historian software.

These MOM tools focus on data connectivity, real-time plant performance, and visualization + analytics to empower plant personnel and support their decision-making process.

Benefits of MOM Visualization & Analytics Tools


Eliminate Data Silos

Most real-time manufacturing operations management analytics tools offer the ability to connect to both manufacturing and operations data. Data from traditionally isolated data silos, such as lab quality data, or ERP inventory data, can be pulled in and presented side-by-side for analysis in a single display.

Establish a single source of truth

MOM analytics tools offering visualization plus integration capabilities enable manufacturers to create a “single version of truth” which everyone from management to the plant floor can use to understand the true operating conditions at a plant.

Often combining multiple sites and multiple data sources to form a single view, users leverage this data to gain perspectives and intelligence from both structured and unstructured operational and business data.

Produce common KPI dashboards

By measuring metrics and KPIs, such as production output, yields, material costs, quality, and downtime, users at multiple levels and roles can make better decisions to help improve production efficiencies and business performance.

Real-time manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites or from multiple manufacturing process areas and display them in a common dashboard.

Manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites for real-time production monitoring

Facilitate data-driven decision-making

Without operations intelligence provided by manufacturing operations management systems, users are often unable to properly understand how their decisions affect the process. MOM analytics software can display data sourced in the business systems for direct access to cost, quality control, and inventory data to support better business decisions.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Final Thoughts

The next generation of MOM systems is here. The economics of steady-state operations have been replaced with a dynamic, volatile, disruptive economic environment in which adapting to changing supply and demand, along with issues such as pandemics, shutdowns, and geo-political conflicts, is the norm. Tighter production specifications, greater economic pressures, and the need to maintain supply chain visibility in real time, be sustainable and operationally resilient, plus more stringent process safety measures, cybersecurity standards, ESG goals, and environmental regulations further challenge this environment.

Managing these challenges requires more agile, less hierarchical structures; highly collaborative processes; reliable instrumentation; high availability of automation assets; excellent data; efficient information and real-time decision-support systems; accurate and predictive models; and precise control. Uncertainty and risks must be well understood and well managed in all aspects of the decision-making process.

Perhaps most importantly, everyone must have a clear understanding of the business objectives and progress toward those objectives. Increasingly, effective manufacturing operations management requires real-time decisions based on a solid understanding of what is happening, and the possibilities over the entire operations cycle. Organizations pursuing Digital Transformation should consider focusing on MOM systems and not just transformative new technologies to drive operations performance to new levels. This means utilizing software tools, such as manufacturing data integration, visualization, and real-time manufacturing analytics software that gathers a user’s manufacturing data in one single pane of glass view and establishes a single source of the truth.

This article was contributed by Craig Resnick. Craig is a primary analyst at ARC Advisory Group. His focus areas include production management, OEE, HMI software, automation platforms, and embedded systems.


Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Minimize manufacturing gaps such as operational cost or waste by performing a gap analysis with process data. In this article we will walk through the steps of a basic manufacturing gap analysis and provide an example.

Implement real-time gap tracking with dataPARC to help you optimize & control your processes

View Process Optimization Solutions

What is a Gap Analysis for Manufacturing?

A gap analysis is the process of comparing current operating conditions against a target and determining how to bridge the difference. This is an essential part of continuous improvement and LEAN Manufacturing.

A manufacturing gap analysis can be performed on a variety of metrics, such as:

  • operational costs
  • quality
  • productivity
  • waste
  • etc.

When it comes to bridging the gap, the ideal case is to reach the target with a single, permanent change. This is not always possible; there are times when the gap is fluid and would benefit from constant monitoring and small adjustments. In these cases, operations can utilize a real-time gap tracking dashboard to alert them to what is causing the gap and get the process back on track in the moment, rather than realizing the problem days, weeks, or months later.

Manufacturing analytics software like dataPARC’s PARCview offers tools to help manufacturing companies perform real-time gap tracking after a gap analysis.

Who Conducts a Manufacturing Gap Analysis?

A gap analysis can be performed by anyone trying to optimize a process. As mentioned above, there is a multitude of metrics that can be measured.

A process engineer might want to reduce operational cost by focusing on energy consumption; someone in the finance department may notice an increase in chemical cost every month; and a supervisor may want to reduce the time it takes to complete a task in order to focus on other items.

Almost every department can leverage gap tracking in one way or another.

How to Perform a Gap Analysis in Manufacturing

Like many other improvement strategies, we can use the DMAIC method (Define, Measure, Analyze, Improve, Control) to perform a gap analysis and implement a live gap tracking dashboard.

To create a gap tracking dashboard, a gap analysis needs to be completed first. In the last stage, Control, the dashboard is created, and operations can perform the Analyze-Improve-Control steps in real time.

1. Define

The first step is to define the area of focus and identify the target. A great place to start when looking for an area of focus is the company’s strategic business plan, operational plan, or yearly operational goals. Many times, these goals will already have targets in place.

2. Measure

Next, the process must be measured. Take a close look at the measurement system. Is the data reliable? Does the measurement system provide the necessary information? If so, measure the current state of the process.

If there is no current measurement system, one will need to be created. Although in-process measurements or calculations are best, manual input can also be used.

Some manufacturing analytics providers, like dataPARC, offer manual data entry tools that allow users to create custom tags for manual input. These tags can be trended and used like process tags in dashboards and displays.

3. Analyze

Take the data and compare it to the goal. How far from the target is the process? This is the gap. It may help to visualize the process gap in multiple ways, such as with a histogram or trend display.

This histogram provides the overall distribution of the data which can help narrow the focus. What does the peak look like, is it a normal distribution, skewed to one side, is there a double-peak, or edge peak?

A trend shows how the process is shifting overtime, are there times of zero gap vs large gaps such as shift or season?
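
If your process data is available as a simple export, a few lines of Python are enough to produce both views. The sketch below is illustrative only: the file name, column names, and the target of 100 are placeholder assumptions.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative: daily operational cost exported from a historian.
# The file name, column names, and target value are assumptions.
df = pd.read_csv("daily_cost.csv", parse_dates=["date"], index_col="date")
TARGET = 100.0  # e.g. $/ton from the operational plan (placeholder)

df["gap"] = df["cost"] - TARGET  # positive gap = running above target

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: is the gap normal, skewed, double-peaked, edge-peaked?
df["gap"].hist(bins=30, ax=ax1)
ax1.set_title("Gap distribution")

# Trend: does the gap shift over time, e.g. by shift or season?
df["gap"].plot(ax=ax2)
ax2.axhline(0, linestyle="--")  # zero-gap reference line
ax2.set_title("Gap over time")

plt.tight_layout()
plt.show()
```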

With the measurement system in place, the gap realized, and some graphical representations of the data, it is time to brainstorm potential causes of the gap. Brainstorming is not a time to eliminate ideas; get everything written down first. There are a variety of tools that can be used to help in this process:

Fishbone Diagram

This classic tool helps determine root causes by separating the process into categories. The most common categories are People, Process/Procedure, Supplies, Equipment, Measurement, and Environment, but other categories or any combination can be used to fit the situation.

The fishbone diagram is a classic tool for performing root cause analysis.

The team can brainstorm each category and identify any causes that could play a role in the problem. Dive one step further with a 5-why analysis, a method that simply asks “why” until it cannot be answered any more, to ensure the true root cause is uncovered.

Is gap analysis one of your digital transformation goals? Let our Digital Transformation Roadmap guide your way.

get the guide

SWOT Chart

This chart is made of four squares, with labeled sections: Strengths, Weaknesses, Opportunities, and Threats. This strategy is used to determine the internal and external factors that drive the effectiveness of the process. For potential root causes, focus on what appears under Weaknesses, and see if potential solutions find their way into Opportunities.

SWOT charts are another fundamental root cause analysis tool.

McKinsey 7S Framework

The McKinsey 7S framework is made up of seven elements, categorized as three “hard” (controllable) elements and four “soft” (less tangible) elements. For each element, write the current and desired state. It is important that the elements are in alignment with one another; any misalignment could point to a root cause.

The McKinsey Framework.

4. Improve

Determine the best way to bridge the gap and implement the changes. A payoff matrix or efficiency impact trend can help pick the most effective, least costly options. Focus on the quick wins. Items that land in Busy Work can be completed but are not a priority. For those in Major Projects you must ask: is the price worth the impact? Anything that is low impact and high cost can be dropped.

A payoff matrix or efficiency impact trend can help you determine the best way to bridge gaps.

After the solutions are implemented, check the results by analyzing the data again and see if there was an improvement.

5. Control

Once the target is met, it is important to keep it that way. Monthly reports can be used to track the process gap and make sure it stays in the desired range.

Set up a dashboard or other visual to monitor the process in real time. By tracking the gap in real time, operations can see how changes to the process affect the bottom line immediately, rather than at the end of the month.
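
As a rough illustration of the idea, the snippet below checks each new reading against the target. The target value, threshold, and sample reading are all placeholders; a real dashboard would run this on every new historian value and wire the alert to email or text notifications.

```python
# Minimal gap-alert check; all numbers are illustrative placeholders.
TARGET = 100.0          # target from the gap analysis
ALERT_THRESHOLD = 5.0   # how far above target before alerting

def check_gap(timestamp, cost):
    """Return an alert message if this reading is too far above target."""
    gap = cost - TARGET
    if gap > ALERT_THRESHOLD:
        return f"{timestamp}: cost {cost:.1f} is {gap:.1f} above target"
    return None  # within range, no alert

# One reading shown here; in practice this runs on each new value.
alert = check_gap("2024-05-01 08:00", 108.2)
if alert:
    print(alert)  # hook this up to an email or text notification
```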

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

An Example of Gap Analysis for Manufacturing

In this manufacturing example we are going to walk through a gap analysis to improve operational costs on a single paper machine.

Define

The company’s operational plan has a goal for monthly operational cost. To break this down into a manageable gap analysis, the focus will be looking at a single machine. This machine is not currently meeting the monthly operational cost goal on a regular basis.

Measure

Since this is an initiative from an operational plan, there is already a measurement system in place. The machine’s operational costs are broken down into five variables: Speed, Steam, Chemical, Furnish, and Basis Weight.

These variables are measured continuously, so data can be pulled as hourly, daily, and/or monthly averages. The variety of data views will help in the next stage. Each of these variables has a target, but some are missing upper and lower control limits.
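
A sketch of how those views might be produced from a raw export, assuming pandas and placeholder file and column names for the five variables above:

```python
import pandas as pd

# Illustrative: continuous cost-component tags exported from the historian.
# The file name and column names are assumptions for this sketch.
df = pd.read_csv("machine_costs.csv", parse_dates=["time"], index_col="time")

# The same continuous data, viewed at different granularities:
hourly = df.resample("1h").mean()
daily = df.resample("1D").mean()
monthly = df.resample("MS").mean()  # "MS" = calendar-month buckets

# Combined daily operational cost, summed across the five variables:
cost_cols = ["speed", "steam", "chemical", "furnish", "basis_weight"]
daily["total_cost"] = daily[cost_cols].sum(axis=1)
```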

Analyze

First, the combined daily operational cost was compared against the target. There were days when the target was met, but not consistently.

Next, each of the five variables was compared with its target over the past several months. From this view, Chemical and Steam stood out as the two main factors driving up the operational cost. With that in mind, we moved on to the fishbone diagram and 5-why analysis.

Using the fishbone diagram, we were able to determine that chemical and steam were the two main factors driving up our costs over the past several months.

Improve

From the fishbone and 5-why analysis, we found that there were targets but no control limits set for the chemicals. Operators were adding the amount of chemical they felt would pass the quality tests, without trying to apply only the necessary amount.

Thinking about the cost/effectiveness diagram, it is essentially cost-free to add control limits to each chemical additive. Engineers pulled chemical and quality data from multiple months, created a histogram to find the distribution, and set control limits to give operators a better gauge of how much chemical to apply and the typical range needed to satisfy the quality tests.
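
The article does not prescribe a rule for setting the limits; one conventional choice is the mean plus or minus three standard deviations of the historical data, as in this sketch (file and column names are placeholders):

```python
import pandas as pd

# Illustrative: months of usage data for one additive during periods
# where quality tests passed. File and column names are placeholders.
usage = pd.read_csv("chemical_usage.csv")["additive_kg_per_ton"]

mean = usage.mean()
sigma = usage.std()

# Conventional starting point: center line at the mean, limits at +/- 3 sigma.
center_line = mean
lower_limit = mean - 3 * sigma
upper_limit = mean + 3 * sigma

print(f"CL={center_line:.2f}  LCL={lower_limit:.2f}  UCL={upper_limit:.2f}")
```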

For steam, there were many potential root causes around the fiber mix and cook. The mill already had SOPs for dealing with situations such as bad cooks. Another root cause that came up during the fishbone exercise was steam leaks. Most leaks can be fixed while the machine is running, so over the next several weeks there was a push to find and close major leaks.

Control

In this case, since limits were created for chemical usage, alarms were also created to alert operations if they exceeded a control limit. Alerts are a great way to notify operations when a process is drifting out of control so quick corrections can be made.

After a few weeks of these changes, another analysis was completed. The operational costs were meeting the target, and it was time to move on to the next process. It is important not to forget about operational cost, however; it continued to be monitored monthly to ensure it did not exceed the target.

Conclusion

Performing routine gap analysis is an important part of LEAN Manufacturing and continuous improvement. By following the above steps, manufacturers can optimize their processes by reducing waste and operational costs, improving quality, or going after other key metrics.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The most important differences between relational databases, time-series databases, data lakes, and other data sources are their ability to handle time-stamped process data and ensure data integrity.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

The Manufacturing Database Battle

When comparing data management technologies for manufacturing, the most important differences between relational databases, time-series databases, and data lakes lie in their ability to handle time-stamped process data and ensure data integrity.

This is relevant because the primary job of the data management technology is to:

  • Accurately capture a broad array of data streams
  • Deal with very fast process data
  • Align time stamps (see the sketch after this list)
  • Ensure the quality and integrity of the data
  • Ensure cybersecurity
  • Serve up these data streams in a coherent, contextualized way for operational personnel
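
Aligning time stamps is worth making concrete. The pandas sketch below (tag names, sample times, and the one-minute grid are all invented for illustration) merges two irregularly sampled sensor streams onto a common time index so they can be served up as one coherent view:

```python
import pandas as pd

# Two illustrative sensor streams reporting at different, irregular rates.
flow = pd.Series(
    [10.2, 10.4, 10.1],
    index=pd.to_datetime(
        ["2024-05-01 00:00:03", "2024-05-01 00:01:02", "2024-05-01 00:02:05"]
    ),
)
temp = pd.Series(
    [81.0, 81.5],
    index=pd.to_datetime(["2024-05-01 00:00:30", "2024-05-01 00:02:10"]),
)

# Align both streams onto a common 1-minute grid, carrying the last
# known value forward, so they can be viewed and compared together.
grid = pd.date_range("2024-05-01 00:00", periods=3, freq="1min")
aligned = pd.DataFrame(
    {
        "flow": flow.reindex(grid, method="ffill"),
        "temp": temp.reindex(grid, method="ffill"),
    }
)
print(aligned)
```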

Time-Series Databases

Digital technologies and sensor-based data are fueling everything from advanced analytics, artificial intelligence and machine learning to augmented and virtual reality models. Sensor-based data is not easily handled by traditional relational databases. As a result, time-series databases have been on the rise and, according to ARC Advisory Group research, this market is growing much more rapidly than traditional relational databases.

While relational databases are designed to structure data in rows and columns, a time-series database or infrastructure aligns sensor data with time as the primary index.

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (data historians), and newer open-source time-series databases.

To gain maximum value from sensor data from operational machines, data must be handled relative to its chronology or time stamp. Because the time stamp may reflect either the time when the sensor made the measurement, or the time when the measurement was stored in the historian (depending upon the data source), it is important to distinguish between the two.

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

Relational Databases

Time series data technologies – whether open-source databases or established historians – are built for real-time data. Relational databases, in contrast, are built to highlight relationships, including the metadata attached to the measurement (alarm limits, control limits, customer spend, bounce rate, geographic distribution between different data points, etc.). Relational technologies can be applied to time series data, but this requires substantial amounts of data preparation and cleaning and can make data quality, governance, and context at scale difficult.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Data Lakes

Data lakes, meanwhile, score well on scalability and cost-per-GB, but poorly on data access and usability. Not surprisingly, while data lakes hold the largest volumes of data, they typically have fewer users. As with time-series technologies, the market will decide when and how these different technologies get used.

Looking Ahead

Looking ahead, the fourth industrial revolution, or Industrie 4.0, together with major market disruptions such as the pandemic and with sustainability and operational-resilience initiatives, has greatly accelerated digital transformation and driven exponential change in industrial operations and manufacturing.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Process Manufacturing

IT/OT convergence and its corresponding buzzwords are all over the place. If you are in manufacturing, chances are you have seen ads and read articles like this one on the necessity of keeping up with the ever-changing digital landscape.

In a plant, chances are you have many sources of IT and OT data that contribute to seamless operations. Whether yours is an oil & gas, food, chemical, or mineral process, on the ideal days your operation runs like a well-oiled machine. That being said, how easily and consistently can the entire plant or enterprise operate efficiently, solve problems, and reduce bottlenecks?

Accurate and fast data access, intuitive data visualization and analytics are needed to make decisions that affect production on a 24/7 basis. Resources are also at risk when changes are made, even if they are for the better. Ripping and replacing data systems and equipment is expensive and involves a lot of risk.

No matter what the source, important data needs to get to the right people at the right time, so where does one start to make small, gradual IT/OT changes?

Integrating IT & OT data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Utilize Tools you Already Have While Adding New Tools That Drive Change

Firstly, you can leverage the data assets you already have. Risk is reduced considerably when you can add products that enhance the data sources you already use. For example, if you have a data historian that works well for your needs, you can add a data visualization and analytics solution on top of it and connect other sources that were previously siloed. Not replacing the historian saves a headache and allows your production to continue without interruption. Using what you already have also means saving valuable financial resources that can be allocated to other needs, faster ROI, and reduced risk.

How Energy Transfer Empowered Their Operations

An example of a company that utilized its current tools and enhanced its operations with new ones is Energy Transfer, a midstream energy company headquartered in Houston, Texas. Energy Transfer had an enormous amount of data across its 477 sites and 90k miles of pipeline. The company had acquired assets over the years, resulting in multiple vendor data systems across the enterprise and no good way to share data between them. Engineers and management both needed data from all sites but couldn’t make operational decisions before the window of opportunity had passed. They also knew future acquisitions with different data systems might come, so flexibility for the future was critical. Energy Transfer purchased a reliable data visualization and analytics software solution and was then able to combine data from all sites in just one program.

Not only was the data easily available, but the tools enabled it to be customized to the way that everyone, from operator to engineer to management, wanted to see it. Sites could customize and access the data they specifically needed, while corporate headquarters could access and tailor the data they wanted. By joining their OT and IT worlds in a future-proof manner, Energy Transfer not only improved efficiency but also saved time and money doing it, to the tune of $10M in annualized savings.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Start Small and Involve Your Team in the Integration of IT/OT Data

Another step you can take is to get your people on the same page about digitizing as many processes and as much data as you can at your plant. This means small steps, such as sharing LIMS and process data digitally rather than manually, eliminating time-consuming methods like handwritten reports and manually entered Excel spreadsheets. Manual processes create delays and perpetuate data silos, since the data is only available to those who receive it through arduous, dated methods. A reliable data visualization and analytics software can help you do this, allowing you to share the data seamlessly. The small things can add up to the big things in IT/OT convergence, and getting the team into a digital mindset is an important step.

Forming a team to talk about IT/OT convergence is a wise first step. Formulating a plan with goals, milestones, and deadlines helps make the process more bite-sized and manageable. Defining roles, assigning tasks, and meeting about progress are effective ways of incorporating your IT/OT data into your manufacturing process and reporting.

W.R. Grace Connected their Data Sources Enterprise Wide

An example of a company that started small and soon adopted IT/OT convergence enterprise-wide is W.R. Grace, a $2 billion specialty chemical company that manufactures a wide array of chemicals and is the world’s leading supplier of FCC catalyst and additives. Headquartered in Columbia, Maryland, Grace has manufacturing locations on three continents and sells to over 60 countries. Grace’s IT and OT worlds were very disconnected. The company had robust OT and IT data but was limited in how it could use that data. Most Grace staff were required to use the operator control system functions to interact with process data because the existing historian tools were ineffective, slow, and unintuitive, resulting in no buy-in, especially from operations staff.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Grace implemented a data visualization solution across several of its chemical facilities, including its largest facility, Curtis Bay, with over 600 employees. Grace Curtis Bay unified key sources of data with a robust data visualization and analytics product. Grace employed intuitive, fast, and effective dashboards, unifying operations and information technologies and thereby increasing process management effectiveness, decreasing downtime, and increasing profitability. These dashboards were easy to use and highly intuitive, and in turn they were adopted by many staff who had been hesitant to engage with the previous system. Grace saw an ROI from the implementation of data visualization software tools in less than six months. The data visualization & analytics tools also allowed Grace to make predictive and proactive changes in its operations, resulting in increased first-pass quality.

If you lack high-performing data visualization and analytics across your plant or enterprise, achieving IT/OT convergence in manufacturing is relatively simple when you employ the right resources and have a plan that allows gradual changes to add up to significant results. Small changes, such as building on existing systems, adopting effective software tools, and fostering a mindset of proactive digital awareness in your team, can significantly impact the overall success of your operations. Everyone, from the operator to the engineer all the way to corporate management, will experience newfound ease in daily tasks and reporting. Small changes become major impacts when completed strategically and as part of an effective plan for positive digital progress.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF
0

Process Manufacturing

Now that PI ProcessBook is headed toward extinction, many engineers & operators in the process industries are faced with the challenge of replacing the tools they’ve depended on for years. But how to go about replacing critical trends and displays? In this article we’ll give you some tips for evaluating ProcessBook alternatives, and present some of the best ProcessBook alternatives available today.

Looking to replace ProcessBook? Learn how you can export your existing displays without issue.

Learn more

Why you Need to Replace ProcessBook

ProcessBook End of Life

In late 2020, OSIsoft announced the retirement of ProcessBook, PI’s venerable data visualization toolkit that debuted all the way back in 1994.

Because of its tight integration with PI’s industry-leading historian, ProcessBook was widely adopted by process engineers to build trends, dashboards, and process graphics from their PI Server time-series data.

Over the years, as competitors have popped up with newer, more powerful analytics tools, many engineers have stuck with ProcessBook, often simply because of their familiarity with the toolkit, or because of the difficulty of migrating critical graphics and displays to a new platform.

OSIsoft’s announcement will likely force their hand.

While current users will continue to be able to use ProcessBook indefinitely, by discontinuing support, OSIsoft is clearly encouraging customers to make plans to transition to alternative platforms. Security updates for ProcessBook are scheduled to end in 2022, and support for the platform will end entirely in December of 2024.

So, if you’ve been on the fence about replacing ProcessBook at your facility, now’s the time to begin looking into some alternatives.

How to Evaluate ProcessBook Alternatives

The industrial analytics marketplace has become quite crowded. There are dozens, maybe hundreds of companies out there making analytics products for everything from broad industrial applications to niche manufacturing processes. So, where do you start in your search for an alternative to ProcessBook? What factors should you consider to determine which of these solutions is the best fit for you? At a high level, you should be thinking about the following:

1. Ease of Integration

Chances are, if you’re using ProcessBook, you’re also using PI Server as a data historian. If you’re looking to replace ProcessBook, the most critical question you’ll have to answer is: how well does the alternative integrate with PI Server? The answer will mean the difference between a quick, 10-minute connection to PI Server and a costly, time-intensive investment in custom development and new OT infrastructure.

Migration is also a key consideration. Your organization has invested much time and money into building the ProcessBook displays that you depend on to run your operations efficiently. Some ProcessBook alternatives provide tools to simply migrate your existing ProcessBook displays to your new platform. Others… don’t.

2. Diagnostic Analytics Capabilities

Diagnostic, or “exploratory” analytics tools are used for root cause investigation and troubleshooting of downtime events or product quality issues.

Trends and trending capabilities are at the core of diagnostic analytics. Effective root cause analysis depends on rapid ad-hoc analysis and the ability to quickly overlay historical data from various process areas to determine correlation and causation.

Trending is likely to be one of the top two capabilities ProcessBook users are looking to address with a replacement system.

In addition to trends, diagnostic analytics are often supported by other visualization tools, such as histograms, X/Y charts, and Pareto charts. When presented with difficult process questions, the more ways you can slice and dice your data, the easier it will be to arrive at the correct answer.
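
As one concrete example of slicing the data another way, the sketch below builds a basic Pareto chart with pandas and matplotlib; the downtime causes and minutes are invented for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative downtime causes and minutes lost; the numbers are made up.
downtime = pd.Series(
    {"Sheet break": 420, "Valve fault": 260, "Grade change": 180,
     "Power dip": 60, "Other": 40}
).sort_values(ascending=False)

cumulative_pct = downtime.cumsum() / downtime.sum() * 100

fig, ax_bars = plt.subplots()
downtime.plot.bar(ax=ax_bars)                 # bars: minutes per cause
ax_line = ax_bars.twinx()
cumulative_pct.plot(ax=ax_line, marker="o")   # line: cumulative share
ax_line.set_ylim(0, 110)
ax_bars.set_ylabel("Downtime (min)")
ax_line.set_ylabel("Cumulative %")
ax_bars.set_title("Pareto chart of downtime causes")
plt.tight_layout()
plt.show()
```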

3. Operations Management Capabilities

“Operations Management” is broadly defined here as the capabilities that allow for:

  • Production tracking
  • Process monitoring
  • OEE tracking
  • Quality management
  • Process Alarms & Notifications
  • Reporting
  • Manual Data Entry

That’s a lot of functionality, and most of it comes from dashboarding and process graphics-building tools that leverage process data for real-time monitoring. Basic analytics solutions typically only allow for monitoring at the site level, but more sophisticated offerings allow enterprise-wide tracking of production KPIs across multiple sites.

ProcessBook users have probably gotten the most mileage out of the platform’s dynamic, interactive graphics, and it’s not uncommon for displays built with ProcessBook to see a decade or more of continued use. When looking to replace ProcessBook’s operations management capabilities you have a few options. You could look for point solutions designed specifically for one capability, like SSRS for reporting. You could find a highly customizable product that has coding capabilities like ProcessBook did. Or you could find a broad solution that has the building blocks necessary to solve multiple business needs.

4. Advanced Analytics Capabilities

Advanced analytics is another loaded term that we’ll define here for the purpose of this post. Often used in relation to leading-edge manufacturing concepts, like machine learning and industrial AI, advanced analytics in ProcessBook replacement tools will typically take two forms: predictive analytics and prescriptive analytics.

Predictive analytics tools promise to prevent downtime and improve OEE by building models from recorded data to anticipate and alert users to potential productivity loss. Prescriptive analytics take the next logical step and tell you which actions need to be taken to address predicted production issues.

Together, and in conjunction with process automation tools, predictive and prescriptive analytics form a sort of elementary Artificial Intelligence used to maximize plant performance.

5. Cost/Pricing

Although it’d certainly be nice if it weren’t the case, cost will likely be a consideration as you evaluate ProcessBook alternatives. Pricing for these solutions is usually determined by features and the scope of implementation, and most providers don’t publicly list their pricing, so even ballpark figures are difficult to provide. However, there’s one key factor you should be aware of when evaluating pricing:

The pricing model

Pricing models vary between process manufacturing analytics providers and include everything from flat-rate to usage-based, tiered, and per-user pricing. These days, many manufacturing analytics solutions use “per-user” pricing, with the licensing cost going up according to the number of individuals using the tools at a facility. The upside of per-user pricing is that for small facilities, or for organizations with few people monitoring and analyzing process data, it can make for a relatively cost-effective solution. The flipside, obviously, is that for data-driven companies who believe in giving every operator, engineer, and SME the ability to contribute to improving plant performance, per-user pricing can get very expensive very fast.
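
To see how quickly that scales, take an illustrative rate of $1,000 per user per year: licensing 5 analysts costs $5,000/year, but putting the same tools in front of 150 operators, engineers, and SMEs costs $150,000/year, before any volume discounts a vendor might offer.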

Seamlessly integrate with PI Server for real-time process monitoring and rapid root cause analysis.

Learn more

The Best ProcessBook Alternatives

dataPARC’s PARCview

A real-time data analysis and visualization toolkit developed by the end user, for the end user, dataPARC’s PARCview application has long co-existed with OSIsoft’s PI Server in process facilities around the world. In fact, over 50% of all dataPARC installations are built on top of PI historians.

If you value ProcessBook primarily for its trending and interactive process graphics, dataPARC is a superb alternative, featuring diagnostic analytics and operations management capabilities that are significant upgrades from what you’ve been accustomed to with ProcessBook.

Ease of Integration

dataPARC’s integration with PI Server is extremely simple: native integration allows users to connect and begin visualizing PI historian data in a matter of minutes. And because it utilizes the latest PI SDK technology and other performance-focused features, most ProcessBook users will find improved performance.

Likewise, dataPARC’s ProcessBook conversion utility allows users to bulk import their existing ProcessBook displays without losing any functionality.

Diagnostic Analytics Capabilities

Widely considered the best time-series trending application available for analyzing process data, dataPARC’s greatest strength is in its ability to connect and analyze data from various sources within a facility.

In addition to its powerful real-time trending toolkit, dataPARC is loaded with features that support root-cause analysis and freeform process data exploration, including:

  • Histograms
  • X-Y plots
  • Pareto charts
  • 5-why analysis tool
  • Excel plugin

dataPARC is also likely the fastest ProcessBook alternative you’ll find on this list. dataPARC’s Performance Data Engine (PDE) provides access to both aggregated and lossless datasets, allowing the best of both worlds: super-fast access to long-term datasets and extremely high-resolution short-term data.

Operations Management Capabilities

dataPARC offers a complete set of tools for operations management. dataPARC’s display design tool offers the ability to create custom KPI dashboards and real-time process displays using pre-built pumps, tanks, and other industry-standard objects. dataPARC even allows you to import existing graphics or entire ProcessBook displays.

All the standard reporting features are included here, along with smart notifications that can be configured to trigger email or text alerts for downtime events or other process excursions. dataPARC’s Centerline tool is one of the platform’s most powerful features, providing operators with an intuitive multivariate control chart with early fault detection and process deviation warnings, so operators can eliminate quality or equipment issues before they occur.

Additional operations management capabilities offered by dataPARC include a robust module for manual or electronic data entry, notifications, an advanced calculation engine, and a task-scheduling engine.

Advanced Analytics Capabilities

dataPARC doesn’t make claims to artificial intelligence or machine learning, but the platform provides a solid interface for advanced analytics, offering a data modeling module that uses PLS & PCA to power predictive analytics for maintenance, operations & quality applications.

Cost/Pricing

dataPARC provides an unlimited user license for PARCview, which makes it a good fit for organizations wishing to get production data in front of decision-makers at every level of the plant. Compare dataPARC vs PI.

Evaluate the top alternatives to Processbook & PI Vision in our PI Server Data Visualization Tools Buyer’s Guide.

Get the Guide

PI Vision

OSIsoft’s ProcessBook successor, PI Vision is branded as being the “fastest, easiest way to visualize PI Server data.” PI Vision is a web-based application that runs in the browser, which can be a significant change for ProcessBook users used to a locally-installed desktop app.

Ease of Integration

PI Vision likely offers the most straightforward integration with PI Server, as it’s part of OSIsoft’s PI System.

Migration of existing ProcessBook screens to PI Vision is supported by OSIsoft’s PI ProcessBook to PI Vision migration utility; however, many users have reported difficulty retaining the full functionality of custom displays and graphics after moving them into PI Vision.

Diagnostic Analytics Capabilities

Many users report that PI Vision’s trending tools provide less firepower than ProcessBook and competitors for root cause analysis and ad-hoc diagnostics, but it is perfectly capable of performing the basic trending functions of plotting time-series and other data against time on a graph.

OSIsoft also offers their PI DataLink Excel plugin, which is often used for more advanced diagnostic analytics efforts.

Operations Management Capabilities

Process Displays are the heart and soul of the PI System and if you’re moving from ProcessBook you’ll likely feel at home working in PI Vision.

Although it’s not a 100% feature-for-feature replacement of ProcessBook (the SQC module, for instance, isn’t available in Vision), some of the ProcessBook features lacking in early versions of PI Vision are being added via periodic updates. In addition, there are some new capabilities in PI Vision that don’t exist in ProcessBook.

The PI Vision displays use HTML5 and are integrated into PI’s Asset Framework (AF), which results in pretty intuitive display building. Basic reporting is available as well, but like much of the PI system, the data must be extracted to Excel via PI DataLink.

Advanced Analytics Capabilities

PI Vision doesn’t include any built-in data modeling tools or other advanced analytics components. PI Server data can be brought into 3rd party analytics apps via PI Integrator for advanced analysis.

Cost/Pricing

PI Vision uses a per-user pricing model, which is great for small organizations with only a few people accessing the platform. For larger manufacturers, or enterprise implementations with teams of operators, process engineers, and data scientists accessing the product, PI Vision can become quite expensive.

Proficy CSense

GE acquired CSense back in 2011 to provide better data visualization and analytics tools for use with their own Proficy Historian. CSense is billed as industrial analytics software that improves asset and process performance. Trending and diagnostic analytics are the focus here, with less emphasis on robust process displays.

Ease of Integration

CSense is optimized for integration with Proficy, GE’s own time-series data historian. An OSIsoft PI OLEDB provider is required to integrate with OSIsoft’s PI Server, and this interface may affect performance for users accustomed to native PI integration.

Diagnostic Analytics Capabilities

CSense’s trending and diagnostic capabilities likely exceed what you’ve experienced with ProcessBook.

Dedicated modules for troubleshooting (CSense Troubleshooter) and process optimization (CSense Architect) provide modern trends, charts, and other visualization tools to analyze continuous, discrete, or batch process performance.

Operations Management Capabilities

GE takes a modular approach to their data visualization products. Much of the operations management functionality provided in a single product like ProcessBook is spread over many separate products within the Proficy suite.

There’s a fair amount of overlap in these products, but somewhere among GE’s iFIX, CIMPLICITY, Proficy Operations Hub, Proficy Plant Applications, and Proficy Workflow products you’ll find operations management capabilities that greatly exceed those of ProcessBook.

Advanced Analytics Capabilities

CSense marketing materials are filled with mentions of industrial advanced analytics, digital twins, machine learning, and predictive analytics.

Like PARCview’s approach, this advanced functionality is enabled by the insights mined from the powerful data visualization tools of the core platform, though models can be developed in CSense to help predict product quality and asset failure.

Cost/Pricing

CSense licensing is offered in three editions: Runtime, Developer, and Troubleshooter, all of which come with different component combinations and data connectors.

Canary Axiom

Like CSense and PI Vision, Canary’s Axiom was designed as the visualization component of a larger system. Axiom was built to support analysis of data stored in Canary Historian.

Ease of Integration

Canary’s Data Collectors can connect to process data sources via OPC DA and OPC UA, but they don’t have a dedicated module to connect to PI Server like some of the other options on this list.

Diagnostic Analytics Capabilities

Axiom is a browser-based trending application that, while not as powerful as some other ProcessBook replacements, is easy to use and capably performs basic diagnostic analysis. It lacks the more powerful features of products like dataPARC and CSense, but Canary does provide an Excel Add-in for performing more advanced analysis.

If you were perfectly fine with the trending and diagnostic capabilities of ProcessBook, you’ll likely be satisfied with what Axiom provides.

Operations Management Capabilities

Axiom offers dashboarding and reporting capabilities alongside their trending tools, but ProcessBook users may be disappointed by the lack of emphasis on display building provided here. Simply put, if you’re looking to replace your dynamic, interactive ProcessBook displays, you’ll want to look elsewhere.

Canary also lacks support for migrating existing ProcessBook displays, which is a feature that both dataPARC and PI Vision have.

Advanced Analytics Capabilities

Canary avoids making flimsy claims to current buzzwords like Industrial AI or Machine Learning. Axiom’s focus is on nuts-and-bolts trending & analysis of time-series data, though their Excel Add-in does certainly open the door to more advanced analytics applications.

Cost/Pricing

Canary helpfully posts their pricing on their website, with Axiom fetching a per-client fee in the form of both a one-time and monthly charge. This pricing assumes you’ll be using the Canary Historian, so the info on the website won’t be much help if you’re looking to connect Axiom to PI Server data only.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

TrendMiner

Offering “self-service industrial analytics”, TrendMiner provides a complete suite of web-based time-series data analysis tools.

Ease of Integration

Like dataPARC, TrendMiner offers native integration with PI Server. You can expect connecting to your PI historian to be very easy, although TrendMiner doesn’t appear to support the transfer of existing ProcessBook displays.

Diagnostic Analytics Capabilities

With a name like TrendMiner, you’d assume that trending and diagnostic analysis is key for the Software AG brand. And you’d be correct – tag browsing, trend overlays, and data filtering are all favorite features of TrendMiner customers.

Root Cause Analysis is a core use case for TrendMiner, but some users have mentioned difficulty learning to use the system to identify process issues. Reports of slow performance also make this a riskier choice when considering ProcessBook alternatives.

Operations Management Capabilities

Monitoring is at the core of TrendMiner’s operations management capabilities. The platform provides a number of tools that support smart alarming & notifications of process excursions, but visualization of processes seems limited to trends. ProcessBook users who depend on interactive process displays to manage and monitor plant performance will need to look to other options on this list to replace that functionality.

Advanced Analytics Capabilities

TrendMiner’s monitoring features extend the platform into “predictive” analytics territory. Like dataPARC and some of the other ProcessBook alternatives on this list, predictive performance is enabled via models based on historical data.

Cost/Pricing

TrendMiner pricing depends on your particular use case, and they don’t list pricing on their website; however, you can expect them to be on the higher end of the ProcessBook alternatives listed here.

Seeq

Founded in 2012, Seeq offers Advanced Analytics for Process Manufacturing Data. Analysis is the focus with Seeq, and their products are heavy on diagnostic and predictive analytics, without as much support for operations management as some of the other ProcessBook alternatives on this list.

Ease of Integration

Like dataPARC, Seeq is optimized to connect to various data sources within a plant, including OSIsoft’s PI Server. You can expect a pretty seamless integration with your PI historian data.

Seeq does not support importing existing ProcessBook displays.

Diagnostic Analytics Capabilities

Seeq’s browser-based Workbench application powers the platform’s diagnostic analytics, offering a powerful set of trending and visualization tools.

Advanced trending, bar charts, tables, scatterplots, and treemaps can all be employed to perform rapid root-cause analysis, and represent a significant upgrade in diagnostic capabilities from ProcessBook.

Operations Management Capabilities

Seeq offers the ability to configure alarms for process monitoring, and their Seeq Organizer application allows users to build scorecards and dashboards for KPI monitoring, but they don’t provide anything approaching the display-building capabilities that ProcessBook provides.

Engineers looking for a ProcessBook replacement, and an option for migrating or even replicating their existing displays, will likely want to look at other alternatives.

Advanced Analytics Capabilities

Advanced Analytics is an area where Seeq shines. Claiming predictive analytics, machine learning, pattern recognition, and scalable calculation capabilities, it’s clear that Seeq intends to be your solution for sophisticated process data analysis. Most of Seeq’s advanced analytics capabilities come from their Seeq Data Lab application, which provides access to Python libraries for custom data processing.

Cost/Pricing

Seeq’s pricing isn’t listed on their website, but they reportedly use a per-user pricing model which starts at $1,000/year per user.

Looking to replace ProcessBook? See why PARCview is regarded as the #1 ProcessBook alternative.

Ignition

Inductive Automation’s Ignition application is a popular SCADA platform with a broad set of tools for building industrial analytics solutions.

Ease of Integration

Ignition offers the ability to connect to virtually any data source in your plant. While it lacks the easy native integration of some of the other ProcessBook alternatives on this list, Ignition supports various methods of connecting to your PI historian, including JDBC and OPC.

Diagnostic Analytics Capabilities

While Ignition shines as an HMI designer or MES, it ranks quite a bit lower than the others on this list in its diagnostic analytics capabilities. Then again, it wasn’t designed for root cause investigations and ad-hoc data analysis. Trending in Ignition is basic and integrated into displays, similar to what users may be familiar with in ProcessBook.

Operations Management Capabilities

Ignition’s Designer, on the other hand, is an extremely capable application for building process graphics and HMIs for real-time process monitoring and KPI tracking.

While perhaps lacking the interactivity that ProcessBook users are familiar with, Ignition modules are available to help manage SPC, OEE, material tracking, batch processing, and more.

Advanced Analytics Capabilities

Again, predictive analytics, prescriptive analytics, machine learning, industrial AI: these aren’t Ignition’s focus and don’t factor into its feature sets.

Cost/Pricing

Inductive Automation offers unlimited users for a single-server license of Ignition, with pricing based per feature, starting at around $12,500 and going up from there. Ignition’s price very much reflects its standing as one of the industry’s most popular SCADA platforms.

Looking for a ProcessBook Alternative?

Well, you have a lot to consider. Presented with the prospect of discontinued support for ProcessBook, your challenge is twofold: first, figure out how to replace the displays, trends, and other features that made ProcessBook valuable to you, and second, evaluate potential replacement candidates for capabilities like predictive and prescriptive analytics that didn’t exist when you got started with ProcessBook.

All of the ProcessBook replacement options listed above feature different toolsets, and it’s up to you to identify your needs and determine which solution is right for your organization. Hopefully this post set you off in the right direction in your search.


Download the Guide

Discover top alternatives to PI’s ProcessBook and PI Vision analytics toolkits.

Download PDF

Process Manufacturing, Troubleshooting & Analysis

The case for implementing 5 whys root cause analysis problem solving at your plant is simple: the cause of a problem is often (we’d go so far as to say almost always) different than initially speculated. Implementing a lean strategy like the 5 whys can save you time and headaches in the future.

Issues with and failures of assets are bound to happen in process manufacturing. Your team’s strategy for resolving problems that occur will determine your productivity in countless ways. Getting a process set up for resolving problems will not only help with the current issues and failures but will create a plan to resolve future problems with the goal of each resolution coming faster and easier.

Real-time process analytics software with integrated 5 Whys analysis tools.

Check out PARCview

What Exactly are the 5 Whys?

The 5 whys is a lean problem-solving strategy that is popular in many industries. Developed by Sakichi Toyoda, a Japanese inventor and industrialist, the 5 whys technique focuses on root cause analysis (RCA), defined as a systematic process for identifying the origins of problems and determining an approach for responding to and solving them. The 5 whys emphasizes prevention, being proactive rather than reactive.

The 5 whys strives to be analytical and strategic about getting to the bottom of asset failures and issues; it is a holistic approach, used by stepping back and looking at both the process and the big picture. The essence of the 5 whys is captured in the proverb below, where something as small as a lost horseshoe nail is the root cause of a kingdom being lost:

“For want of the nail the shoe was lost
For want of the shoe the horse was lost
For want of a horse the warrior was lost
For want of a warrior the battle was lost
For want of a battle the kingdom was lost
All for the want of a nail.”

One of the key factors for successful implementation of the 5 whys technique is to make an informed decision. This means that the decision-making process should be based on an insightful understanding of what is happening on the plant floor. Hunches and guesses are not adequate as they are the equivalent of a band-aid solution.

The 5 whys can help you identify the root cause of process issues at your plant.

Below is an example of a 5 whys method being used for a problem seemingly as basic as a computer failure. If you look closely, you will conclude that the actual problem has nothing to do with a computer failure.

  • Why didn’t your computer perform the task? – Because the memory was not sufficient.
  • Why wasn’t your memory sufficient? – Because I did not ask for enough memory.
  • Why did you underestimate the amount of memory? – Because I did not know my programs would take so much space.
  • Why didn’t you know programs would take so much space? – Because I did not do my research on programs and memory required for my annual projects.
  • Why did you not do research on memory required? – Because I am short staffed and had to let some tasks slip to get other priorities accomplished.

As seen in the example above, the real problem was not in fact computer memory, but a shortage of human assets. Without performing this exercise, the person may never have realized they were short-staffed and needed help.

This example can be used to illustrate problems in a plant as well. Maybe an asset is having repeated failures or lab data is not testing accurately. Rather than immediately concluding that the problem is entirely mechanical, use the 5 whys method and you may discover that your problem is not what you think.

Advantages and Disadvantages of the 5 Whys Method

Advantages include being able to identify the root cause of your problem and not just the symptoms. The method is simple and easy to use and implement, and perhaps its most attractive advantage is that it helps you avoid taking immediate action without first identifying the real root cause of the problem. Taking immediate action on a path that is not accurate is a waste of precious time and resources.

Disadvantages are that some people may disagree with the different answers that come up for the cause of the problem. The method is also only as good as the collective knowledge of the people using it, and if time and diligence are not applied, you may not uncover and address the true root cause of the problem.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

5 Whys Root Cause Analysis Implementation at your Plant

Now that the problem in its essence has been revealed, what are the next steps?

Get Familiar with the 5 Whys Concept

The first step in implementing the 5 whys at your plant is to get familiar with the concept. You may research the 5 whys methodology online or listen to tutorials to gain a deeper understanding. In the information age, we have access to plenty of free information at the touch of our screens. Get acquainted with 5 whys. Even if it is just one video on YouTube and a few articles online, a better understanding means a better implementation.

Schedule a 5 Whys Meeting with Your Team

The second step would be to solve a problem at your plant using the 5 whys method. To do so, follow the steps below to schedule and hold a 5 whys RCA meeting.

These steps are kept simple to give you a basic understanding; read on for more in-depth instructions on implementing 5 whys RCA. (A minimal sketch of one way to record a session’s output follows the list.)

  • Organize your meeting
  • Define your problem statement
  • Ask the first “why”
  • Ask why four more times
  • Determine countermeasures
  • Assign responsibilities
  • Monitor progress
  • Schedule a follow-up meeting
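
As referenced above, here is a minimal, illustrative sketch of recording a session’s output in code. The problem statement, answers, and owner are invented, and nothing here is tied to any particular tool:

```python
from dataclasses import dataclass, field

# A minimal, illustrative way to record the output of a 5 whys meeting.
@dataclass
class FiveWhysSession:
    problem: str
    whys: list = field(default_factory=list)             # answer to each "why"
    countermeasures: dict = field(default_factory=dict)  # action -> owner

session = FiveWhysSession(problem="Sheet breaks increased on paper machine 2")
session.whys = [
    "Moisture profile drifted out of range",
    "Steam box output was inconsistent",
    "A control valve was sticking",
    "The valve missed its last preventive maintenance",
    "The PM schedule was not updated when the valve was replaced",
]
session.countermeasures["Update PM schedule for the new valve"] = "Maintenance planner"

# The final "why" is the candidate root cause to verify and address.
print("Root cause candidate:", session.whys[-1])
```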

Whoever you invite, the root cause analysis process should include people with practical experience. Logically, they can give you the most valuable information regarding any problem that appears in their area of expertise.

What’s Next? What Happens Once I have Held my 5 Whys meeting?

Once your meeting has been held and you begin to implement the 5 whys method, it is essential to remember that some failures can cascade into other failures, creating a greater need for root cause analysis to fully understand the sequence of cause and failure events.

Root Cause Analysis using the 5 whys method typically has 3 goals:

  • Uncover the root cause
  • Fully understand how to address and learn from the problem
  • Apply the solution to this and future issues, creating a solid methodology to ensure the same success in the future

The Six Phases of 5 Whys Root Cause Analysis

Digging even deeper, when 5 whys root cause analysis is performed, there are six phases in one cycle. The components of asset failure may include environment, people, equipment, materials, and procedure. Before you carry out 5 whys RCA, you should decide which problems are immediate candidates for this analysis. Just a few examples of where root cause analysis is used include major accidents, everyday incidents, human errors, and manufacturing mistakes. Those that result in the highest costs to resolve, most downtime, or threats to safety will rise to the top of the list.

There are some software-based 5 whys analysis tools out there, like dataPARC’s PARCview, which automatically identifies the top 5 potential culprits of a process issue and links to trend data for deeper root cause analysis.

Phase 1: Make an Exhaustive List of Every Possible Cause

The first thing to do in 5 whys is to list every potential cause leading up to a problem or event. At the same time, brainstorm everything that could possibly be related to the problem. In doing these steps you can create a history of what might have gone wrong and when.

You must remain neutral and focus only on the facts of the situation. Emotions and defensiveness must be minimized to produce an effective starting list. Stay neutral and open. Talk with people and look at records, logs, and other fact-keeping resources. Try to replay and reconstruct what you think happened when the problem occurred.

Phase 2: Evidence, Fact and Data Seeking and Gathering

Phase 2 is when you get your hands on any data or files that can point to the possible causes of your problem. Sources for this data may be databases or digital, handwritten, or printed files. In this phase the 5 whys list comes into play: each outcome or reason that came up on your list needs evidence to back it up.

Phase 3: Identify What Contributed to the Problem

In Phase 3 all contributions to the problem are identified. List changes and events in the asset’s history. Evidence around the changes can be very helpful so gather this as you are able. Evidence can be broken down into four categories: paper, people, recording and physical evidence. Examples include paperwork specific to an activity, broken parts of the assets and video footage if you have it.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Phase 4: Collect Data and Analyze It

In Phase 4, you should analyze the collected data. Organize the changes or events by how much or how little you can impact the outcome. Then decide if each event is unrelated, a correlating factor, a contributing factor, or a root cause. An unrelated event is one that has no impact on the problem whatsoever. A correlating factor is one that is statistically related to the problem but may or may not have a direct impact on it.

A contributing factor is an event or condition that directly led to the problem, in full or in part. This should help you arrive at one or more root causes. When the root cause has been identified, more questions can be asked: why are you certain that this is the root cause instead of something else?
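
Where the events and the problem metric are quantifiable, a quick correlation screen can help with the unrelated-versus-correlating sort. The sketch below assumes placeholder file and column names and an arbitrary 0.5 cutoff; it cannot by itself establish contribution or causation.

```python
import pandas as pd

# Illustrative: a daily problem metric alongside candidate factors gathered
# in Phase 2. File name, column names, and the 0.5 cutoff are placeholders.
df = pd.read_csv("phase4_data.csv")

factors = ["factor_a", "factor_b", "factor_c"]
correlations = df[factors].corrwith(df["problem_metric"])

for factor, r in correlations.items():
    label = "correlating factor" if abs(r) >= 0.5 else "likely unrelated"
    print(f"{factor}: r = {r:+.2f} -> {label}")

# Correlation alone cannot separate contributing factors from root causes;
# that judgment still rests on the evidence from Phases 2 and 3.
```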

Phase 5: Preventing Future Breakdowns with Effective Countermeasures

The fifth phase of 5 whys root cause analysis is preventing future breakdowns by creating a custom plan of countermeasures, which essentially address each of the whys you identified in your team meeting. Preventive actions should also be identified. Your actions should not only prevent the problem from happening again, but also avoid causing other problems. Ideally, a solid solution is one that is repeatable and can be applied to other problems.

One of the most important things to determine is how the root cause of the problem can be eliminated. Root causes will of course vary just as much as people and assets do. Examples of eliminating the root cause include changes to preventive maintenance, improved operator training, new signage or HMI controls, or a change of parts or part suppliers.

In addition, be sure to identify any cost associated with the plan: how much was lost because of the problem, and how much the plan will cost to implement.

To avoid and predict the potential for future problems, you should ask the team a few questions.

  • What are the steps we must take to prevent the problem from reoccurring?
  • Who will implement and how will the solution be implemented?
  • Are any risks involved?

Phase 6: Implementation of Your Plan

If you make it to this step, you have successfully completed 5 whys root cause analysis and have a solid plan.

Depending on the type, severity, and complexity of the problem and the plan to prevent it from happening again, there are several factors the team needs to think about before implementation occurs. These can include the people in charge of the assets, asset condition and status, processes related to the maintenance of the assets, and any people or processes outside of asset maintenance that have an impact on the identified problem. You would be surprised how much is involved with just one asset when you exhaustively think about it and make a list of all people and actions involved during its useful life.

Implementing your plan should be well organized, orchestrated, and documented. Follow-up meetings with your team should be scheduled to discuss what went well and what could be improved. With time, the 5 whys can become an effective tool for both solving and preventing problems at your plant.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF