
In this article we will explore common types of process engineering software and highlight some powerful software applications that every process engineer should know.

Process Engineers! Solve challenging process & product quality issues with these process data visualization tools.

Learn more

Types of Process Engineering Software

Process Engineers use software every day for a range of reasons.

There is software that allows users to visualize data, perform calculations, and conduct data analysis to help with process optimization and root cause analysis.

Software can help mitigate risk with simulations and modeling. For instance, engineers can test how a process may react to changes without introducing hazards or wasting material.

Now and for the foreseeable future, process engineering software also helps us communicate problems, ideas, and solutions.

Graphical Dashboards

Engineers often create graphical dashboards to help display real-time data in a more consumable way.

Graphical dashboard tools are often combined with trend visualization software such as PARCView.

When creating a dashboard, it is important to know the audience and the information to be conveyed. Will the dashboard be a process diagram, displaying a single view of how the process is running? Or a quality page, showing recent lab data and specifications?

Flashing values or pop-ups can be used in displays to alert operations when a parameter goes out of specification. When adding alerts or colors, be aware of colorblindness and how it can affect interpretation of the display.

Reporting Software

Reports can range from detailed documents to concise information with embedded charts and images. The type of report will dictate which software to use.

There are multiple reporting tools available. Some, like dataPARC’s production monitoring software, use simple coding to pull in data and automate the report, while others may require more manual effort.

Microsoft Excel is a foundation of data reporting. It is a powerful tool, and there are many internet resources that can help users at any level build reports in Excel with VBA.

SSRS (SQL Server Reporting Services) is another standard reporting tool for those who have data in SQL. SSRS can be used to create reports and send them on a schedule.
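For teams that want a lightweight alternative to SSRS, a scheduled report can also be assembled with a short script. Below is a minimal Python sketch that queries SQL Server and writes an Excel file; the server, database, table, and column names are hypothetical, and a task scheduler would run the script on whatever cadence the report requires.

```python
# Minimal sketch: pull yesterday's production data from SQL Server and
# write it to an Excel report. Server, table, and column names are made up.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=plant-sql;DATABASE=ProcessData;Trusted_Connection=yes"
)

query = """
    SELECT ProductionDate, Line, TonsProduced, TonsWasted
    FROM daily_production
    WHERE ProductionDate = CAST(GETDATE() - 1 AS DATE)
"""

df = pd.read_sql(query, conn)  # load the result set into a DataFrame
df["YieldPct"] = 100 * df["TonsProduced"] / (df["TonsProduced"] + df["TonsWasted"])

# Write the daily report; a scheduler (e.g. Windows Task Scheduler) can run this script.
df.to_excel("daily_production_report.xlsx", index=False)
```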

Microsoft Word is often used to create reports. It is best suited to reports where automation and direct access to data are not needed.

Centerlining Software

Centerlining, or the use of operational envelopes, ensures process parameters are set consistently from run to run. This helps produce good-quality product more quickly.

Engineers build and review operational envelopes as guides. These ranges need to be set up for every product produced, which can be time-consuming and error-prone when done manually.

With PARCview’s Centerline tool, an engineer does not have to analyze and create the operational ranges by hand. By adding variables to the Centerline tool, it can identify whether any are running outside of normal operational ranges.

dataPARC’s Centerline
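The underlying idea can also be approximated with general-purpose tools. The sketch below, using made-up tag names and values, derives an operating envelope (mean ± 3 standard deviations) for each variable from a known good run and flags current readings that fall outside it.

```python
import pandas as pd

# Historical data from a known-good run; columns are hypothetical process tags.
good_run = pd.DataFrame({
    "reel_speed":   [1501, 1498, 1503, 1500, 1499],
    "steam_flow":   [42.1, 41.8, 42.5, 42.0, 41.9],
    "headbox_temp": [55.2, 55.0, 55.4, 55.1, 55.3],
})

# Operating envelope: mean +/- 3 standard deviations per variable.
center = good_run.mean()
sigma = good_run.std()
low, high = center - 3 * sigma, center + 3 * sigma

# Current snapshot of the same tags.
current = pd.Series({"reel_speed": 1512, "steam_flow": 42.2, "headbox_temp": 54.1})

out_of_range = current[(current < low) | (current > high)]
print(out_of_range)  # variables running outside their normal envelope
```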

LIMS Software

Process engineers often manage and use LIMS (Laboratory Information Management Systems). LIMS provide a way to record, organize, and apply limits to lab data and processes.

LIMS can be independent systems such as LabWare or SAP. Others can be integrated with data visualization and process software, allowing users to view lab and operations data in the same program.

LIMS are important to process engineers because they track testing and quality information which can be key when working with customers or providing a look into plant operations.

ERP Software

ERP software is used for product tracking and scheduling. It can be used to allocate items to projects for billing purposes or for tracking project hours.

ERP software helps keep the process organized and allows engineers and operations to know what is coming up on the production schedule.

SAP and Oracle are two common ERP applications. They help keep everyone on the same page with a single source of truth.

Looking for new process engineering software? Check out dataPARC’s process engineering toolkit. Tackle root cause analysis, process monitoring, predictive modeling, & more.

Root Cause Analysis Software

Software to assist in root cause analysis can help reduce downtime. When a process goes down, the cause may not be known right away.

SOLOGIC is a program dedicated to root cause analysis; its tools include cause-and-effect diagrams, fishbone diagrams, and incident timelines.

By looking at process trends and centerlines from data visualization software, engineers can identify if any variables were in upset conditions prior to the down.

By recording downtime root causes, an engineer can review that information in a Pareto chart. This identifies the most common or most time-consuming reasons for lost time. With this information, the engineer can work to minimize the occurrence or duration of a specific cause.

Pareto charts can help identify common lost-time reasons

Data visualization software with built in downtime tracking capabilities can help expedite root cause analysis.

Check out our real-time process analytics tools & see how you can reduce downtime & product loss.

Check out PARCview

Process Optimization Software

A key role for many process engineers is process optimization. This can include in-depth Six Sigma projects requiring large amounts of data, testing, and implementation of solutions.

Throughout such projects, statistical software such as Minitab or JMP is used.


Process optimization software can help identify variables that are statistically significant to the process, pointing to the parameters that will have the greatest impact.

Software is used to help reach conclusions; however, subject matter experts can identify which statistically significant results are also practically significant and worth pursuing.

Quality Management Software

Process engineers work with quality to ensure products are made to the production specification and that customers are receiving a consistent product, order after order.

Histograms of variables, along with 3-sigma values and Cpk, can be used to help create or review specification and control limits.

Many data visualization applications can produce histograms and simple charts to help with quality data analysis. Excel can also be used to create such charts.
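As a simple illustration of the math behind these charts, the sketch below computes the mean, 3-sigma limits, and Cpk for a set of quality measurements against assumed specification limits, then draws a histogram. The data and spec limits are invented for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Example quality measurements and assumed spec limits (illustrative values).
rng = np.random.default_rng(0)
data = rng.normal(loc=50.0, scale=1.2, size=500)
LSL, USL = 46.0, 54.0

mean = data.mean()
sigma = data.std(ddof=1)

# Cpk: distance from the mean to the nearest spec limit, in units of 3 sigma.
cpk = min(USL - mean, mean - LSL) / (3 * sigma)
print(f"mean={mean:.2f}  3-sigma limits=({mean - 3*sigma:.2f}, {mean + 3*sigma:.2f})  Cpk={cpk:.2f}")

plt.hist(data, bins=30)
plt.axvline(LSL, color="red")  # lower spec limit
plt.axvline(USL, color="red")  # upper spec limit
plt.title("Quality variable vs. specification limits")
plt.show()
```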

Process Calculation Software

Process engineers are often responsible for calculating Overall Equipment Effectiveness (OEE), producing process metrics, and performing other process calculations.

Software will reduce errors in these calculations and produce results quickly.
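OEE itself is the product of three ratios, so once the inputs are trusted the calculation is easy to script. The figures below are purely illustrative.

```python
# OEE = Availability x Performance x Quality (all expressed as fractions).
planned_time_min = 480        # scheduled production time
downtime_min = 45             # unplanned downtime
ideal_rate_per_min = 10.0     # ideal units per minute
total_units = 3900
good_units = 3750

availability = (planned_time_min - downtime_min) / planned_time_min
performance = total_units / (ideal_rate_per_min * (planned_time_min - downtime_min))
quality = good_units / total_units

oee = availability * performance * quality
print(f"Availability={availability:.1%}  Performance={performance:.1%}  "
      f"Quality={quality:.1%}  OEE={oee:.1%}")
```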

In the case of process calculations, there are times when making a change to the process requires calculating different volumes and ratios of material to prevent a reaction. Using software rather than manual calculations can prevent errors that may result in downtime or safety incidents.

Whether the value is needed for a one-time purpose or must be calculated regularly helps narrow down which software should be used to create the calculation.

Excel can be used; however, data would need to be pulled into the program. Integrated data visualization software can perform calculations and historize those values, so they are always ready.

Simulation & Modeling Software

Simulation, modeling, and sizing software is used when designing a new process or making changes to an existing process.

AutoPIPE is used for pipe stress analysis, Pipe-Flo can perform fluid flow calculations, and AFT Arrow is a gas flow simulation tool that can be used for insulation sizing. AutoCAD is used for a range of 3-D modeling, including drafting and design.

Such modeling and simulation programs allow engineers to test and see how the process may perform before it is built, mitigating risk and saving material.

Other Useful Software

When asked what process engineering software they use daily, each engineer we spoke to named different programs for the categories outlined in the previous section, but some came up repeatedly.

Whether you are starting your career as a Process Engineer or have many years under your belt, utilizing some of this software can promote career development and help streamline daily tasks.

Microsoft Office

Microsoft has several programs that are used on a regular basis; even if your company doesn’t have these specific ones, it likely has something similar.

https://www.microsoft.com/en-us/


Excel

As mentioned in many of the sections above, Excel can be used for a variety of purposes and is a pivotal tool to have in the toolbox. Excel can be used to generate reports, create charts and graphs, complete calculations, and analyze data.

Word

Word is a simple tool for creating reports, Standard Operating Procedures (SOPs), and other types of documentation.

OneNote

OneNote is a newer product that can be used as a digital notebook. It helps organize notes, and one could even build a personal quick-reference guide. Similar to other Microsoft products, OneNote notebooks or sheets can be shared with multiple people through OneDrive.

PowerPoint

There are many presentation tools available, but PowerPoint is still widely used to lead presentations and meetings. It is a simple way to present information.

Teams

Teams is not the only online meeting and communication tool, but it is a very common one. Teams is used for internal company chat, virtual meetings, and document sharing. Within Teams you can create groups (teams) connected to SharePoint to share documents, chat, and post. Emails can even be sent directly to a group so other people can comment on them as a thread.

Outlook

Outlook is the primary hub for email and meeting scheduling. Internally, you can view others’ calendars to find free time and schedule a meeting without emailing back and forth.

Project

Project is used to build Gantt charts for project management, often for scheduled downtime and other large projects. These charts can be very detailed and help keep work organized, though there is a bit of a learning curve.

Snagit / Snipping Tool

Screenshots and images are helpful to include in emails, reports, and SOPs to enhance communication.

The original way to take a screenshot was the Print Screen button on the keyboard. With multiple monitors, this is now impractical because the desired image has to be cropped down afterward.

The Snipping Tool is standard on Windows computers, but it has limited editing options and does not auto-save images. It does allow you to capture a specific area of the screen rather than the entire screen.

A step up from the Snipping Tool is Snagit. It is paid software, but it allows multiple screenshots and saves them to a library to come back to later. Its editing capabilities are far superior, allowing annotation, arrows, blurring, and more.

https://www.techsmith.com/screen-capture.html

These are just a few examples of the countless screen capture tools out there. In short, screen capture is an essential tool for all business types.

Notepad++

Viewing and editing code without having to run it in production is valuable. Notepad++ is a versatile program that lets users view code with syntax highlighting for its programming language.

It has add-ins, and users can compare sets of code against one another; it will highlight the differences, which can help find bugs, typos, and other issues.

Notepad++ is free. It keeps tabs saved, so you can close and re-open the program without having to save the files to a specific location.

https://notepad-plus-plus.org/


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


Reducing downtime increases productivity, lowers costs, and decreases accidents. Downtime tracking software can help, and knowing why the process is going down is key to reducing it.

Monitor, report, & analyze production loss from unplanned downtime, poor quality, and performance issues.

Learn more

What is Downtime?

Downtime is any period in which a process is not running. However, not all downtime is created equal. There are two types of downtime: planned and unplanned.

Downtime events represented visually in dataPARC’s process trending software.

What is Planned Downtime?

Planned downtime is when production schedules a time to take the process down. Planned downtime is a necessity to maintain machinery by conducting inspections, cleaning, and replacing parts.

Planned downtime allows operations to organize, schedule and prepare for the downtime. They can coordinate with contractors, order parts and plan tasks to complete while the process is down. Planned downs can be organized so personnel have tasks to accomplish and the necessary tools on hand.

What is Unplanned Downtime?

Unplanned downtime is when the machine or process is down for any unscheduled event. This can be due to a broken part, lack of material, a power outage, etc. Unplanned downtime is unpredictable and should be targeted when aiming to reduce overall downtime.

Importance of Reducing Unplanned Downtime

Unplanned downtime is significantly more costly and dangerous than planned downtime. Since unplanned downtime is unpredictable and the process could go down for numerous reasons, it is impossible to be prepared for every situation.

Waiting on parts or the necessary personnel to fix an issue takes time and could mean the machine stays down longer. Longer downtime means less time making product, directly affecting the bottom line.

Another cost of unplanned downs is unsellable product and wasted material. The periods right before the down, during the down, and during startup tend to produce off-quality product.

Unplanned downtime can also contribute to near-misses or accidents. During unplanned downtime the goal is to get the machine or process up and running again as soon as possible. This pressure can create a stressful, chaotic environment, resulting in people reacting rather than stopping to think about the best plan forward.

Reducing unplanned downtime can help lower overall operating costs. It also reduces the times when employees are put in unpredictable situations, decreasing the likelihood of an accident occurring.

How to Reduce Downtime

There are numerous reasons for process downtime and multiple approaches may need to be implemented in the effort to reduce it.

1. Track Downtime

Before jumping into the steps of reducing downtime, it is critical to track it. Tracking downtime lets you see why the process is going down and provides a metric for whether it is improving.

The data collected in tracking downtime will be used to help reduce it. Consider collecting the following data for each down occurrence:

  • Duration
  • Reason/Cause
  • Product at time of down
  • Process Area
  • Shift or Crew
  • Operator Comments
  • Other attributes such as environmental occurrences due to downtime, waste collected over the duration, safety concerns, etc.

This data can be collected manually; however, an automated system will ensure the data is collected for each event. More consistent data will help reduce the downtime.
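Whether the collection is manual or automated, it helps to store each event in a consistent structure. Here is a minimal sketch of such a record in Python; the field names and values are hypothetical and would be adapted to the site's reason tree and areas.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DowntimeEvent:
    start: datetime
    end: datetime
    reason: str        # cause from the predefined reason tree
    product: str       # product running at the time of the down
    area: str          # process area
    crew: str          # shift or crew on duty
    comments: str = "" # free-form operator comments

    @property
    def duration_min(self) -> float:
        return (self.end - self.start).total_seconds() / 60

event = DowntimeEvent(
    start=datetime(2023, 5, 1, 14, 10),
    end=datetime(2023, 5, 1, 15, 25),
    reason="Felt change",
    product="Grade A",
    area="Press section",
    crew="B",
)
print(f"{event.reason}: {event.duration_min:.0f} minutes")
```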

Downtime tracking software can automate data collection and help organize downtime events. Some considerations when researching downtime tracking software:

  • Ease of use
  • Automatic capture of downtime events
  • Recording of downtime causes and other data
  • Analysis of data and events
  • Integration with process data

There are many options for downtime tracking software on the market. Some are dedicated downtime tracking applications, while others, like dataPARC’s PARCview may offer a suite of manufacturing analytics tools that include a downtime tracking module. The right choice is the one that will be used consistently.

Looking to reduce downtime? dataPARC’s real-time production monitoring software uses smart alarms to automatically alert operators & maintenance crews to unplanned downtime events.

2. Monitor Production

Having a system to monitor production can also help reduce downtime.

Visible process trends at operator stations give a visual of how the process is running over time and if variables are migrating or staying consistent.

Real-time production dashboards can be used to display quality data, relaying information directly from the lab to operations. This ensures product is continuously on quality.

Alarms can be used independently or in conjunction with trends and dashboards to warn operators when upset conditions are occurring. This can allow them to react more quickly, potentially preventing a down from happening.
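At its simplest, such an alarm is a limit check on the latest reading, escalated however the site prefers. The sketch below uses hypothetical tag names and limits and simply prints an alert; a real system would push the message to an operator display or notification service.

```python
# Minimal sketch of a limit-based alert; tag names and limits are hypothetical.
limits = {
    "headbox_pressure": (18.0, 24.0),  # (low, high) alarm limits
    "stock_flow": (550.0, 650.0),
}

latest = {"headbox_pressure": 24.6, "stock_flow": 602.0}  # most recent readings

for tag, value in latest.items():
    low, high = limits[tag]
    if not (low <= value <= high):
        # In practice this would notify operations rather than print.
        print(f"ALERT: {tag} = {value} outside ({low}, {high})")
```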

3. Create a Preventative Maintenance Schedule

Preventative maintenance happens during planned downtime or while the process is running. Part replacement during planned downtime allows the site to order the necessary parts and make sure the proper personnel are on site to perform the tasks, saving time and money.

Regular maintenance when the process is running, such as adding or changing lubricating oils, and cleaning can help increase the lifetime of the parts.

Once a schedule is created, it can be tracked to ensure tasks are being accomplished. MDE (PARCview’s Manual Data Entry) can be configured on a time schedule and integrated with alerts. If a task is skipped, a reminder message can be sent to the operator or escalated to a supervisor.

Maintenance data can be captured and digitized to help predict downtime events for the development of preventative maintenance schedules.

Recording preventative maintenance data allows sites to analyze it alongside downtime and process data. Correlations can appear and help drive necessary maintenance and reduce downtime.

4. Provide Operator Decision Support

Unplanned down events are inevitable and cannot be eliminated completely, so a priority of reducing downtime should also be reducing the duration when a downtime event occurs.

Creating tools and troubleshooting guides for operators to use in the event of a down will help get the process back up more quickly.

To get the process running, operators need to know why it went down in the first place. Providing operators with the necessary resources to find the root cause is key to resolving the issue quickly.

Process dashboards, trends, 5-Why analyses, and workflows can help determine the root cause.

Trends, dashboards, and centerlines can draw attention to significant changes in the process. dataPARC’s Centerline display is a tabular report with run-based statistics. This format helps ensure the process is consistent and can point to variables running outside of past operating conditions or limits.

Centerlines provide early fault detection and process deviation warnings, so operators can respond quickly to reduce unplanned downtime events.

A workflow or preconfigured 5-Why analysis can also help point to the root cause and a suggested solution.

Check out our real-time process analytics tools & see how you can reduce downtime & product loss.

Check out PARCview

5. Perform DMAIC Analysis

The above suggestions are starting points for reducing downtime. If those are in place, the DMAIC process (Define, Measure, Analyze, Improve, Control) can be used. It is a fundamental LEAN manufacturing tool that can help drive downtime reduction further.

Define

First, define the process, the conditions under which it is considered down, and a list of potential downtime reasons.

For each process, determine how it is identified as running or not running.

Many downtime tracking applications need a tag or variable to indicate when the process is considered down. If a specific tag does not exist, consider a utility feeding the process, such as steam, water, or pressure. As long as there is a clear value that indicates whether the process is running, that variable can be used.
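Given such a running-indicator tag, detecting down events amounts to finding the intervals where the value sits below its "running" threshold. A minimal sketch with an invented steam-flow signal and threshold:

```python
import pandas as pd

# One-minute samples of a utility tag; the process is considered down below 5.0.
steam = pd.Series(
    [22, 23, 21, 3, 2, 1, 2, 20, 22, 21],
    index=pd.date_range("2023-05-01 08:00", periods=10, freq="1min"),
)
RUNNING_THRESHOLD = 5.0

down = steam < RUNNING_THRESHOLD
# Group consecutive down samples into discrete events.
event_id = (down != down.shift()).cumsum()
events = (
    steam[down]
    .groupby(event_id[down])
    .apply(lambda s: (s.index.min(), s.index.max()))
)
for start, end in events:
    print(f"Down from {start} to {end}")
```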

Brainstorming a list of potential downtime reasons is also needed prior to tracking the events. This reason list/tree can be shared or unique for each process area.

Assigning reasons to downtime events provides data that can be used to reduce downtime in the future.

These reasons need to include both planned and unplanned causes. During the Analyze phase, the planned reasons can be filtered out to focus on the unplanned downtime. For more information on creating a reason tree, see 5 steps to harness your data’s potential.

Measure

Measuring and assigning a reason to the downtime is a critical step in being able to reduce it. Having a robust downtime tracking software will help make measuring the downtime easier. Make sure to capture the who, what, when, where, and why of the downtime event.

Once the downtime tracking software records the downtime event, a reason can be assigned.

Some systems can automatically assign reasons based on an error code from the machine. Users can verify the reason or select it from the predefined reason tree.

Additional information can be helpful to capture for the analyze phase. You may consider allowing users to type in free form comments in addition to the predefined reason to further explain why a downtime event occurred. If using PARCview, the evidence field can be configured to capture other important process data over the duration of the event.

Analyze

Now that the downtime is recorded and categorized, it can be analyzed. Pareto charts are useful when analyzing downtime events. Data can be charted on a Pareto by duration or count of events.

Pareto charts can help you analyze downtime events and learn the most significant causes of downtime.
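A Pareto chart orders causes by their total impact and overlays the cumulative percentage, so the few causes responsible for most of the lost time stand out. A sketch with illustrative downtime totals:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Total downtime minutes by assigned reason (illustrative values).
downtime = pd.Series(
    {"Felt change": 420, "Sheet break": 300, "Pump failure": 180,
     "Power outage": 90, "Lack of material": 60}
).sort_values(ascending=False)

cumulative_pct = downtime.cumsum() / downtime.sum() * 100

fig, ax1 = plt.subplots()
downtime.plot.bar(ax=ax1)                             # bars: minutes lost per reason
ax1.set_ylabel("Downtime (min)")

ax2 = ax1.twinx()
cumulative_pct.plot(ax=ax2, color="red", marker="o")  # line: cumulative %
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)

plt.title("Downtime Pareto by reason")
plt.tight_layout()
plt.show()
```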

Take into consideration other key process data, such as safety concerns, environmental risks, or material wasted, in addition to the duration of events, to help determine which downtime cause will be most beneficial to target and reduce.

It is not always the reason with the most total minutes down that should be the target.

Take, for instance, an event that caused 15 hours of downtime but was due to a weak part and is unlikely to happen again, versus a cause that occurs monthly but results in only about 75 minutes of downtime each time. The recurring event is going to be more beneficial to improve.

Improve

When looking for ways to reduce a downtime cause, look both at how to prevent the event from occurring in the first place and at how to get the process back up when the event does occur. Both approaches are needed to reduce downtime.

Think about the frequency of inspections and cleanings, how long parts last, and whether they can be put on a schedule to be replaced rather than waiting for them to fail while the process is running. Refer to the preventative maintenance schedule and update it as needed.

Determine the best way to reduce the most impactful downtime causes or reduce the effect. A payoff matrix can help point to the most impactful, least costly solutions.

Control

Continue to measure and analyze the downtime to ensure items that have been reduced do not start popping back up. Repeat the cycle and target another reason. Workflows and SOPs (Standard Operating Procedures) can be created to help stay in control.

Conclusion

Reducing unplanned downtime requires multiple approaches; finding the right tools and software for tracking and monitoring is key. Data is needed to drive improvement, both by preventing future events and by reducing the duration when the process does go down.

Downtime tracking software can help save, organize, and review downtime events, allowing you to more effectively reduce downtime in your manufacturing process. dataPARC’s PARCview integrates downtime tracking and process monitoring in one user-friendly program.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


Go beyond a typical gap analysis with a real-time gap tracking dashboard. Minimize manufacturing gaps such as operational cost or waste by creating gap tracking systems, such as dashboards, that calculate the gap in real time rather than at the end of the month. In this article we outline why real-time gap tracking is beneficial and walk through the steps to create real-time calculations and dashboards that track manufacturing gaps as they happen, allowing operations to make data-driven decisions.

Implement real-time gap tracking with dataPARC to help you optimize & control your processes

View Process Optimization Solutions

Why Gap Tracking?

Gap tracking goes a step beyond gap analysis. Gap analysis is the comparison of actual operating conditions against targets; it is typically done monthly and is an important tool in the continuous improvement process. However, gap analysis has some shortcomings. The feedback loop is drawn out: by the time you can collect the data and compare values, it can be days or months after the fact. This can prevent actionable solutions and cause lost opportunity.

To resolve these shortcomings, operations needs real-time information correlated to the levers they can pull on the machine. They need the power to make decisions and adjustments based on this information. The real-time information needs to be quick and easy to view and understand, and progress made needs to be measured and visible in real time. A dashboard can provide this information.

Dashboards help visualize the live data and updates or changes to a graphic can happen relatively quickly. Operations can utilize a real-time gap tracking dashboard to alert them of what is causing the gap and get the process back on track in the moment, rather than realize the problem days, weeks, or months later.

How to Create a Gap Tracking Dashboard

Follow along as we demonstrate how we built this gap tracking dashboard.

Conduct a Gap Analysis

The first step in creating a Real-Time Gap Tracking dashboard is to complete a Gap Analysis:

  • Define the area of focus and targets
  • Measure the variables
  • Analyze the targets against the current values
  • Improve the process with quick wins to minimize large gaps
  • Control with routine gap analysis; this can include creating a gap tracking dashboard

Gap Tracking Requirements

Regardless of the process and gap being tracked, the same general information is needed to build a gap tracking dashboard. Many of the following requirements will be pulled from the Gap Analysis.

Adequate measurements

As with gap analysis, adequate measurements are required for gap tracking; however, measurements may need to be taken more frequently for a responsive gap tracking calculation. This can be a challenging step, but the more variables that can be measured close to real time, the more accurate the gap tracking calculations will be.

If variables only have 3-4 data points per day, it can be difficult to see how changes affect the gap in real time. Variables without adequate measurement may be removed from the dashboard, with more emphasis placed on those with more data points.

Process baselines

Process baselines are a great way to determine targets if they are not already outlined. Overall process targets typically come from upper management or operational plans. It is necessary to break these overall targets into their individual inputs. Those inputs could be broken down even further. Depending on the process, there could be targets for different products.

One way to determine a baseline is to find periods of good quality and production, then ask what the operating conditions were and how they can be replicated.

Once a list of individual variables is created, a target should be assigned to each. By meeting each individual target, the overall target should be met. If the individual variables do not have targets, the process baselines can be used instead.

dataPARC’s Centerline display is a smart aggregation tool which can be used to help establish operation baselines

Custom calculations

Another key step in building a gap tracking system is to standardize measurements and units. All variables should be converted to a per-unit basis; common options include dollars per ton, dollars per hour, off-quality tons per ton, or waste tons per ton.

Once all the input variables are converted to the same unit, they can be combined to create the overall process gap.
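Once everything is on the same per-unit basis, the overall gap is just the sum of the individual gaps against their targets. A minimal sketch in dollars per ton, with invented categories and numbers:

```python
# Actual and target cost by category, all normalized to dollars per ton.
actual = {"Chemicals": 31.0, "Steam": 18.5, "Electricity": 12.2, "Waste": 6.8}
target = {"Chemicals": 28.0, "Steam": 19.0, "Electricity": 11.5, "Waste": 5.0}

gaps = {cat: actual[cat] - target[cat] for cat in actual}
overall_gap = sum(gaps.values())

for cat, gap in gaps.items():
    flag = "over" if gap > 0 else "under"
    print(f"{cat}: {gap:+.2f} $/ton ({flag} target)")
print(f"Overall gap: {overall_gap:+.2f} $/ton")
```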

Value opportunities

Involving those with a high degree of process knowledge is critical. They will be able to help identify all the process inputs, then narrow the list to variables that can provide the most value opportunities.

These value opportunities are then tied into the gap tracking dashboard as an operator workflow. The workflow focuses on the variables that operators can control and that have the greatest effect on the gap. This is where the calculated gap gets connected to process levers that operators can manipulate to get things back on track.

With insight from a process expert, one or two variables may stand out as having the most room for value-added opportunities.

These variables should be the focus when it comes to the layout of the graphic. As most read from left to right, top to bottom, the most important information should appear in the upper left of the dashboard (or oriented closest to the operator if the monitor is going to be off to the side or high up). This will help ensure that those variables with the most opportunity to close the gap are being looked at first.

Operator buy-in

Ultimately, operators are the ones who will be using the dashboard to make data-driven decisions in real time. Involve those who will end up using the dashboard in the design and implementation to help build ownership and operator buy-in. Without operator buy-in, the dashboard is a waste.

Software to visualize the dashboard and perform the calculations

Find the right visualization tool for your site. A process data visualization tool should be able to display trends, provide a grid with the ability to change colors or raise alerts, and link to other displays or trends for quick data interpretation.

In this example dataPARC’s Graphic Designer was used to build the dashboard.

Depending on the process and the number of inputs, these calculations can get rather large and take a while to process on the fly, and even longer when looking at historical data.

dataPARC’s Calc Server allows calculations to be historized, making it a great tool for fast calculations and viewing history.

Considerations When Building & Using a Gap Tracking Dashboard

Gap tracking dashboards are going to look different from machine to machine and site to site. Here is a list of suggestions to keep in mind while building your own gap tracking dashboard.

  • Use grids and rolled-up data to convey the current gap. Colored or patterned backgrounds can help draw attention.
  • When a gap occurs, what levers can operations pull to make a change? These variables should be a focus of the dashboard. This can be done by adding trends, focused metrics, or a link to another display to “zoom into” that lever and see if a change can be made and how it affects the process.
  • Work with operators to determine those levers and key pieces of information; ask what would be helpful for them to see on this dashboard. The goal is to get all the important information in one place.
  • Monitor progress to determine whether the overall gap is decreasing; if not, determine why. Make changes to the dashboard as needed.
  • The dashboard should provide information without setting requirements on how to run, as not all variables are always within the operators’ control.

Manufacturing Gap Tracking Example

Let’s look at a Gap Tracking dashboard for a paper machine.

1. Summary Trend

The first element is a large trend showing the overall paper machine gap in dollars per day. The blue line represents the real-time gap and the yellow line the target.

2. Category Trend

The second trend, in the bottom left, shows the individual calculated gaps by category. These are also on a dollars-per-day basis. This view allows the user to quickly identify if a category is trending in the wrong direction.

3. Gap Tracking Table

The table in the bottom right of the screen shows the current category gaps on a dollars-per-ton and dollars-per-hour basis. The values are highlighted red for over the expected cost and green for under. At a glance, users can see the current status of each category.

4. Chemical Usage

Off to the right is a chemical usage button that will pull up another display. This button was added because chemical usage was found to have multiple inputs and levers for the operator to pull to close the gap.

Watch the video below to see this gap tracking dashboard in action.

Manufacturing analytics software like dataPARC’s PARCview offer tools to help manufacturing companies perform real-time gap tracking post-gap analysis.

Conclusion

A gap tracking dashboard can provide operators with a clearer picture of the gap in real-time, allowing them to make data-driven decisions. The alerts and real-time calculations bring awareness, letting operators know when something isn’t running optimally.

It is a way to drive process savings by tying analytics to actionable changes.

Instead of waiting until the end of the month to find there was a gap, it is found in real time. Troubleshooting moves into the present: changes can be made immediately to reduce or prevent larger process losses.

Although gap tracking dashboards are powerful tools, they do not replace regular gap analysis. To drive continuous improvement, gap analysis should be done regularly, and targets on the gap tracking dashboard should be adjusted to reflect any process changes.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


Both established operational data historians and newer open-source platforms continue to evolve and add new value to business, but the significant domain expertise now embedded within data historian platforms should not be overlooked.

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (operational, or data historians), and newer open source time-series databases.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

Data Historian vs. Time Series Database

Functionally, at a high level, both classes of time-series databases perform the same task of capturing and serving up machine and operational data. The differences revolve around types of data, features, capabilities, and relative ease of use.

Time-series databases and data historians, like dataPARC’s PARCserver Historian, capture and return time series data for trending and analysis.

Benefits of a Data Historian

Most established data historian solutions can be integrated into operations relatively quickly. The industrial world’s versions of commercial off-the-shelf (COTS) software, such as established data historian platforms, are designed to make it easier to access, store, and share real-time operational data securely within a company or across an ecosystem.

While, in the past, industrial data was primarily consumed by engineers and maintenance crews, this data is increasingly being used by IT due to companies accelerating their IT/OT convergence initiatives, as well as financial departments, insurance companies, downstream and upstream suppliers, equipment providers selling add-on monitoring services, and others. While the associated security mechanisms were already relatively sophisticated, they are evolving to become even more secure.

Another major strength of established data historians is that they were purpose-built and have evolved to be able to efficiently store and manage time-series data from industrial operations. As a result, they are better equipped to optimize production, reduce energy consumption, implement predictive maintenance strategies to prevent unscheduled downtime, and enhance safety. The shift from using the term “data historian” to “data infrastructure” is intended to convey the value of compatibility and ease-of-use.

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

What about Time Series Databases?

In contrast, flexibility and a lower upfront purchase cost are the strong suits for the newer open source products. Not surprisingly, these newer tools were initially adopted by financial companies (which often have sophisticated in-house development teams) or for specific projects where scalability, ease-of-use, and the ability to handle real-time data are not as critical.

Since these new systems were somewhat less proven in terms of performance, security, and applications, users were likely to experiment with them for tasks in which safety, lost production, or quality are less critical.

While some of the newer open source time series databases are starting to build the kind of data management capabilities already typically available in a mature operational historian, they are not likely to completely replace operational data infrastructures in the foreseeable future.

Industrial organizations should use caution before leaping into newer open source technologies. They should carefully evaluate the potential consequences in terms of development time for applications, security, costs to maintain and update, and their ability to align, integrate or co-exist with other technologies. It is important to understand operational processes and the domain expertise and applications that are already built-into an established operational data infrastructure.

Why use a Data Historian?

Typical connection management and config area from an enterprise data historian.

When choosing between data historians and open source time-series databases, many issues need to be considered and carefully evaluated within a company’s overall digital transformation process. These include type of data, speed of data, industry- and application-specific requirements, legacy systems, and potential compatibility with newly emerging technologies.

According to the process industry consulting organization ARC Advisory Group, modern data historians and data infrastructures will be key enablers for the digital transformation of industry. Industrial organizations should give serious consideration when investing in modern operational historians and data platforms designed for industrial processes.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

11 Things to Consider When Selecting a Data Historian for Manufacturing Operations:


1. Data Quality

The ability to ingest, cleanse, and validate data. For example, are you really obtaining a true average? If someone calibrates a sensor, will the average include the calibration data? If an operator or maintenance worker puts a controller in manual, has a failed instrument, or is overriding alarms, does the historian or database still record the data? Will the average include the manual calibration setpoint?
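One practical way to answer these questions is to carry status flags alongside the values and exclude flagged periods from aggregates. A small pandas sketch with hypothetical flag values:

```python
import pandas as pd

# Readings with a status flag; "CAL" marks a sensor-calibration period.
df = pd.DataFrame({
    "value":  [71.2, 71.5, 10.0, 9.8, 71.8, 71.4],
    "status": ["OK", "OK", "CAL", "CAL", "OK", "OK"],
})

raw_avg = df["value"].mean()                              # includes calibration data
clean_avg = df.loc[df["status"] == "OK", "value"].mean()  # excludes it

print(f"raw average = {raw_avg:.1f}, cleaned average = {clean_avg:.1f}")
```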

2. Contextualized Data

When dealing with asset and process models based on years of experience integrating, storing, and accessing industrial process data and its metadata, it’s important to be able to contextualize data easily. A key attribute is the ability to combine different data types and different data sources. Can the historian combine data from spreadsheets and different databases or data sources, precisely synchronize time stamps and be able to make sense of it?

3. High Frequency/High Volume Data

It’s also important to be able to manage high-frequency, high-volume data based on the process requirements, and expand and scale as needed. Increasingly, this includes edge and cloud capabilities.

4. Real-Time Accessibility

Data must be accessible in real time so the information can be used immediately to run the process better or to prevent abnormal behavior. This alone can bring enormous insights and value to organizations.

5. Data Compression

Deep compression based on specialized algorithms that compress data but still enable users to reproduce a trend if needed.
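Historians commonly use swinging-door or deadband-style algorithms for this. The sketch below shows the simpler deadband idea: a sample is archived only when it moves more than a tolerance away from the last stored value, yet the archived points can still reproduce the trend to within that tolerance.

```python
def deadband_compress(samples, tolerance):
    """Keep a sample only if it differs from the last stored value by more
    than `tolerance`. Returns (index, value) pairs of the archived points."""
    stored = []
    last_value = None
    for i, value in enumerate(samples):
        if last_value is None or abs(value - last_value) > tolerance:
            stored.append((i, value))
            last_value = value
    return stored

raw = [50.0, 50.02, 50.03, 50.5, 50.52, 51.1, 51.08, 51.12]
print(deadband_compress(raw, tolerance=0.1))
# [(0, 50.0), (3, 50.5), (5, 51.1)]
```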

6. Sequence of Events

SOE capability enables users to reproduce precisely what happened in operations or a production process.

7. Statistical Analytics

Built-in analytics capabilities, from statistical spreadsheet-like calculations to more complex regression analysis. Additionally, time series systems should be able to stream data to third-party applications for advanced analytics, machine learning (ML), or artificial intelligence (AI).

8. Visualization

The ability to easily design and customize digital dashboards with situational awareness that enable workers to easily visualize and understand what is going on.

9. Connectability

Ability to connect to data sources from operational and plant equipment, instruments, etc. While these connections are often time-consuming to build, special connectors can help. OPC is a good standard but may not work for all applications.

10. Time Stamp Synchronization

Ability to synchronize time stamps based on the time the instrument is read, wherever the data is stored (on-premises, in the cloud, etc.). These time stamps align with the data and metadata associated with the application.
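When sources sample at different times, aligning them often comes down to an "as-of" join: for each reading of one tag, take the most recent reading of the other. A pandas sketch with invented tags and timestamps:

```python
import pandas as pd

flow = pd.DataFrame({
    "time": pd.to_datetime(["2023-05-01 08:00:05", "2023-05-01 08:00:35",
                            "2023-05-01 08:01:05"]),
    "flow": [120.0, 122.5, 121.0],
})
temp = pd.DataFrame({
    "time": pd.to_datetime(["2023-05-01 08:00:00", "2023-05-01 08:00:30",
                            "2023-05-01 08:01:00"]),
    "temp": [68.1, 68.4, 68.2],
})

# For each flow reading, attach the most recent temperature reading.
aligned = pd.merge_asof(flow, temp, on="time", direction="backward")
print(aligned)
```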

11. Partner Ecosphere

A partner ecosphere can make it easy to layer purpose-built vertical applications onto the infrastructure for added value.

Looking Ahead

Rather than compete head on, it’s likely that the established historian/data infrastructures and open-source time-series databases will continue to co-exist in the coming years. As the open-source time series database companies progressively add distinguishing features to their products over time, it will be interesting to observe whether they lose some of their open-source characteristics. To a certain extent, we previously saw this dynamic play out in the Linux world.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


Manufacturers use a variety of tools and systems every day to manage their process from start to finish. It is critical that these systems provide a “single pane of glass” and a “single version of truth” along the way, meaning data can be viewed from any device, location, or system and is synchronized across all platforms. Manufacturing Operations Management systems make this possible.

Real-time manufacturing operations management and industrial analytics tools

Check out PARCview

What is Manufacturing Operations Management?

Manufacturing Operations Management (MOM) is a form of LEAN manufacturing in which a collection of systems is used to manage a process from start to finish. The key to MOM is ensuring data is consistent across all systems being used, from scheduling and production to shipment and delivery.

MOM includes software tools designed for the management of people, business processes, technology, and capital assets to meet customer demand while creating shareholder value. Tying in LEAN manufacturing, processes must be performed efficiently and resources managed productively. These are the prerequisites for successful operations management.

Key Applications of Manufacturing Operations Management


Supply chain & resource management

MOM systems include tools for planning, procuring, and receiving raw materials and components, especially as it relates to obtaining, storing, and moving necessary materials/components in a timely manner and of suitable quality to support efficient production, something that is certainly critical in these times of supply chain disruptions.

To deal with today’s dynamic business environment ranging from challenges caused by pandemics, shutdowns, geo-political conflicts, and supply chain disruptions, organizations need to be able to be sustainable, operationally resilient, conform to ESG goals, deploy the latest cybersecurity tools, and connect its workforces from any location.

Process & production management

Once all the resources are gathered, MOM tools need to be established for implementing product designs to specifications, developing the formulations or recipes for manufacturing the desired products, as well as manufacturing of product or products that conform to specifications and comply with regulations.

Organizations must monitor and adjust their processes quickly and automatically, to efficiently evaluate the situation when an inevitable glitch occurs. This is a prime opportunity for digital transformation through MOM systems.

Distribution & customer satisfaction management

The final stage of MOM relates to the distribution to the customers, particularly as it relates to sequencing and in-house logistics, as well as supporting products through their end-of-life cycles.

Organizations must react in real-time to changing market conditions and customer expectations. They will have to innovate with new business processes that reach throughout the organization, into the design and supply chain.

Driving innovation & transformation

Successfully innovating at this level involves managing people, processes, systems, and information. When disruptive technologies are in the mix, the first challenge is often tied up in the interplay of people and technology.

Only when the people involved begin to understand what the new MOM technologies are capable of and have the tools to visualize the data and real-time manufacturing analytics software to convert this data into actionable information can they begin to take steps towards achieving the innovation.

One output of manufacturing operations management systems is a production dashboard, like this one built with dataPARC’s PARCview, which creates a shared view of current operating conditions and critical KPIs.

Manufacturing Operations Management Systems

Today’s MOM systems can play a role in achieving the next levels of operations performance because they marshal many or all the needed services in one place and can provide a development and runtime environment for small or large applications.

Common MOM Tools

In addition to leveraging the latest AI, ML, AR/VR, APM, digital twin, edge, and Cloud technologies, MOM systems often consist of one or more of the following:

  • Manufacturing Execution Systems (MES)
  • Enterprise Asset Management (EAM)
  • Human-Machine Interface (HMI)
  • Laboratory Information Systems (LIMS)
  • Plant Asset Management (PAM)
  • Product Lifecycle Management (PLM)
  • Real-time Process Optimization (RPO)
  • Warehouse Management Systems (WMS)

MOM systems integrate with business systems, engineering systems, and maintenance systems both within and across multiple plants and enterprises.

An example of a multi-site operation with unique manufacturing applications at each site. Some tools, like dataPARC’s PARCview, enable manufacturers to integrate data across sites for more effective manufacturing operations management.

Supply Chain Management (SCM), Supplier Resource Management (SRM), and Transportation Management Systems (TMS) are commonly used to manage the supply chain.

Plant automation systems, such as Distributed Control Systems (DCS) and Programmable Logic Controllers/Programmable Automation Controllers (PLCs/PACs) are key technologies driving manufacturing production.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Data Visualization and Real-Time Analytics for MOM

Perhaps the most important manufacturing operations management tools for managing production are the data visualization and real-time manufacturing analytics platforms, like dataPARC, which provide integrated operations intelligence and time-series data historian software.

These MOM tools focus on data connectivity, real-time plant performance, and visualization + analytics to empower plant personnel and support their decision-making process.

Benefits of MOM Visualization & Analytics Tools


Eliminate Data Silos

Most real-time manufacturing operations management analytics tools offer the ability to connect to both manufacturing and operations data. Data from traditionally isolated data silos, such as lab quality data, or ERP inventory data, can be pulled in and presented side-by-side for analysis in a single display.

Establish a single source of truth

MOM analytics tools offering visualization plus integration capabilities enable manufacturers to create a “single version of truth” which everyone from management to the plant floor can use to understand the true operating conditions at a plant.

Often combining multiple sites and multiple data sources to form a single view, users leverage this data to gain perspectives and intelligence from both structured and unstructured operational and business data.

Produce common KPI dashboards

By measuring metrics and KPIs, such as production output, yields, material costs, quality, and downtime, users at multiple levels and roles can make better decisions to help improve production efficiencies and business performance.

Real-time manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites or from multiple manufacturing process areas and display them in a common dashboard.

Manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites for real-time production monitoring

Facilitate data-driven decision-making

Without operations intelligence provided by manufacturing operations management systems, users are often unable to properly understand how their decisions affect the process. MOM analytics software can display data sourced in the business systems for direct access to cost, quality control, and inventory data to support better business decisions.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Final Thoughts

The next generation of MOM systems is here. The economics of steady-state operations have been replaced with a dynamic, volatile, disruptive economic environment in which adapting to changing supply and demand, along with issues such as pandemics, shutdowns, and geo-political conflicts, is the norm. Tighter production specifications, greater economic pressures, and the need to maintain supply chain visibility in real time, be sustainable and operationally resilient, and meet more stringent process safety measures, cybersecurity standards, ESG goals, and environmental regulations further challenge this environment.

Managing these challenges requires more agile, less hierarchical structures; highly collaborative processes; reliable instrumentation; high availability of automation assets; excellent data; efficient information and real-time decision-support systems; accurate and predictive models; and precise control. Uncertainty and risks must be well understood and well managed in all aspects of the decision-making process.

Perhaps most importantly, everyone must have a clear understanding of the business objectives and progress toward those objectives. Increasingly, effective manufacturing operations management requires real-time decisions based on a solid understanding of what is happening, and the possibilities over the entire operations cycle. Organizations pursuing Digital Transformation should consider focusing on MOM systems and not just transformative new technologies to drive operations performance to new levels. This means utilizing software tools, such as manufacturing data integration, visualization, and real-time manufacturing analytics software that gathers a user’s manufacturing data in one single pane of glass view and establishes a single source of the truth.

This article was contributed by Craig Resnick. Craig is a primary analyst at ARC Advisory Group. His focus areas include production management, OEE, HMI software, automation platforms, and embedded systems.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


Minimize manufacturing gaps such as operational cost or waste by performing a gap analysis with process data. In this article we will walk through the steps of a basic manufacturing gap analysis and provide an example.

Implement real-time gap tracking with dataPARC to help you optimize & control your processes

View Process Optimization Solutions

What is a Gap Analysis for Manufacturing?

A gap analysis is the process of comparing current operating conditions against a target and determining how to bridge the difference. This is an essential part of continuous improvement and LEAN Manufacturing.

A manufacturing gap analysis can be performed on a variety of metrics, such as:

  • operational costs
  • quality
  • productivity
  • waste
  • etc.

When it comes to bridging the gap, the ideal case is to reach the target with a single, permanent change. This is not always possible; there are times when the gap is fluid and would benefit from constant monitoring and small adjustments. In these cases, operations can use a real-time gap tracking dashboard to alert them to what is causing the gap and get the process back on track in the moment, rather than realizing the problem days, weeks, or months later.

Manufacturing analytics software like dataPARC’s PARCview offer tools to help manufacturing companies perform real-time gap tracking post-gap analysis.

Who Conducts a Manufacturing Gap Analysis?

Gap analysis can be performed by anyone trying to optimize a process. As mentioned above there are a multitude of metrics that can be measured.

A process engineer might want to reduce operational cost by focusing on energy consumption, someone in the finance department may notice an increase in chemical cost every month, and a supervisor may want to reduce the time it takes to complete a task to focus on other items.

Almost every department can leverage gap tracking in one way or another.

How to Perform a Gap Analysis in Manufacturing

Like many other improvement strategies, we can use the DMAIC method (Define, Measure, Analyze, Improve, Control) to perform a gap analysis and implement a live gap tracking dashboard.

To create a gap tracking dashboard, a gap analysis needs to be completed first. In the last stage, Control, the dashboard is created, and operations can then perform the Analyze-Improve-Control steps in real time.

1. Define

The first step is to define the area of focus and identify the target. A great place to start when looking for an area of focus is the company’s strategic business plan, operational plan, or yearly operational goals. Many times, these goals will already have targets in place.

2. Measure

Next, the process must be measured. Take a close look at the measurement system. Is the data reliable? Does the measurement system provide the necessary information? If so, measure the current state of the process.

If there is no current measurement system, one will need to be created. Although in-process measurements or calculations are best, manual input can also be used.

Some manufacturing analytics providers, like dataPARC, offer manual data entry tools which allow users to create custom tags for manual input. These tags can be trended and used like process tags in dashboards and displays.

3. Analyze

Take the data and compare it to the goal. How far from the target is the process? This is the gap. It may help to visualize the process gap in multiple ways such as with a histogram or trend display.

The histogram shows the overall distribution of the data, which can help narrow the focus. What does the peak look like? Is the distribution normal, skewed to one side, double-peaked, or edge-peaked?

A trend shows how the process shifts over time: are there periods of zero gap versus large gaps, for example by shift or by season?
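To make this step concrete, here is a minimal Python sketch (pandas and matplotlib) of how the gap could be computed and visualized from exported data. The file name, column names, and target value are assumptions for illustration, not part of any particular product.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical daily cost data exported from a historian or report.
df = pd.read_csv("daily_operational_cost.csv", parse_dates=["date"])
TARGET = 100.0                      # assumed daily cost target

df["gap"] = df["cost"] - TARGET     # positive = over target
print(df["gap"].describe())         # size and spread of the gap

# Histogram to inspect the distribution shape, trend to spot shift/seasonal patterns.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
df["gap"].plot(kind="hist", bins=20, ax=ax1, title="Gap distribution")
df.set_index("date")["gap"].plot(ax=ax2, title="Gap over time")
plt.show()
```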

With the measurement system in place, the gap quantified, and some graphical representations of the data, it is time to brainstorm potential causes of the gap. Brainstorming is not the time to eliminate ideas; get everything written down first. There are a variety of tools that can help in this process:

Fishbone Diagram

This classic tool helps determine root causes by separating the process into categories. The most common categories are People, Process/Procedure, Supplies, Equipment, Measurement, and Environment, but other categories or any combination can be used to fit the situation.

The fishbone diagram is a classic tool for performing root cause analysis.

The team can brainstorm each category and identify any causes that could play a role in the problem. Dive one step further with a 5-why analysis, a method that simply keeps asking “why” until it can no longer be answered, to ensure the true root cause is uncovered.

Is gap analysis one of your digital transformation goals? Let our Digital Transformation Roadmap guide your way.

get the guide

SWOT Chart

This chart is made up of four labeled quadrants: Strengths, Weaknesses, Opportunities, and Threats. The strategy is used to determine the internal and external factors that drive the effectiveness of the process. For potential root causes, focus on what appears in Weaknesses and see whether potential solutions find their way into Opportunities.

SWOT charts are another fundamental root cause analysis tool.

McKinsey 7S Framework

The McKinsey framework is made up of 7 elements, categorized as 3 “Hard” (controllable) elements and 4 “Soft” (less tangible) elements. For each element, write the current and desired state. It is important that the elements are in alignment with one another; any misalignment could point to a root cause.

The McKinsey Framework.

4. Improve

Determine the best way to bridge the gap and implement the changes. A payoff matrix or efficiency/impact chart can help pick the most effective, least costly options. Focus on quick wins first. Items in the busy-work quadrant can be completed but are not a priority. For those in Major Projects, ask whether the price is worth the impact. Anything that is low impact and high cost can be dropped.

A payoff matrix or efficiency impact trend can help you determine the best way to bridge gaps.

After the solutions are implemented, check the results by analyzing the data again and see if there was an improvement.

5. Control

Once the target is met, it is important to keep it that way. Monthly reports can be used to keep track of the process gap and make sure it stays in the desired range.

Set up a dashboard or other visual to monitor the process in real time. By tracking the gap in real time, operations can see how changes to the process affect the bottom line immediately, rather than at the end of the month.
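As a rough illustration of the idea (not dataPARC’s actual alerting API), a periodic check like the sketch below could flag when the live gap exceeds a threshold. The tag name, target, and the read_latest_value() helper are hypothetical stand-ins for whatever data source is actually in use.

```python
import random

TARGET_COST = 100.0      # assumed target
ALERT_THRESHOLD = 5.0    # alert when the gap exceeds this amount

def read_latest_value(tag: str) -> float:
    """Placeholder for a real historian read; returns a simulated value here."""
    return TARGET_COST + random.uniform(-10.0, 10.0)

def check_gap() -> None:
    current = read_latest_value("PM1.OperationalCost.Daily")  # hypothetical tag
    gap = current - TARGET_COST
    if gap > ALERT_THRESHOLD:
        # In practice this might send an email, text, or dashboard alert.
        print(f"Gap alert: cost is {gap:.1f} over target")

check_gap()
```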

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

An Example of Gap Analysis for Manufacturing

In this manufacturing example we are going to walk through a gap analysis to improve operational costs on a single paper machine.

Define

The company’s operational plan has a goal for monthly operational cost. To break this down into a manageable gap analysis, the focus will be looking at a single machine. This machine is not currently meeting the monthly operational cost goal on a regular basis.

Measure

Since this is an initiative from the operational plan, there is already a measurement system in place. The machine’s operational costs are broken down into five variables: Speed, Steam, Chemical, Furnish, and Basis Weight.

These variables are measured continuously, so data can be pulled as hourly, daily, and/or monthly averages. This variety of data views will help in the next stage. Each of these variables has a target, but some are missing upper and lower control limits.

Analyze

First, the combined daily operational cost was compared against the target. There were days when the target was met, but not consistently.

Next, each of the five variables was compared with its target separately over the past several months. From this view, Chemical and Steam stood out as the two main factors driving up the operational cost. With that in mind, we moved on to the Fishbone diagram and 5-why analysis.

Using the fishbone diagram, we were able to determine that chemical and steam were the two main factors driving up our costs over the past several months.

Improve

From the fishbone and 5-why, we found that there were targets but no control limits set for the chemicals. Operators were adding the amount of chemical they felt would pass the quality tests, without trying to apply only the necessary amount.

Thinking about the cost/effectiveness diagram, it is essentially free to add control limits to each chemical additive. Engineers pulled chemical and quality data from multiple months, created a histogram to find the distribution, and set control limits to give operators a better gauge of how much chemical to apply and the typical range needed to satisfy the quality tests.
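A minimal sketch of that calculation, assuming the chemical usage history has been exported to a CSV file (the file and column names are made up for illustration):

```python
import pandas as pd

usage = pd.read_csv("chemical_usage.csv")["chemical_lb_per_ton"]

mean, std = usage.mean(), usage.std()
lower, upper = mean - 3 * std, mean + 3 * std   # conventional +/- 3-sigma limits
print(f"Suggested control limits: {lower:.2f} to {upper:.2f}")
```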

For steam, there were many potential root causes around the fiber mix and cook. The mill already has SOPs for dealing with situations such as bad cooks. Another root cause that came up during the fishbone exercise was steam leaks. Most leaks can be fixed while the machine is running, so over the next several weeks there was a push to find and close major leaks.

Control

In this case, since limits were created for chemical usage, alarms were also created to alert operations if a limit was exceeded. Alerts are a great way to notify operations when a process is drifting out of control so quick corrections can be made.

After a few weeks of these changes, another analysis was completed. The operational costs were meeting the target, and it was time to move on to the next process. It is important not to forget about operational cost, however; it continued to be monitored monthly to ensure it did not exceed the target.

Conclusion

Performing routine gap analysis is an important step in LEAN manufacturing and continuous improvement. By following the steps above, manufacturers can optimize their processes by reducing waste and operational costs, improving quality, or pursuing other key metrics.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The most important differences between relational databases, time-series databases, data lakes, and other data sources lie in their ability to handle time-stamped process data and ensure data integrity.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

The Manufacturing Database Battle

What sets relational databases, time-series databases, and data lakes apart is how well each handles time-stamped process data and ensures its integrity.

This is relevant because the primary job of the data management technology is to:

  • Accurately capture a broad array of data streams
  • Deal with very fast process data
  • Align time stamps
  • Ensure the quality and integrity of the data
  • Ensure cybersecurity
  • Serve up these data streams in a coherent, contextualized way for operational personnel

Time-Series Databases

Digital technologies and sensor-based data are fueling everything from advanced analytics, artificial intelligence and machine learning to augmented and virtual reality models. Sensor-based data is not easily handled by traditional relational databases. As a result, time-series databases have been on the rise and, according to ARC Advisory Group research, this market is growing much more rapidly than traditional relational databases.

While relational databases are designed to structure data in rows and columns, a time-series database or infrastructure aligns sensor data with time as the primary index.

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (operational, or data historians), and newer open source time-series databases.

To gain maximum value from sensor data from operational machines, data must be handled relative to its chronology or time stamp. Because the time stamp may reflect either the time when the sensor made the measurement, or the time when the measurement was stored in the historian (depending upon the data source), it is important to distinguish between the two.
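To illustrate the idea of time as the primary index, and of carrying both timestamps, here is a small, generic Python sketch; it is not any vendor’s data model, and the tag name and values are invented.

```python
from bisect import insort
from dataclasses import dataclass
from datetime import datetime

@dataclass(order=True)
class Sample:
    measured_at: datetime   # when the sensor made the measurement
    stored_at: datetime     # when the historian recorded it
    tag: str
    value: float

series: list[Sample] = []   # kept sorted by measurement time

insort(series, Sample(datetime(2024, 1, 1, 8, 0, 0),
                      datetime(2024, 1, 1, 8, 0, 2),
                      "PM1.SteamFlow", 42.7))

def window(start: datetime, end: datetime) -> list[Sample]:
    """Range query by measurement time; cheap because the list stays sorted."""
    return [s for s in series if start <= s.measured_at < end]
```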

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

Relational Databases

Time series data technologies – whether open-source databases or established historians – are built for real-time data. Relational databases, in contrast, are built to highlight relationships, including the metadata attached to the measurement (alarm limits, control limits, customer spend, bounce rate, geographic distribution between different data points, etc.). Relational technologies can be applied to time series data, but this requires substantial amounts of data preparation and cleaning and can make data quality, governance, and context at scale difficult.
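By way of contrast, here is a tiny relational sketch using Python’s built-in sqlite3 module: tag metadata such as alarm limits lives in one table, readings in another, and a join ties them together. The schema and values are invented for illustration only.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tag_meta (tag TEXT PRIMARY KEY, alarm_hi REAL, alarm_lo REAL);
    CREATE TABLE readings (tag TEXT, ts TEXT, value REAL);
    INSERT INTO tag_meta VALUES ('PM1.SteamFlow', 50.0, 30.0);
    INSERT INTO readings VALUES ('PM1.SteamFlow', '2024-01-01T08:00:00', 55.2);
""")

# Join readings to their metadata to find values above the configured alarm limit.
rows = con.execute("""
    SELECT r.ts, r.value, m.alarm_hi
    FROM readings r JOIN tag_meta m ON r.tag = m.tag
    WHERE r.value > m.alarm_hi
""").fetchall()
print(rows)
```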

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Data Lakes

Data lakes, meanwhile, score well on scalability and cost-per-GB, but poorly on data access and usability. Not surprisingly, while data lakes hold the largest volumes of data, they typically have the fewest users. As with time-series technologies, the market will decide when and how these different technologies get used.

Looking Ahead

The fourth industrial revolution, or Industrie 4.0, along with major market disruptions such as the pandemic and the push for sustainability and operational resilience, has greatly accelerated digital transformation and driven exponential change in industrial operations and manufacturing.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF
0

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

In the process industries, optimization is the key to efficiency. And efficiency is what leads to profit – allowing manufacturers to produce more and waste less. To optimize their processes, many manufacturers use a combination of time series data historian and data visualization software. dataPARC and PI are two of the leaders in this space, and in this article we’ll compare dataPARC vs PI and highlight some of the advantages dataPARC has over PI as a process information management system.

Check out dataPARC’s real-time process data analytics tools & see how better data can lead to better decisions.

Check out PARCview

dataPARC vs PI: Similarities

dataPARC and PI have existed for decades and have large installation bases represented by major manufacturers around the world.

Both dataPARC and PI:

  • Offer a real-time data historian
  • Use a binary, cluster-index, flat file to store history
  • Have an asset structure to address the complexities of large disparate data sources
  • Offer many of the expected analytics & visualization tools: trending, graphics, reports
  • Can connect to various control systems for collecting time-series data in real-time
  • Use a store & forward function in case data connectivity is lost
  • Can work with very large tag-count systems

Now, let’s dive more into their differences and see how dataPARC sets itself apart.

dataPARC vs PI: Differences

Cost

We might as well start with what will be one of the key considerations when evaluating these two data historian and process data visualization toolkits.

Long story short, dataPARC’s total cost of ownership is lower when compared to other “like” industry solutions. Both the initial cost and the ongoing costs are considerably lower than those of the PI System.

Unlimited Users

A key reason for this is dataPARC’s unlimited license model, which makes it a great fit for organizations wishing to get production data in front of decision-makers at every level of the plant without worrying about having to purchase additional licenses.

PI uses a per-user pricing model. This tends to work for small organizations with only a few people needing to access the platform, but for larger organizations or enterprise implementations the cost adds up quickly.

With dataPARC, everyone who needs access to the data can have access at no additional cost – putting the power to make data-driven decisions in the hands of every employee.

Looking for an alternative to PI’s Data Historian? Get an enterprise plant data historian at a fraction of the cost. Check out dataPARC’s PARCserver historian.

User Experience

When customers are asked about dataPARC’s top 3 to 5 benefits, ease-of-use is always near the top of the list. The reduced complexity of the dataPARC system allows even the least “computer-savvy” person to begin building content and gaining value, and results in wide adoption of the tools within an organization. 

Though dataPARC has many features, a new user can learn how to search tags, trend, and navigate within minutes. From there, users quickly learn they can view trend statistics, manage alarm events, export data, and create displays such as X/Y Plots, Histograms, or Paretos – and much more – all from the right-click menu.

dataPARC’s trending tools have long been recognized by customers as the number-one trend solution in the industry. dataPARC’s trend capabilities are faster than competing tools and better suited to practical, everyday use.

No other package allows for a quicker build of a trend matrix, with quick drag & drop from both the tag browser and displays.

dataPARC makes finding and trending tag data super easy.

Many organizations that were set up with the PI Historian and ProcessBook have since chosen to get dataPARC to “sit on top” of their PI historian simply for PARCview; the visualization tools and ease of use speak for themselves.

Diagnostic Analytics

As mentioned earlier, dataPARC’s trending application is considered the best in industry. Not only for its ease of use and quick access to analysis tools but for its speed as well.

Trend

dataPARC uses a deliberate data-speed strategy with multiple components, including an embedded Performance Data Engine (PARCpde), to speed data to the user. The goal is to meet and exceed the user’s “speed of thought.” PARCpde is a foundational part of the entire dataPARC system.

Speed tests comparing dataPARC vs PI and other contemporary historians have shown dataPARC to be anywhere from 10X to 50X faster in delivering large or long-term datasets back to the user. 

Several companies have switched to dataPARC in part because of the data speed.  dataPARC also utilizes an aggregate archive and rollup archive in its architecture which greatly reduces the amount of time wasted when solving problems or investigating opportunities. 

From the trend, users can launch a quick statistics grid or generate a new X/Y Chart or Histogram display. Each chart will pull in the tags from the trend, so users don’t have to search for them in Tag Browser again.

The X/Y plot sets two tags up for comparison and a best fit line can be generated – linear, polynomial, etc. The formula generated from the fit can be pulled into a trend or other display. PI can also generate X/Y plots, but they are created from scratch and no best fit line is generated.
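The fit itself is ordinary curve fitting. A quick NumPy sketch of the idea, with made-up paired samples of two tags, looks like this:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # tag A samples
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # tag B samples, time-aligned with A

coeffs = np.polyfit(x, y, deg=1)           # deg=2, 3, ... for polynomial fits
fit = np.poly1d(coeffs)
print(f"best fit: y = {coeffs[0]:.2f}*x + {coeffs[1]:.2f}")
print("predicted y at x = 6:", fit(6.0))
```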

Excel Add-in

dataPARC’s Excel add-in was built with a high degree of ease-of-use and speed. 

PI and dataPARC both have in-cell functions that can pull data directly into Excel. The dataPARC add-in has multiple other functions.

There is a sheet that can pull multiple tags in the same time range without dealing with formulas. Users can import tag lists from already created dataPARC displays instead of searching for the tags again.

Besides the value gained in legacy Excel add-in tools, dataPARC’s is highlighted by the following:

  • Drag groups of tags/data into Excel from multiple data sources
  • Filter data based on multiple tag values
  • Cross Correlation/R2 matrix generation
  • CUSUM & MSR charting

Additionally, users can display time series-based data from Excel into PARCview trends and displays. This can be used to trend or compare data from outside the company right next to process data.
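For a sense of what the cross-correlation/R² matrix amounts to, here is a small pandas sketch with invented tag columns; it illustrates the concept, not the add-in itself.

```python
import pandas as pd

# Hypothetical tag data pulled into a DataFrame (one column per tag).
df = pd.DataFrame({
    "SteamFlow": [40, 42, 41, 45, 47],
    "ChemicalDose": [5.0, 5.2, 5.1, 5.6, 5.9],
    "Brightness": [82, 83, 82, 85, 86],
})

r = df.corr()        # pairwise correlation coefficients between tags
r2 = r ** 2          # R-squared matrix
print(r2.round(3))
```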

Evaluate the top alternatives to Processbook & PI Vision in our PI Server Data Visualization Tools Buyer’s Guide.

Get the Guide

Operations Management

Real-time operations management is necessary to keep a plant running at peak efficiency and to be able to respond quickly to process excursions that result in unplanned downtime or product loss.

This is facilitated by dataPARC in a variety of ways:

  • Graphical process displays
  • KPI and Lab data dashboards
  • Manual data entry (MDE) tools
  • Automated reporting
  • Process alarms & notifications
  • & more

When comparing dataPARC vs PI, both offer the creation of dynamic, information packed graphical dashboards, but only dataPARC has the Centerline display.

Centerline

Centerline is a powerful monitoring tool unique to dataPARC. It is a real-time display that reports run-based statistics for tags. The runs can be grade- or time-based, and the statistics include time average, standard deviation, CpK, min, max, etc.

Centerline displays data for time periods or runs to ensure process conditions are the same run after run.

The purpose of a centerline display is to help determine the best operational settings for production, and to ensure those settings are being used consistently during production.
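Conceptually, a centerline is a table of per-run statistics laid out side by side for comparison. A rough pandas sketch with invented grade and tag columns:

```python
import pandas as pd

df = pd.DataFrame({
    "grade": ["A", "A", "A", "B", "B", "B"],
    "SteamFlow": [40, 41, 39, 47, 48, 46],
    "ChemicalDose": [5.0, 5.1, 4.9, 5.8, 5.9, 5.7],
})

# Per-run (here, per-grade) statistics for each tag.
stats = df.groupby("grade").agg(["mean", "std", "min", "max"])
print(stats)
```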

Centerline is one of dataPARC’s powerful data analysis tools for which there is no PI equivalent.

Alarms and Notifications

dataPARC’s alarm and notification system can send emails, text notifications or trigger workflows when an alarm is detected or closed. Once an alarm is detected, an alarm event is created. These events can be viewed and acknowledged in a trend, centerline, graphic or alarm list. Users can acknowledge the event by assigning a reason from the reason tree and/or typing a comment to the event. Quick analysis can be done in dataPARC with the Pareto chart to determine the top reasons saved for an alarm or create a tabular report sorted by reason with all comments visible.

Similarly, PI can create event frames and send notifications. Once event frames are detected and a reason assigned, users can see this data as a table in PI Vision, but further analysis or reporting must take place in PI’s Excel add-in, DataLink. dataPARC’s Excel Add-in also has features to pull in alarm event data.

More dataPARC Excel Add-in features were explored in the Excel Add-in section above.

Manual Data Entry (MDE)

dataPARC’s MDE display is quick to configure and allows users to enter and save manual data to the database rather than on a piece of paper or in Excel.

Manually entered data is represented by tags, so it can be used in PARCview trends, dashboards, and displays like any other tag.

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

Calculations

When users don’t have the perfect tag to help manage a process, a calc tag or MDE is often used. dataPARC and PI are both able to perform simple calculations such as adding tags, If/Then statements, or unit conversions.

With PI Vision, PI no longer supports VB scripting. VB scripting opens the door for custom solutions, and dataPARC leverages it for applications such as database reads, file parsing, web service calls, and much more.
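For illustration only, the kinds of calculations a calc tag typically performs look like the following sketch. The tag names and limits are made up, and this is not either product’s scripting syntax.

```python
def calc_total_flow(flow_a_gpm: float, flow_b_gpm: float) -> float:
    return flow_a_gpm + flow_b_gpm                 # simple tag addition

def calc_temp_f(temp_c: float) -> float:
    return temp_c * 9 / 5 + 32                     # unit conversion

def calc_status(pressure_psi: float) -> str:
    return "HIGH" if pressure_psi > 120 else "OK"  # if/then style logic
```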

Predictive Analytics

dataPARC’s PARCmodel offers a degree of predictive analysis with PLS (Partial Least Squares) and PCA (Principal Component Analysis) modeling capabilities.

PLS

The PLS package has been described by one of the world’s top practical modeling engineers as “…bar-none, better than anything I’ve ever seen before.”  In the processing industry, one of the applications for PLS modeling is in building inferential property predictors (IPPs).

Control engineers in operating companies report that a PLS model generation for one IPP can take more than 8 hours to re-model (longer for the initial model) using multiple tools and off-line activity. dataPARC integrates it all into one tool and the re-model effort can be as little as 5 minutes.

This snappy model generation allows multiple solutions to be generated for comparison to find the best option. The speed of remodeling allows for wider application and benefit of PLS.  Practical engineering methods and even process “hunches” can now be backed with a quick validation by a PLS mathematical session in 2 to 5 minutes. 
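For readers who want to see the underlying technique, here is a generic scikit-learn sketch of a PLS-based inferential property predictor on synthetic data. It illustrates the method only, not PARCmodel itself; all names and data are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic data: rows are time-aligned samples, X columns are process tags,
# y is the lab-tested property to be inferred.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # 8 process variables
y = X[:, 0] * 0.7 + X[:, 3] * 0.2 + rng.normal(scale=0.1, size=200)

pls = PLSRegression(n_components=3)
pls.fit(X, y)
print("R^2 on training data:", round(pls.score(X, y), 3))
y_hat = pls.predict(X[:5])                       # inferred property values
```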

dataPARC’s predictive modeling tools

dataPARC delivers huge time savings, a better learning environment, a better collaboration environment, and more useful applications – all of which accelerate value to the company’s key business drivers.

PCA

PCA uses the same modeling advantages that dataPARC’s PLS offers, allowing for easy model generation.  The difference between the two modeling methods is that PLS seeks to model and mimic a single variable using adjacent variables as model inputs.  PCA doesn’t model a single variable but models a whole process. 

The value comes when comparing the current process with the modeled process.  PCA gives the user the ability to know when the current process is off (when compared to the modeled process) and identifies the “offending” process variable(s). 

PCA makes use of two parameters (available to the PLS model as well): DMODX (error from model) and HT2N (Hotelling T2 Normalized – off norm). The PCA model input variables are all graded and staff can see which variable(s) is/are causing the problem.  PCA can be used as an early warning system to help operations see a problem before it happens. 
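Again as a generic illustration of the technique (not PARCmodel), here is a scikit-learn sketch of PCA monitoring with a Hotelling T²-style statistic and a DModX-style residual on synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 10))             # "normal" operation
pca = PCA(n_components=3).fit(X_train)

X_new = rng.normal(size=(20, 10))                # current process data
scores = pca.transform(X_new)

# Hotelling T^2: distance within the model plane, scaled by component variance.
t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)

# DModX-style residual: distance from the model plane (reconstruction error).
residual = X_new - pca.inverse_transform(scores)
dmodx = np.sqrt(np.sum(residual**2, axis=1))

# Flag new samples whose T^2 exceeds the 95th percentile seen in training.
t2_train = np.sum(pca.transform(X_train)**2 / pca.explained_variance_, axis=1)
print("samples flagged by T^2:", np.where(t2 > np.percentile(t2_train, 95))[0])
```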

PARCmodel is separately licensed but incorporated into PARCview and easily accessed in the trend right click menu. PI does not have similar analytics tools.

Looking to replace ProcessBook? See why PARCview is regarded as the #1 ProcessBook alternative.

Customer-Centric Development & Support

At dataPARC, above everything is the customer and their very real, timely, practical needs. dataPARC’s strategy involves a high attentiveness to the customer’s needs and solving problems quickly.

dataPARC employs many SMEs serving in key process engineering support roles for operating companies in the industry. Over the years, dataPARC’s user features and overall system architecture have been shaped by these SMEs and customers. dataPARC is built by end users, for end users.

At dataPARC we sell more than software; we sell services to help build trends, graphics, and other displays to get your system off the ground and running. Our engineers and support staff are available to help implement new projects and offer continual support.

With PI, to get the same displays created, customers would have to outsource to a third party. dataPARC is a one-stop shop.

Conclusion

dataPARC and PI have a lot in common; however, dataPARC has the upper hand where it counts – user experience, speed of data, and cost. dataPARC is simple, fast, and effective.

The advantages of dataPARC over PI continue to grow with every new feature and update – features that are driven by users and customers.


Download the Guide

Discover top alternatives to PI’s ProcessBook and PI Vision analytics toolkits.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

“Where do we start in digitizing our manufacturing operations?” one may ask. While there is no easy answer, the solution lies in starting not from the top down, but from the ground up, focusing on the digital transformation roles and responsibilities of the key people in your plant.

Digital transformation in process manufacturing is not only a priority but an essential step forward as the world encounters and adapts to a more digital reality. To put it simply, if you do not adjust your processes to embrace digital change, your competitors will outproduce, outshine, and outsell you (and may already have).

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Transformation Teams

Digital change has been slow until now, though it has been steady. PLC and DCS systems were manufacturing’s digital beginnings, and thankfully there is much more available today to further digitize operations: tools that minimize downtime, improve the process, enhance data management, data sharing, and reporting, and increase profitability. A truly connected enterprise is adaptable and agile, allowing it to keep abreast of changes in the operating environment.

Plant roles play an essential part in the digitization of process manufacturing, and everyone can contribute to a seamless digital transformation within your facility. Each role embraces digital change and transforms the process from the inside out. By focusing on these roles and the duties and responsibilities within each of them, plant digitization can turn the operation into a well-oiled machine that every role both depends on and benefits from.

“Where do we start in digitizing our operations?” one may ask. While there is no easy answer, the solution does lie in starting not from the top down, but from the ground up, with each role’s responsibilities and contributions enhancing the others, adding to and building on the next, for a comprehensive digital enterprise and solid, data-based reporting.

Integrating sources of plant data is a good place to start, along with the processes themselves becoming digitized for maximum outcomes. In this article we will focus on the various roles in the plant, their responsibilities and how each one can contribute to digital transformation.

Digital Transformation Roles & Responsibilities

The Operator

The Operator’s Role in Digital Transformation

Checking process conditions (temperatures, pressures, line speed, etc.) is an essential task for an operator. These process conditions could have readings directly on the machine, with valves or buttons to adjust as needed. With more and more digital transformation in manufacturing, these process variables are being set up with PLCs to create digital tags. A tag can be read through an OPC DA server and visualized throughout the plant on computers in offices, control rooms, and meeting rooms. Tags can also be set up with a DCS to control the process from the control room rather than having to walk the floor to adjust speeds or valves.

The process variables need to be monitored to produce quality products. There are ranges for each process variable and additive when making a product; if these get out of range, the final product could fall outside the final specification. Limits can be drawn on gauges, written into an SOP (Standard Operating Procedure), or set up as alarm limits. These alarms can appear either on the DCS or on a data visualization screen to alert the operator that a variable needs attention.

To consistently make quality product, operators must communicate with the lab tech to verify the product is within spec. This communication between the lab and operators has traditionally been done verbally – walkie-talkies, phone calls, etc. To digitize this process, the lab tech enters tested values into a data visualization program or a laboratory information management system (LIMS) database. These values can be displayed on a dashboard with the specifications next to them. The operator can then see when tested values are out of spec and adjust the process, or see when values are trending up or down and adjust the process to keep the product within specification before bad product is made.
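As a simple illustration of the kind of check a dashboard or LIMS applies behind the scenes (the spec limits and test names here are invented):

```python
# Hypothetical product specifications: test name -> (low limit, high limit).
SPECS = {
    "brightness": (80.0, 90.0),
    "moisture":   (4.0, 6.0),
}

def check_result(test: str, value: float) -> str:
    low, high = SPECS[test]
    if value < low or value > high:
        return f"{test} = {value} OUT OF SPEC ({low}-{high})"
    return f"{test} = {value} in spec"

print(check_result("brightness", 78.5))
```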

Operators are also responsible for keeping track of a product and lot being produced. This can be done manually with pen and paper or entered digitally into a database.

At the end of the shift, operators need to pass key information to the next shift. This can be done with a hand-off meeting to discuss verbally, a physical notebook to log key points, or a digitized version of a notebook. With digitized reports there is the opportunity to relay information to multiple control rooms or company locations at once.

The Lab Technician

The Lab Technician’s Role in Digital Transformation

Lab quality testing is an essential part of process manufacturing. Thorough testing of each batch’s quality allows production of the scheduled product to proceed. Because other roles such as the process engineer and operator rely on the outcomes of lab testing, getting lab quality data seamlessly disseminated is essential to smooth operations.

Testing multiple variables of the product, recording the results, and comparing the finished product to specifications are among the lab technician’s duties. If the lab tech is entering data into a digital system, limits can typically be saved for different products, speeding things up.

The lab tech might manually test the product and enter the results into a program, and the LIMS system would flag the result if it were out of spec. Going further, a lab tech can set up the test, a machine conducts it, and the result is fed to the LIMS system, where the value is flagged if it is out of spec. Performing these tasks digitally is a tremendous saver of time and process effort.

In summary, lab techs are ultimately responsible for testing the final product and passing or failing it to be sold. Digitizing these tests and the corresponding data streamlines and accelerates the entire lab test process.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

The Process Engineer

The Process Engineer’s Role in Digital Transformation

Process engineers, sometimes known by other titles such as chemical engineers, have a range of duties including product development, process optimization, documentation of SOPs, setting up automatic controls/PLCs, ensuring equipment reliability, and communicating with superintendents, operators, lab techs, maintenance managers, and customers.

Process engineers monitor the entire manufacturing process on a daily, weekly, and monthly basis to identify improvement opportunities and evaluate the condition of the assets and processes.

Most sites have an existing system for maintenance requests. A physical system may exist where staff hand-write the issue, area, and other important information and hand-deliver it to the maintenance department. Alternatively, there could be a system set up to email the maintenance department with pictures attached, or a program may be used to submit maintenance requests. Such a program would provide a unique ticket number, automated status updates, and other key information, and would allow engineers or the maintenance department to see history and identify repetitive issues, such as a part needing replacement. Digitizing maintenance can help create a preventative maintenance schedule, so a part is replaced before it stops performing and causes sub-par product quality.

Another way for engineers to monitor the process is through data visualization. When data is stored, the history can be viewed, and users can identify irregularities, trends, and cycles in the process to help identify root cause when upsets occur. Engineers might set up their own alarms, separate from operator alarms, to keep track of events and determine if an optimization project is possible.

Process optimization and product development are important tasks for process engineers. Engineers may develop and conduct trials to continually optimize the process and develop new products. They often use the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) method to do this. The Define step is typically completed by a stakeholder, such as a superintendent or plant manager. Once the project is defined, the engineer moves into the Measure step.

The Measure step can take many forms: physically measuring, counting, or documenting a process. Collecting the necessary data can be time-consuming. With more of the data digitized, much of the collection is already done.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Engineers need to collect and organize data in order to analyze it. Once the data is collected, it can be put into Excel, Minitab, or other programs to be analyzed. By doing comparisons and statistical analysis, with the help of process knowledge, an improvement plan can be created.

Engineers will work with operators and lab techs to carry out the improvement plan. Typically, the plans include information that the operators and lab techs have to record and give back to the engineer to determine whether an improvement was made. The plans can be printed off and handed to those involved, and the necessary data collected on sheets of paper.

If a program, graphic, or database is being used, the engineer can create the improvement plan within that program, and the operator or lab tech can enter the necessary values directly, making the data accessible to the engineer instantly. After the project is complete and an improvement has been made, an SOP is written and saved.

In this role, the engineer needs to communicate the change to all necessary personnel. The SOP could be saved locally on each computer, in a shared file, on SharePoint, or as a link within a program that has versioning so users can go back and see what changes were made and when. To alert others of the changes, an email can be sent out to supervisors to communicate to their shifts, or, if a digital notebook is available, a message can be sent to the necessary areas with a link to the newly updated SOP.

As mentioned above, engineers can be responsible for writing and maintaining SOPs. SOPs can be stored in binders in the control room, saved on control room computers, or a shared folder. There are also programs that can save versions of documents so users can see what changed and when. Operators and lab techs would then use the SOPs when performing a task or testing. It is important for operators to be notified of changes made to the SOP. This could be the engineer sending out an email, or a program with a preset list sending updates to emails. Engineers could also have a notification set up on the operator’s computer.

The Plant Manager

The Plant Manager’s Role in Digital Transformation

Plant managers wear many hats and the hats they wear continue to multiply as plants face complexities and pressure to produce more with increased profitability.

Hiring good people – the key to running a digital forward organization is staffing with people in mind. Good, productive people run plants with data, not hunches or best guesses. They make data driven decisions that are the best for the organization and identify root causes through careful anomaly detection and analysis.

Good leaders know that to truly digitize operations at a plant you must start from the bottom and that every role is an important component to the whole and every person’s contribution important.

Ron Baldus, CTO at dataPARC, advises that “clean data” is the key to successful digital operations. What exactly does clean data mean? Clean data is pure, data-driven data – one version of the truth – not hunch-driven data. With clean data, plant managers and those who work for them can continue to make data-driven, profit-driven decisions. A good data visualization software that connects all data sources is a good place to start. With this connected software, extensive reports pulling on many data sources can be run to give the plant manager a key report with the important information visible. If there is a problem in operations, this reporting allows the plant manager to identify the problem and task engineers and operators with getting to the source and making the necessary adjustments, all based on fact rather than best guesses.

Plant managers know that there are many important moving parts to a plant operation and getting reliable data is the lifeblood of a successful, profitable operation. The more digital the plant becomes, the cleaner data flows to all departments and roles and allows troubleshooting, reporting, and forecasting to be more and more seamless.

Another advantage of digitization at the plant manager level is the transfer of skill, information, and expertise at the subject matter expert (SME) level. Many SMEs are getting close to retirement, and with them a wealth of information, experience, and methodology is at risk of being lost. Through the digitization of reports and operations, these methods can be preserved and passed on to the next person assuming the role and responsibility, whether that is an operator, an engineer, or another essential role.

Looking Forward

Whether it is the operator, the engineer, the lab tech, or the plant manager, all digital transformation roles and responsibilities in manufacturing contribute to the transformation of the plant. From the bottom up, with effective communication and consistent data, downtime can be minimized, golden runs made more common, and seamless operations a daily reality.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Integrating manufacturing data in a plant is necessary for many reasons. Among the most important is getting relevant data to various departments quickly. In doing so, downtime is reduced, anomalies are identified and corrected, and quality is improved.

So often, integrations are delayed due to fears of losing data quality during integration, or simply due to the difficulty of finding the time in a 24/7 environment. There are pros and cons to each integration type. In this article we will walk you through the different integrations and what to look out for, along with tips and best practices.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Integrating Historian & ERP Data

Enterprise Resource Planning (ERP) is software used by accounting, procurement, and other groups to track orders, supply chain logistics and accounting data. By adding historian data, ERP systems have a fuller picture of the comprehensive plant operations.


Integrating your historian and ERP data can provide great insight into which processes are affecting quality.

ERP users gain access to more information about finished goods, such as the exact time of any major production step or whether there was an issue with production. For instance, if the texture of newsprint is slippery and not up to spec, and because of that cannot be cut properly on the news producer’s rollers, the specific lot can be identified, and finding out which lot produced poor-quality paper is no longer a roadblock.

In a nutshell, the Historian-to-ERP integration means departments outside of production get all the data they need without engaging another resource. The challenges include a time-consuming integration in which erroneous values can have a wide-ranging impact, so double-checking values is essential.

Integrating Historian & MES Data

Connecting a historian to an MES (Manufacturing Execution System) expands the capabilities of the MES. Manufacturing execution systems are computerized systems used in manufacturing to track and document the transformation of raw materials into finished goods – obviously an essential component of manufacturing data capture. The historian provides a historical log of all production data rather than only current or near-past values. Being able to pull large amounts of historical data along with data from an MES when needed allows for projections that are not possible without this long-term perspective and additional data.

A relevant example of an MES-to-historian benefit is an ethanol plant that would like to examine seasonal (winter vs. summer) variability in fermentation rates. The historian has all of this data, and the MES allows the user to pull out data only for the relevant times.


Integrating MES data into a historian provides access to years and years of data and allows for long-term analysis.
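A rough pandas sketch of the seasonal comparison described above, assuming the combined historian/MES data has been exported to a CSV file; the file and column names are invented for illustration.

```python
import pandas as pd

df = pd.read_csv("fermentation_rates.csv", parse_dates=["timestamp"])

# Label each sample by season, then compare fermentation-rate statistics.
df["season"] = df["timestamp"].dt.month.map(
    lambda m: "winter" if m in (12, 1, 2) else
              "summer" if m in (6, 7, 8) else "other")

print(df.groupby("season")["fermentation_rate"].describe())
```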

Using product definitions from the MES together with the comprehensive history of production runs for a given product line or product type – without manually filtering all historical data – is key to fast troubleshooting with this integration type. Historian-to-MES integrations help reduce waste and decrease the time it takes to solve an issue. Like the Historian-to-ERP solution, the Historian-to-MES integration takes significant work and resources, but the benefits are immediately evident and realized.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Integrating LIMS & ERP Data

A Laboratory Information Management System (LIMS) contains all testing and quality information from a plant’s testing labs. Federal regulations and standards often dictate a test’s values or results, on which a batch’s success and quality ultimately depend. Testing is done at various stages of production, including the final and most important stage. Certificates of analysis are common documents that ensure the safety and quality of tested batches. LIMS-to-ERP integration is especially important for the food and beverage industry, which depends on testing to ensure its products are safe for human consumption.


By integrating LIMS and ERP data it’s easy to identify a specific out-of-spec batch or product run for root cause analysis.

Batch quality data from LIMS systems allows ERP users – whether in accounting or procurement – to build documents and reports that share data certifying the quality of shipped product. This integrated data also gives customer reps immediate access to data about shipped product. The LIMS-to-ERP integration is very important, as many LIMS departments still rely on a paper trail, which can be a tremendous hold-up to production. As with the Historian-to-ERP integration, the LIMS-to-ERP integration must have accurate data to provide site-wide value, so double-checking is necessary.

Integrating LIMS & Historian Data

Just like historical process data from assets, testing data is very useful when troubleshooting a production issue. The LIMS system, as explained earlier, stores all of the testing data from the lab. Sending LIMS data to the historian gives users a greater understanding of the production process and lab values, providing a fuller picture and better analysis of the issue.


Integrating LIMS and historian data is one of the most effective ways to analyze how a process affects product quality.

For example, when paper brightness is out of spec, lab data can shed light on, and bring attention to, the part of the process that needs adjustment. Alerts within the historian can be set up, giving engineers more time to adjust the process to meet quality. Past testing values are useful when comparing production runs and bring awareness to patterns that production data alone may not reveal. As with any integration, LIMS-to-historian requires planning, an engaged team, and milestones to check in on the progress and success of the integration.
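As a generic illustration of joining the two data sets in time (column and file names are invented), pandas’ merge_asof can pair each lab sample with the most recent process reading so quality results and process conditions line up:

```python
import pandas as pd

# Hypothetical exports: continuous process data from the historian and
# periodic lab tests from LIMS.
process = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])
lab = pd.read_csv("lims_export.csv", parse_dates=["sample_time"])

merged = pd.merge_asof(
    lab.sort_values("sample_time"),
    process.sort_values("timestamp"),
    left_on="sample_time",
    right_on="timestamp",
    direction="backward",   # take the latest process reading before each sample
)
print(merged[["sample_time", "brightness", "steam_flow"]].head())
```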

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Integrating CMMS (Computerized Maintenance Management Systems) & ERP Data

Maintenance is a large and necessary part of plant operations. Maintenance records and work order information are often stored in a Computerized Maintenance Management System (CMMS). The CMMS holds comprehensive information that, on its own, cannot be accessed by departments that may need more detail about specific maintenance work.

By connecting the CMMS to an ERP system, ERP users will have access to more data about the finished product. Users can check to see if there were any maintenance issues around the time of the production. Facilities with lengthy scheduled shutdowns like an oil refinery will need to plan out how much gasoline or other fuel to keep in storage to meet their customer obligations.


By integrating maintenance and ERP data we’re able to investigate an out-of-spec product run and note that there was a maintenance event that likely caused the issue.

Knowing about shutdowns, both planned and unplanned, allows the user to better plan customer orders and shipping. Anticipating the schedule for planned repairs is also useful for financial planning and forecasting. Users with access to historical work order information can better understand any issues that might come up and gain greater insight into the physical repair and the associated costs and impact. Integrating these systems can prove to be among the hardest integrations simply because the data types can vary so much.

The key to a successful CMMS-to-ERP integration is getting the necessary leadership on board and having a detailed roadmap and plan with regular team check-ins so that obstacles can be addressed immediately.

Integrating Field Data Capture System & Historian Data

The remote nature of field data capture systems means that this data is often siloed and very difficult and slow to access. Field data is just that: captured in the field, and it often must be pieced together from manual entries, frequently on paper. Various roles collect this data, and it must be utilized collectively to have any value. Field data types such as temperature, quality, and speed must be consistent when entered, and even more so when moved into a historian.

Though often cumbersome to collect, compile, and enter, field data in a historian can be enormously empowering to an engineer. For example, oil wells in the Canadian oil sands can be 50 to 200 miles from the nearest human operator. The more the operator knows about these wells, the less time they spend traveling to check on each well.


Integrating field data into a historian provides reliable access to long-term data from previously siloed wells.

Connecting field data to a historian also increases the amount of data an engineer can use during troubleshooting. The data in the field is vital to reducing downtime and managing product quality. Sharing that data with the historian gives the data a broader audience where comparison and analysis can be made, resulting in less downtime and greater productivity.

Looking Forward

When integrating manufacturing data, the overriding theme and result is digital data empowerment. When important plant data can flow seamlessly from one person, system, or department to another, better decisions can be made through better analysis, which ultimately leads to better operations, less downtime, and greater profitability. It is important to understand the full range of data management and connectivity options available and the pros and cons of each. Various brands of each solution are on today’s market. Ideally, all sources of plant data can be connected and disseminated effectively for maximum efficiency and profitability.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF