
Both established operational data historians and newer open-source platforms continue to evolve and add new value to business, but the significant domain expertise now embedded within data historian platforms should not be overlooked.

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (operational, or data historians), and newer open source time-series databases.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

Data Historian vs. Time Series Database

Functionally, at a high level, both classes of time-series databases perform the same task of capturing and serving up machine and operational data. The differences revolve around types of data, features, capabilities, and relative ease of use.

Time-series databases and data historians, like dataPARC’s PARCserver Historian, capture and return time series data for trending and analysis.

Benefits of a Data Historian

Most established data historian solutions can be integrated into operations relatively quickly. The industrial world’s versions of commercial off-the-shelf (COTS) software, such as established data historian platforms, are designed to make it easier to access, store, and share real-time operational data securely within a company or across an ecosystem.

In the past, industrial data was consumed primarily by engineers and maintenance crews. Today it is also used by IT departments, as companies accelerate their IT/OT convergence initiatives, as well as by financial departments, insurance companies, downstream and upstream suppliers, equipment providers selling add-on monitoring services, and others. The associated security mechanisms were already relatively sophisticated, and they continue to evolve to become even more secure.

Another major strength of established data historians is that they were purpose-built and have evolved to be able to efficiently store and manage time-series data from industrial operations. As a result, they are better equipped to optimize production, reduce energy consumption, implement predictive maintenance strategies to prevent unscheduled downtime, and enhance safety. The shift from using the term “data historian” to “data infrastructure” is intended to convey the value of compatibility and ease-of-use.

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

What about Time Series Databases?

In contrast, flexibility and a lower upfront purchase cost are the strong suits for the newer open source products. Not surprisingly, these newer tools were initially adopted by financial companies (which often have sophisticated in-house development teams) or for specific projects where scalability, ease-of-use, and the ability to handle real-time data are not as critical.

Because these newer systems were somewhat less proven in terms of performance, security, and applications, users have tended to experiment with them for tasks where safety, lost production, or quality are less critical.

While some of the newer open source time series databases are starting to build the kind of data management capabilities already typically available in a mature operational historian, they are not likely to completely replace operational data infrastructures in the foreseeable future.

Industrial organizations should use caution before leaping into newer open source technologies. They should carefully evaluate the potential consequences in terms of development time for applications, security, costs to maintain and update, and the ability to align, integrate, or co-exist with other technologies. It is also important to understand the operational processes, domain expertise, and applications that are already built into an established operational data infrastructure.

Why use a Data Historian?

Typical connection management and config area from an enterprise data historian.

When choosing between data historians and open source time-series databases, many issues need to be considered and carefully evaluated within a company’s overall digital transformation process. These include type of data, speed of data, industry- and application-specific requirements, legacy systems, and potential compatibility with newly emerging technologies.

According to the process industry consulting organization ARC Advisory Group, modern data historians and data infrastructures will be key enablers for the digital transformation of industry. Industrial organizations should give serious consideration to investing in modern operational historians and data platforms designed for industrial processes.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

11 Things to Consider When Selecting a Data Historian for Manufacturing Operations:


1. Data Quality

The ability to ingest, cleanse, and validate data. For example, are you really obtaining a true average? If someone calibrates a sensor, will the average include the calibration data? If an operator or maintenance worker puts a controller in manual, has a failed instrument, or is overriding alarms, does the historian or database still record the data, and do those values end up in the average?
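
To make this concrete, here is a minimal pandas sketch of the idea, using hypothetical sample values and made-up "quality" and "mode" flags rather than any particular historian's schema, showing how excluding calibration and manual-mode samples changes a computed average.

```python
import pandas as pd

# Hypothetical historian export: each sample carries a quality flag and the
# controller mode at the time it was recorded (illustrative values only).
samples = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 00:00", "2024-01-01 00:01",
        "2024-01-01 00:02", "2024-01-01 00:03",
    ]),
    "value":   [101.2, 250.0, 99.8, 100.5],   # 250.0 recorded during a calibration
    "quality": ["good", "calibration", "good", "good"],
    "mode":    ["auto", "manual", "auto", "auto"],
})

# A naive average includes the calibration spike.
naive_avg = samples["value"].mean()

# A filtered average keeps only good-quality samples taken in automatic mode.
valid = samples[(samples["quality"] == "good") & (samples["mode"] == "auto")]
filtered_avg = valid["value"].mean()

print(f"naive: {naive_avg:.1f}, filtered: {filtered_avg:.1f}")
```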

2. Contextualized Data

Asset and process models, built on years of experience integrating, storing, and accessing industrial process data and its metadata, make it much easier to contextualize data. A key attribute is the ability to combine different data types from different data sources. Can the historian combine data from spreadsheets and various databases or other sources, precisely synchronize their time stamps, and make sense of the result?
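
As a rough illustration of what such contextualization involves, the sketch below (with made-up lab and process values, not any specific historian's API) uses pandas merge_asof to attach sparse, spreadsheet-style lab results to the nearest earlier historian sample by time stamp.

```python
import pandas as pd

# Hypothetical sources: regular process samples and sparse lab results
# entered at irregular times from a spreadsheet.
process = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01 08:00", periods=6, freq="10min"),
    "reactor_temp": [180.1, 181.0, 182.4, 181.7, 180.9, 180.2],
})
lab = pd.DataFrame({
    "sample_time": pd.to_datetime(["2024-01-01 08:12", "2024-01-01 08:47"]),
    "viscosity": [34.2, 35.1],
})

# Align each lab result with the nearest earlier process sample so both
# can be trended and analyzed together.
combined = pd.merge_asof(
    lab.sort_values("sample_time"),
    process.sort_values("timestamp"),
    left_on="sample_time",
    right_on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("10min"),
)
print(combined)
```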

3. High Frequency/High Volume Data

It’s also important to be able to manage high-frequency, high-volume data based on the process requirements, and expand and scale as needed. Increasingly, this includes edge and cloud capabilities.

4. Real-Time Accessibility

Data must be accessible in real time so the information can be used immediately to run the process better or to prevent abnormal behavior. This alone can bring enormous insight and value to organizations.

5. Data Compression

Deep compression based on specialized algorithms that shrink stored data while still enabling users to reproduce a trend faithfully when needed.
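
As a conceptual illustration only, the sketch below implements a simple deadband filter that archives a point only when it moves meaningfully away from the last stored value; production historians use more sophisticated schemes (swinging-door compression, for example), so this is not any vendor's actual algorithm.

```python
def deadband_compress(samples, deviation):
    """Keep a sample only when it differs from the last archived value by
    more than `deviation`; a simplified stand-in for historian compression."""
    archived = []
    last_value = None
    for timestamp, value in samples:
        if last_value is None or abs(value - last_value) > deviation:
            archived.append((timestamp, value))
            last_value = value
    return archived

raw = [(0, 100.0), (1, 100.1), (2, 100.2), (3, 103.0), (4, 103.1), (5, 99.5)]
print(deadband_compress(raw, deviation=0.5))
# -> [(0, 100.0), (3, 103.0), (5, 99.5)]
```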

6. Sequence of Events

SOE capability enables users to reconstruct precisely what happened in operations or in a production process.

7. Statistical Analytics

Built-in analytics capabilities, from statistical spreadsheet-like calculations to more complex regression analysis. Additionally, time-series systems should be able to stream data to third-party applications for advanced analytics, machine learning (ML), or artificial intelligence (AI).
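
For instance, the kind of spreadsheet-like regression such built-in analytics might perform can be sketched as follows; the feed-rate and steam-usage figures are invented for illustration.

```python
import numpy as np

# Hypothetical hourly averages pulled from a historian.
feed_rate   = np.array([50.0, 55.0, 60.0, 65.0, 70.0, 75.0])   # t/h
steam_usage = np.array([12.1, 13.0, 14.2, 15.1, 16.3, 17.0])   # t/h

# Ordinary least-squares fit: steam ≈ slope * feed + intercept.
slope, intercept = np.polyfit(feed_rate, steam_usage, 1)
predicted = slope * feed_rate + intercept
residual = steam_usage - predicted
r_squared = 1 - np.sum(residual**2) / np.sum((steam_usage - steam_usage.mean())**2)

print(f"steam ~ {slope:.3f} * feed + {intercept:.2f}  (R^2 = {r_squared:.3f})")
```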

8. Visualization

The ability to easily design and customize digital dashboards that provide situational awareness, enabling workers to quickly visualize and understand what is going on.

9. Connectability

Ability to connect to data sources from operational and plant equipment, instruments, etc. While often time-consuming to build, special connectors can help. OPC is a good standard but may not work for all applications.

10. Time Stamp Synchronization

Ability to synchronize time stamps based on the time the instrument is read wherever the data is stored – on-premises, in the cloud, etc. These time stamps align with the data and metadata associated with the application.
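
A small sketch of the idea, assuming one hypothetical source logs in local plant time and another in UTC: normalizing both to a common time base keeps the instrument-read time stamps aligned regardless of where the data is stored.

```python
import pandas as pd

# Hypothetical: an on-premises server logs in local plant time while a cloud
# store keeps UTC; converting both to UTC aligns the instrument-read times.
plant_local = pd.DatetimeIndex(
    ["2024-01-01 06:00:00", "2024-01-01 06:00:01"]
).tz_localize("America/Chicago")
cloud_utc = pd.DatetimeIndex(
    ["2024-01-01 12:00:00", "2024-01-01 12:00:01"]
).tz_localize("UTC")

aligned = plant_local.tz_convert("UTC")
print(aligned.equals(cloud_utc))   # True: both now reference the same instants
```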

11. Partner Ecosphere

A strong partner ecosystem makes it easy to layer purpose-built vertical applications onto the infrastructure for added value.

Looking Ahead

Rather than compete head on, it’s likely that the established historian/data infrastructures and open-source time-series databases will continue to co-exist in the coming years. As the open-source time series database companies progressively add distinguishing features to their products over time, it will be interesting to observe whether they lose some of their open-source characteristics. To a certain extent, we previously saw this dynamic play out in the Linux world.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


Manufacturers use a variety of tools and systems every day to manage their process from start to finish. It is critical that these systems provide a “single pane of glass” and a “single version of truth” along the way, meaning data can be viewed from any device, location, or system and stays synchronized across all platforms. Manufacturing Operations Management systems make this possible.

Real-time manufacturing operations management and industrial analytics tools

Check out PARCview

What is Manufacturing Operations Management?

Manufacturing Operations Management (MOM) is a form of LEAN manufacturing in which a collection of systems is used to manage a process from start to finish. The key to MOM is ensuring data is consistent across all systems in use, from scheduling and production through shipment and delivery.

MOM includes software tools designed for the management of people, business processes, technology, and capital assets to meet customer demand while creating shareholder value. Tying in LEAN manufacturing principles, processes must be performed efficiently and resources managed productively. These are the prerequisites for successful operations management.

Key Applications of Manufacturing Operations Management


Supply chain & resource management

MOM systems include tools for planning, procuring, and receiving raw materials and components. They address obtaining, storing, and moving the necessary materials and components in a timely manner and at suitable quality to support efficient production, which is especially critical in these times of supply chain disruption.

To deal with today’s dynamic business environment, with challenges ranging from pandemics and shutdowns to geopolitical conflicts and supply chain disruptions, organizations need to be sustainable and operationally resilient, conform to ESG goals, deploy the latest cybersecurity tools, and connect their workforces from any location.

Process & production management

Once the resources are gathered, MOM tools are needed for implementing product designs to specification, developing the formulations or recipes for the desired products, and manufacturing products that conform to specifications and comply with regulations.

Organizations must monitor and adjust their processes quickly and automatically so they can efficiently evaluate the situation when the inevitable glitch occurs. This is a prime opportunity for digital transformation through MOM systems.

Distribution & customer satisfaction management

The final stage of MOM relates to the distribution to the customers, particularly as it relates to sequencing and in-house logistics, as well as supporting products through their end-of-life cycles.

Organizations must react in real-time to changing market conditions and customer expectations. They will have to innovate with new business processes that reach throughout the organization, into the design and supply chain.

Driving innovation & transformation

Successfully innovating at this level involves managing people, processes, systems, and information. When disruptive technologies are in the mix, the first challenge is often tied up in the interplay of people and technology.

Only when the people involved understand what the new MOM technologies are capable of, and have data visualization tools and real-time manufacturing analytics software to convert that data into actionable information, can they begin to take steps toward achieving that innovation.

One output of manufacturing operations management systems is a production dashboard, like this one, built with dataPARC’s PARCview, which creates a shared view of current operating conditions and critical KPIs.

Manufacturing Operations Management Systems

Today’s MOM systems can play a role in achieving the next levels of operations performance because they marshal many or all the needed services in one place and can provide a development and runtime environment for small or large applications.

Common MOM Tools

In addition to leveraging the latest AI, ML, AR/VR, APM, digital twin, edge, and Cloud technologies, MOM systems often consist of one or more of the following:

  • Manufacturing Execution Systems (MES)
  • Enterprise Asset Management (EAM)
  • Human-Machine Interface (HMI)
  • Laboratory Information Systems (LIMS)
  • Plant Asset Management (PAM)
  • Product Lifecycle Management (PLM)
  • Real-time Process Optimization (RPO)
  • Warehouse Management Systems (WMS)

MOM systems integrate with business systems, engineering systems, and maintenance systems both within and across multiple plants and enterprises.

An example of a multi-site operation with unique manufacturing applications at each site. Some tools, like dataPARC’s PARCview, enable manufacturers to integrate data across sites for more effective manufacturing operations management.

Supply Chain Management (SCM), Supplier Resource Management (SRM), and Transportation Management Systems (TMS) are commonly used to manage the supply chain.

Plant automation systems, such as Distributed Control Systems (DCS) and Programmable Logic Controllers/Programmable Automation Controllers (PLCs/PACs), are key technologies driving manufacturing production.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Data Visualization and Real-Time Analytics for MOM

Maybe the most important manufacturing operations management tools for managing production are the data visualization and real-time manufacturing analytics software platforms, like dataPARC, which provide integrated operations intelligence and time-series data historian software.

These MOM tools focus on data connectivity, real-time plant performance, and visualization + analytics to empower plant personnel and support their decision-making process.

Benefits of MOM Visualization & Analytics Tools


Eliminate Data Silos

Most real-time manufacturing operations management analytics tools offer the ability to connect to both manufacturing and operations data. Data from traditionally isolated data silos, such as lab quality data, or ERP inventory data, can be pulled in and presented side-by-side for analysis in a single display.

Establish a single source of truth

MOM analytics tools offering visualization plus integration capabilities enable manufacturers to create a “single version of truth” which everyone from management to the plant floor can use to understand the true operating conditions at a plant.

By combining multiple sites and multiple data sources into a single view, users gain perspective and intelligence from both structured and unstructured operational and business data.

Produce common KPI dashboards

By measuring metrics and KPIs, such as production output, yields, material costs, quality, and downtime, users at multiple levels and roles can make better decisions to help improve production efficiencies and business performance.
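
As one concrete example, OEE (a KPI mentioned elsewhere in this article) rolls availability, performance, and quality into a single figure. The sketch below uses the standard textbook definition with entirely hypothetical shift numbers.

```python
# Illustrative OEE calculation: OEE = availability * performance * quality.
# All figures below are hypothetical.
planned_time_min = 480      # one shift
downtime_min     = 45
ideal_rate_upm   = 60       # units per minute at rated speed
total_units      = 22_000
good_units       = 21_300

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance  = total_units / (run_time_min * ideal_rate_upm)
quality      = good_units / total_units
oee          = availability * performance * quality

print(f"availability={availability:.1%} performance={performance:.1%} "
      f"quality={quality:.1%} OEE={oee:.1%}")
```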

Real-time manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites or from multiple manufacturing process areas and display them in a common dashboard.

Manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites for real-time production monitoring

Facilitate data-driven decision-making

Without operations intelligence provided by manufacturing operations management systems, users are often unable to properly understand how their decisions affect the process. MOM analytics software can display data sourced in the business systems for direct access to cost, quality control, and inventory data to support better business decisions.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Final Thoughts

The next generation of MOM systems is here. The economics of steady-state operations have been replaced with a dynamic, volatile, disruptive economic environment in which adapting to changing supply and demand, along with issues such as pandemics, shutdowns and geo-political conflicts are the norm. Tighter production specifications, greater economic pressures, and the need to maintain supply chain visibility in real-time, be sustainable and operationally resilient, plus more stringent process safety measures, cybersecurity standards, ESG goals and environmental regulations further challenge this dynamic environment. Managing these challenges requires more agile, less hierarchical structures; highly collaborative processes; reliable instrumentation; high availability of automation assets; excellent data; efficient information and real-time decision-support systems; accurate and predictive models; and precise control. Uncertainty and risks must be well understood and well managed in all aspects of the decision-making process.

Perhaps most importantly, everyone must have a clear understanding of the business objectives and progress toward those objectives. Increasingly, effective manufacturing operations management requires real-time decisions based on a solid understanding of what is happening, and the possibilities over the entire operations cycle. Organizations pursuing Digital Transformation should consider focusing on MOM systems and not just transformative new technologies to drive operations performance to new levels. This means utilizing software tools, such as manufacturing data integration, visualization, and real-time manufacturing analytics software that gathers a user’s manufacturing data in one single pane of glass view and establishes a single source of the truth.

This article was contributed by Craig Resnick. Craig is a primary analyst at ARC Advisory Group. Craig’s focus areas include production management, OEE, HMI software, automation platforms, and embedded systems.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


The most important differences between relational databases, time-series databases, and data lakes and other data sources are the ability to handle time-stamped process data and ensure data integrity.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

The Manufacturing Database Battle

These differences matter because the primary job of the data management technology is to:

  • Accurately capture a broad array of data streams
  • Deal with very fast process data
  • Align time stamps
  • Ensure the quality and integrity of the data
  • Ensure cybersecurity
  • Serve up these data streams in a coherent, contextualized way for operational personnel

Time-Series Databases

Digital technologies and sensor-based data are fueling everything from advanced analytics, artificial intelligence and machine learning to augmented and virtual reality models. Sensor-based data is not easily handled by traditional relational databases. As a result, time-series databases have been on the rise and, according to ARC Advisory Group research, this market is growing much more rapidly than traditional relational databases.

While relational databases are designed to structure data in rows and columns, a time-series database or infrastructure aligns sensor data with time as the primary index.
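
A small pandas sketch (hypothetical tables, not any particular product) makes the contrast visible: a relational-style table is keyed by an arbitrary ID, while a time-series-style table uses the time stamp itself as the index, so time slicing and time-based aggregation become natural operations.

```python
import pandas as pd

# Relational-style rows: each record is an independent fact keyed by an ID.
orders = pd.DataFrame({
    "order_id": [1001, 1002],
    "customer": ["Acme", "Borden"],
    "total":    [4800.0, 1250.0],
}).set_index("order_id")

# Time-series-style storage: the timestamp is the primary index.
readings = pd.DataFrame(
    {"boiler_temp": [221.4, 222.0, 223.1, 222.7]},
    index=pd.date_range("2024-01-01 06:00", periods=4, freq="1s"),
)

recent     = readings.loc["2024-01-01 06:00:02":]   # index-based time slice
per_minute = readings.resample("1min").mean()       # time-native aggregation

print(orders)
print(recent)
print(per_minute)
```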

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (operational, or data historians), and newer open source time-series databases.

To gain maximum value from sensor data from operational machines, data must be handled relative to its chronology or time stamp. Because the time stamp may reflect either the time when the sensor made the measurement, or the time when the measurement was stored in the historian (depending upon the data source), it is important to distinguish between the two.

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

Relational Databases

Time-series data technologies – whether open-source databases or established historians – are built for real-time data. Relational databases, in contrast, are built to highlight relationships between different data points, including the metadata attached to a measurement (alarm limits, control limits, customer spend, bounce rate, geographic distribution, etc.). Relational technologies can be applied to time-series data, but this requires substantial data preparation and cleaning and can make data quality, governance, and context difficult to maintain at scale.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Data Lakes

Data lakes, meanwhile, score well on scalability and cost per GB, but poorly on data access and usability. Not surprisingly, while data lakes hold the largest volumes of data, they typically have the fewest users. As with time-series technologies, the market will decide when and how these different technologies get used.

Looking Ahead

The fourth industrial revolution, or Industrie 4.0, along with major market disruptions such as the pandemic and the sustainability and operational resilience initiatives it has driven, has greatly accelerated digital transformation and the pace of change in industrial operations and manufacturing.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


IT/OT Convergence and corresponding buzzwords are all over the place. If you are in manufacturing, chances are you have seen ads and read articles like this one on the necessity of keeping up with the ever-changing digital landscape. No matter what the source, important data needs to get to the right people at the right time, so where does one start?

In a plant, chances are you have many sources of IT and OT data that contribute to seamless operations. Whether it is an oil & gas, food, chemical, or mineral process, on ideal days your operation runs like a well-oiled machine. That said, how easily and consistently can the entire plant or enterprise operate efficiently, solve problems, and reduce bottlenecks?

Accurate and fast data access, intuitive data visualization and analytics are needed to make decisions that affect production on a 24/7 basis. Resources are also at risk when changes are made, even if they are for the better. Ripping and replacing data systems and equipment is expensive and involves a lot of risk.

No matter what the source, important data needs to get to the right people at the right time, so where does one start to make small, gradual IT/OT changes?

Integrating IT & OT data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Utilize Tools You Already Have While Adding New Tools That Drive Change

Firstly, you can leverage the data assets you already have. Risk is reduced considerably when you can add products that enhance the data sources you already use. For example, if you utilize a data historian that works well for your needs, you can add a data visualization and analytics solution on top, as well as connect other sources that were previously siloed. Not replacing the historian saves a headache and allows your important production to continue without interruption. Using what you already have means saving valuable financial resources that can be allocated to other needs, faster ROI, and reduced risk.

How Energy Transfer Empowered Their Operations

An example of a company that utilized current tools and enhanced their operations with new ones is Energy Transfer, a midstream energy company headquartered in Houston, Texas. Energy Transfer had an enormous amount of data across their 477 sites and 90k miles of pipeline. The company has acquired assets over the years, resulting in multiple vendor data systems across the enterprise, and did not have a good way to share that data. Engineers and management both needed data from all sites but couldn’t make operational decisions before the window of opportunity had passed. They also knew future acquisitions with different data systems might come, so flexibility for the future was critical. Energy Transfer purchased a reliable data visualization and analytics software solution and was then able to combine data from all sites with just one program.

Not only was the data easily available, but the tools enabled the data to be customized to the way that everyone, from operator to engineer to management, wanted to see it. Sites could customize and access the data they specifically needed, while corporate headquarters could access the data they wanted and tailor it to their own needs. By joining their OT and IT worlds in a future-proof manner, Energy Transfer not only improved efficiency but saved time and money while doing it, to the tune of $10M in annualized savings.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Start Small and Involve Your Team in the Integration of IT/OT Data

Another step you can take is to get your people on the same page about digitizing as many processes and as much data as you can at your plant. This means taking small steps, such as sharing LIMS data and process data digitally rather than manually, and eliminating time-consuming methods such as handwritten reports and manually entered Excel spreadsheets. Manual processes create delays and perpetuate data silos, since the data is only available to those who receive it through arduous, dated methods. Reliable data visualization and analytics software can help you share this data seamlessly. The small things add up to the big things in IT/OT convergence, and getting the team into a digital mindset is an important step.

Starting with a team to talk about IT/OT convergence is a wise first step. Formulating a plan with goals, milestones and deadlines helps make the process more bite sized and manageable. Defining roles, assigning tasks and meeting about progress are effective ways of incorporating your IT/OT data into your manufacturing process and reporting.

W.R. Grace Connected their Data Sources Enterprise Wide

An example of a company that started small and soon widely adopted IT/OT convergence was W.R. Grace, a $2 billion specialty chemical company that manufactures a wide array of chemicals and is the world’s leading supplier of FCC catalyst and additives. Grace has manufacturing locations on three continents and sells to over 60 countries and is headquartered in Columbia, Maryland. Grace’s IT and OT worlds were very disconnected. Grace had robust OT and IT data but was limited in the way they could use this data. Most Grace staff were required to use the operator control system functions to interact with process data because the existing historian tools were ineffective, slow, and not intuitive, resulting in no buy-in, especially from operations staff.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Grace implemented a data visualization solution across several of its chemical facilities, including its largest facility, Curtis Bay, with over 600 employees. Grace Curtis Bay unified key sources of data with a robust data visualization and analytics product. Grace deployed fast, effective, and highly intuitive dashboards, unifying operations and information technologies and thereby increasing process management effectiveness, decreasing downtime, and increasing profitability. Because the dashboards were easy to use, they were adopted by many staff who had been hesitant to engage with the previous system. Grace saw an ROI from the implementation of the data visualization software tools in less than six months. Using the data visualization and analytics tools also allowed Grace to make predictive and proactive changes in their operations, resulting in increased first-pass quality.

Achieving IT/OT convergence in manufacturing is relatively simple when you employ the right resources and follow a plan that allows for gradual changes that add up to significant results, even if you currently lack high-performing data visualization and analytics across your plant or enterprise. Small changes, such as connecting existing systems, adopting effective software tools, and building a mindset of proactive digital awareness in your team, can significantly impact the overall success of your operations. Newfound ease in daily tasks and reporting is a result of effective IT/OT convergence that everyone will experience, from the operator to the engineer all the way to corporate management. Small changes become major impacts when completed strategically as part of an effective plan for positive digital progress.

digital transformation guide

Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF


Now that PI ProcessBook is headed toward extinction, many engineers & operators in the process industries are faced with the challenge of replacing the tools they’ve depended on for years. But how to go about replacing critical trends and displays? In this article we’ll give you some tips for evaluating ProcessBook alternatives, and present some of the best ProcessBook alternatives available today.

Looking to replace ProcessBook? Learn how you can export your existing displays without issue.

Learn more

Why you Need to Replace ProcessBook

ProcessBook End of Life

In late 2020, OSIsoft announced the retirement of ProcessBook, PI’s venerable data visualization toolkit that debuted all the way back in 1994.

Because of its tight integration with PI’s industry-leading historian, ProcessBook was widely adopted by process engineers to build trends, dashboards, and process graphics from their PI Server time-series data.

Over the years, as competitors have popped up with newer, more powerful analytics tools, many engineers have stuck with ProcessBook, often simply because of their familiarity with the toolkit, or because of the difficulty of migrating critical graphics and displays to a new platform.

OSIsoft’s announcement will likely force their hands.

While current users will continue to be able to use ProcessBook indefinitely, by discontinuing support, OSIsoft is clearly encouraging customers to make plans to transition to alternative platforms. Security updates for ProcessBook are scheduled to end in 2022, and support for the platform will end entirely in December of 2024.

So, if you’ve been on the fence about replacing ProcessBook at your facility, now’s the time to begin looking into some alternatives.

How to Evaluate ProcessBook Alternatives

The industrial analytics marketplace has become quite crowded. There are dozens, maybe hundreds of companies out there making analytics products for everything from broad industrial applications to niche manufacturing processes. So, where do you start in your search for an alternative to ProcessBook? What factors should you consider to determine which of these solutions is the best fit for you? At a high level, you should be thinking about the following:

1. Ease of Integration

Chances are, if you’re using ProcessBook, you’re also using PI Server as a data historian. If you’re looking to replace ProcessBook, the most critical question you’ll have to answer is: how well does the alternative integrate with PI Server? The answer will mean the difference between a quick, 10-minute connection to PI Server and a costly, time-intensive investment in custom development and new OT infrastructure.

Migration is also a key consideration. Your organization has invested much time and money into building the ProcessBook displays that you depend on to run your operations efficiently. Some ProcessBook alternatives provide tools to simply migrate your existing ProcessBook displays to your new platform. Others… don’t.

2. Diagnostic Analytics Capabilities

Diagnostic, or “exploratory” analytics tools are used for root cause investigation and troubleshooting of downtime events or product quality issues.

Trends and trending capabilities are at the core of diagnostic analytics. Effective root cause analysis depends on rapid ad-hoc analysis and the ability to quickly overlay historical data from various process areas to determine correlation and causation.

Trending is likely to be one of the top two capabilities ProcessBook users are looking to address with a replacement system.

In addition to trends, diagnostic analytics are often supported by other visualization tools, such as histograms, X/Y charts, and Pareto charts. When presented with difficult process questions, the more ways you can slice and dice your data the easier it will be to arrive at the correct answer.

3. Operations Management Capabilities

“Operations Management” is broadly defined here as the capabilities that allow for:

  • Production tracking
  • Process monitoring
  • OEE tracking
  • Quality management
  • Process Alarms & Notifications
  • Reporting
  • Manual Data Entry

That’s a lot of functionality, and most of it comes from dashboarding and process graphics-building tools that leverage process data for real-time monitoring. Basic analytics solutions typically only allow for monitoring at the site level, but more sophisticated offerings allow enterprise-wide tracking of production KPIs across multiple sites.

ProcessBook users have probably gotten the most mileage out of the platform’s dynamic, interactive graphics, and it’s not uncommon for displays built with ProcessBook to see a decade or more of continued use. When looking to replace ProcessBook’s operations management capabilities you have a few options. You could look for point solutions designed specifically for one capability, like SSRS for reporting. You could find a highly customizable product that has coding capabilities like ProcessBook did. Or you could find a broad solution that has the building blocks necessary to solve multiple business needs.

4. Advanced Analytics Capabilities

Advanced analytics is another loaded term that we’ll define here for the purpose of this post. Often used in relation to leading-edge manufacturing concepts, like machine learning and industrial AI, advanced analytics in ProcessBook replacement tools will typically take two forms: predictive analytics and prescriptive analytics.

Predictive analytics tools promise to prevent downtime and improve OEE by building models from recorded data to anticipate and alert users to potential productivity loss. Prescriptive analytics take the next logical step and tell you which actions need to be taken to address predicted production issues.

Together, and in conjunction with process automation tools, predictive and prescriptive analytics form a sort of elementary Artificial Intelligence used to maximize plant performance.

5. Cost/Pricing

Although it’d certainly be nice if it weren’t the case, cost will likely be a factor as you evaluate ProcessBook alternatives. Pricing for these solutions is usually determined by features and the scope of implementation, and most providers don’t publicly list their pricing, so providing even ballpark figures is difficult. However, there’s one key factor you should be aware of when evaluating pricing.

The pricing model

Pricing models vary among process manufacturing analytics providers and include everything from flat-rate pricing and usage-based pricing to tiered and per-user pricing. These days, many manufacturing analytics solutions use “per-user” pricing, with the licensing cost going up according to the number of individuals using the tools at a facility. The upside of per-user pricing is that for small facilities, or for organizations with few people monitoring and analyzing process data, it can make for a relatively cost-effective solution. The flipside, obviously, is that for data-driven companies who believe in giving every operator, engineer, and SME the ability to contribute to improving plant performance, per-user pricing can get very expensive very fast.

Seamlessly integrate with PI Server for real-time process monitoring and rapid root cause analysis.

Learn more

The Best ProcessBook Alternatives

dataPARC’s PARCview

A real-time data analysis and visualization toolkit developed by the end user, for the end user, dataPARC’s PARCview application has long co-existed with OSIsoft’s PI Server in process facilities around the world. In fact, over 50% of all dataPARC installations are built on top of PI historians.

If you value ProcessBook primarily for its trending and interactive process graphics, dataPARC is a superb alternative, featuring diagnostic analytics and operations management capabilities that are significant upgrades from what you’ve been accustomed to with ProcessBook.

Ease of Integration

dataPARC integration with PI Server is extremely simple, with native integration allowing users to connect and begin visualizing PI historian data in a matter of minutes. By utilizing the latest PI SDK technology and other performance focused features, most ProcessBook users will find improved performance.

Likewise, dataPARC’s ProcessBook conversion utility allows users to bulk import their existing ProcessBook displays without losing any functionality.

Diagnostic Analytics Capabilities

Widely considered the best time-series trending application available for analyzing process data, dataPARC’s greatest strength is in its ability to connect and analyze data from various sources within a facility.

In addition to its powerful real-time trending toolkit, dataPARC is loaded with features that support root-cause analysis and freeform process data exploration, including:

  • Histograms
  • X-Y plots
  • Pareto charts
  • 5-why analysis tool
  • Excel plugin

dataPARC is also likely the fastest ProcessBook alternative you’ll find on this list. dataPARC’s Performance Data Engine (PDE) provides access to both aggregated and lossless datasets, allowing the best of both worlds: very fast access to long-term datasets and extremely high-resolution short-term data.

Operations Management Capabilities

dataPARC offers a complete set of tools for operations management. dataPARC’s display design tool offers the ability to create custom KPI dashboards and real-time process displays using pre-built pumps, tanks, and other industry-standard objects. dataPARC even allows you to import existing graphics or entire ProcessBook displays.

All the standard reporting features are included here, along with smart notifications that can be configured to trigger email or text alerts for downtime events or other process excursions. dataPARC’s Centerline tool is one of the platform’s most powerful features, providing operators with an intuitive multivariate control chart with early fault detection and process deviation warnings, so operators can eliminate quality or equipment issues before they occur.

Additional operations management capabilities offered by dataPARC include a robust module for manual or electronic data entry, notifications, an advanced calculation engine, and a task scheduling engine.

Advanced Analytics Capabilities

dataPARC doesn’t make claims to artificial intelligence or machine learning, but the platform provides a solid interface for advanced analytics, offering a data modeling module that uses PLS & PCA to power predictive analytics for maintenance, operations & quality applications.

Cost/Pricing

dataPARC provides an unlimited user license for PARCview, which makes it a good fit for organizations wishing to get production data in front of decision-makers at every level of the plant. Compare dataPARC vs PI.

Evaluate the top alternatives to ProcessBook & PI Vision in our PI Server Data Visualization Tools Buyer’s Guide.

Get the Guide

PI Vision

OSIsoft’s ProcessBook successor, PI Vision is branded as being the “fastest, easiest way to visualize PI Server data.” PI Vision is a web-based application that runs in the browser, which can be a significant change for ProcessBook users used to a locally-installed desktop app.

Ease of Integration

PI Vision likely offers the most straightforward integration with PI Server, as it’s part of OSIsoft’s PI System.

Migration of existing ProcessBook screens to PI Vision is supported by OSIsoft’s PI ProcessBook to PI Vision migration utility; however, many users have reported difficulty retaining the full functionality of custom displays and graphics after moving them into PI Vision.

Diagnostic Analytics Capabilities

Many users report that PI Vision’s trending tools provide less firepower than ProcessBook and competitors for root cause analysis and ad-hoc diagnostics, but it is perfectly capable of performing the basic trending functions of plotting time-series and other data against time on a graph.

OSIsoft also offers their PI DataLink Excel plugin, which is often used for more advanced diagnostic analytics efforts.

Operations Management Capabilities

Process Displays are the heart and soul of the PI System and if you’re moving from ProcessBook you’ll likely feel at home working in PI Vision.

Although it’s not a 100% feature-for-feature replacement of ProcessBook (the SQC module, for instance, isn’t available in PI Vision), some of the ProcessBook features lacking in early versions of PI Vision are being added via periodic updates. In addition, there are some new capabilities in PI Vision that don’t exist in ProcessBook.

The PI Vision displays use HTML5 and are integrated into PI’s Asset Framework (AF), which results in pretty intuitive display building. Basic reporting is available as well, but like much of the PI system, the data must be extracted to Excel via PI DataLink.

Advanced Analytics Capabilities

PI Vision doesn’t include any built-in data modeling tools or other advanced analytics components. PI Server data can be brought into 3rd party analytics apps via PI Integrator for advanced analysis.

Cost/Pricing

PI Vision uses a per-user pricing model, which is great for small organizations with only a few people accessing the platform. For larger manufacturers, or enterprise implementations with teams of operators, process engineers, and data scientists accessing the product, PI Vision can become quite expensive.

Proficy CSense

GE acquired CSense back in 2011 to provide better data visualization and analytics tools for use with their own Proficy Historian. CSense is billed as industrial analytics software that improves asset and process performance. Trending and diagnostic analytics is the focus here, with less emphasis on robust process displays.

Ease of Integration

CSense is optimized for integration with Proficy, GE’s own time-series data historian. An OSIsoft PI OLEDB provider is required to integrate with OSIsoft’s PI Server, and this interface may affect performance for users accustomed to native PI integration.

Diagnostic Analytics Capabilities

CSense’s trending and diagnostic capabilities likely exceed what you’ve experienced with ProcessBook.

Dedicated modules for troubleshooting (CSense Troubleshooter) and process optimization (CSense Architect) provide modern trends, charts, and other visualization tools to analyze continuous, discrete, or batch process performance.

Operations Management Capabilities

GE takes a modular approach to their data visualization products. Much of the operations management functionality provided in a single product like ProcessBook is spread over many separate products within the Proficy suite.

There’s a fair amount of overlap in these products, but somewhere among GE’s iFIX, CIMPLICITY, Proficy Operations Hub, Proficy Plant Applications, and Proficy Workflow products you’ll find operations management capabilities that greatly exceed those of ProcessBook.

Advanced Analytics Capabilities

CSense marketing materials are filled with mentions of industrial advanced analytics, digital twins, machine learning, and predictive analytics.

Like PARCview’s approach, this advanced functionality is enabled by the insights mined from the powerful data visualization tools of the core platform, though models can be developed in CSense to help predict product quality and asset failure.

Cost/Pricing

CSense licensing is offered in three editions – Runtime, Developer, and Troubleshooter – all of which come with different component combinations and data connectors.

Canary Axiom

Like CSense and PI Vision, Canary’s Axiom was designed as the visualization component of a larger system. Axiom was built to support analysis of data stored in Canary Historian.

Ease of Integration

Canary’s Data Collectors can connect to process data sources via OPC DA and OPC UA, but they don’t have a dedicated module to connect to PI Server like some of the other options on this list.

Diagnostic Analytics Capabilities

Axiom is a browser-based trending application that, while not as powerful as some other ProcessBook replacements, is easy to use and capably performs basic diagnostic analysis. It lacks the more powerful features of products like dataPARC and CSense, but Canary does provide an Excel add-in for performing more advanced analysis.

If you were perfectly fine with the trending and diagnostic capabilities of ProcessBook, you’ll likely be satisfied with what Axiom provides.

Operations Management Capabilities

Axiom offers dashboarding and reporting capabilities alongside their trending tools, but ProcessBook users may be disappointed by the lack of emphasis on display building provided here. Simply put, if you’re looking to replace your dynamic, interactive ProcessBook displays, you’ll want to look elsewhere.

Canary also lacks support for migrating existing ProcessBook displays, which is a feature that both dataPARC and PI Vision have.

Advanced Analytics Capabilities

Canary avoids making flimsy claims to current buzzwords like Industrial AI or Machine Learning. Axiom’s focus is on nuts-and-bolts trending & analysis of time-series data, though their Excel Add-in does certainly open the door to more advanced analytics applications.

Cost/Pricing

Canary helpfully posts their pricing on their website, with Axiom fetching a per-client fee in the form of both a one-time and monthly charge. This pricing assumes you’ll be using the Canary Historian, so the info on the website won’t be much help if you’re looking to connect Axiom to PI Server data only.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

TrendMiner

Offering “self-service industrial analytics”, TrendMiner provides a complete suite of web-based time-series data analysis tools.

Ease of Integration

Like dataPARC, TrendMiner offers native integration with PI Server. You can expect connecting to your PI historian to be very easy, although TrendMiner doesn’t appear to support the transfer of existing ProcessBook displays.

Diagnostic Analytics Capabilities

With a name like TrendMiner, you’d assume that trending and diagnostic analysis is key for the Software AG brand. And you’d be correct – tag browsing, trend overlays, and data filtering are all favorite features of TrendMiner customers.

Root Cause Analysis is a core use case for TrendMiner, but some users have mentioned difficulty learning to use the system to identify process issues. Reports of slow performance also make this a riskier choice when considering ProcessBook alternatives.

Operations Management Capabilities

Monitoring is at the core of TrendMiner’s operations management capabilities. The platform provides a number of tools that support smart alarming & notifications of process excursions, but visualization of processes seems limited to trends. ProcessBook users who depend on interactive process displays to manage and monitor plant performance will need to look to other options on this list to replace that functionality.

Advanced Analytics Capabilities

TrendMiner’s monitoring features extend the platform into “predictive” analytics territory. Like dataPARC and some of the other ProcessBook alternatives on this list, predictive performance is enabled via models based on historical data.

Cost/Pricing

TrendMiner pricing depends on your particular use case, and they don’t list pricing on their website; however, you can expect them to be on the higher end of the ProcessBook alternatives listed here.

Seeq

Founded in 2012, Seeq offers Advanced Analytics for Process Manufacturing Data. Analysis is the focus with Seeq, and their products are heavy on diagnostic and predictive analytics, without as much support for operations management as some of the other ProcessBook alternatives on this list.

Ease of Integration

Like dataPARC, Seeq is optimized to connect to various data sources within a plant, including OSIsoft’s PI Server. You should expect a pretty seamless integration with your PI historian data.

Seeq does not support importing existing ProcessBook displays.

Diagnostic Analytics Capabilities

Seeq’s browser-based Workbench application powers the platform’s diagnostic analytics, offering a powerful set of trending and visualization tools.

Advanced trending, bar charts, tables, scatterplots, and treemaps can all be employed to perform rapid root-cause analysis, and represent a significant upgrade in diagnostic capabilities from ProcessBook.

Operations Management Capabilities

Seeq offers the ability to configure alarms for process monitoring, and their Seeq Organizer application allows users to build scorecards and dashboards for KPI monitoring, but they don’t provide anything approaching the display-building capabilities that ProcessBook provides.

Engineers looking for a ProcessBook replacement, and an option for migrating or even replicating their existing displays, will likely want to look at other alternatives.

Advanced Analytics Capabilities

Advanced Analytics is an area where Seeq shines. Claiming predictive analytics, machine learning, pattern recognition, and scalable calculation capabilities, it’s clear that Seeq intends to be your solution for sophisticated process data analysis. Most of Seeq’s advanced analytics capabilities come from their Seeq Data Lab application, which provides access to Python libraries for custom data processing.

Cost/Pricing

Seeq’s pricing isn’t listed on their website, but they reportedly use a per-user pricing model which starts at $1,000/year per user.

Looking to replace ProcessBook? See why PARCview is regarded as the #1 ProcessBook alternative.

Ignition

Inductive Automation’s Ignition application is a popular SCADA platform with a broad set of tools for building industrial analytics solutions.

Ease of Integration

Ignition offers the ability to connect to virtually any data in your plant. While it lacks the easy native integration capabilities of some of the other ProcessBook alternatives on this list, Ignition supports various methods of connecting to your PI historian, including JDBC and OPC.

Diagnostic Analytics Capabilities

While Ignition shines as an HMI designer or MES, it ranks quite a bit lower than others on this list in its diagnostic analytics capabilities. But it wasn’t designed for root cause investigations and ad-hoc data analysis. Trending in Ignition is basic and integrated into displays, similar to what users may be familiar with in ProcessBook.

Operations Management Capabilities

Ignition’s Designer, on the other hand, is an extremely capable application for building process graphics and HMIs for real-time process monitoring and KPI tracking.

While perhaps lacking the interactivity that ProcessBook users are familiar with, Ignition modules are available to help manage SPC, OEE, material tracking, batch processing, and more.

Advanced Analytics Capabilities

Again, predictive analytics, prescriptive analytics, machine learning, industrial AI… these aren’t Ignition’s focus and don’t factor into its application feature sets.

Cost/Pricing

Inductive Automation offers unlimited users for a single-server license of Ignition, with pricing based per feature, starting at around $12,500 and going up from there. Ignition’s price very much reflects its standing as one of the industry’s most popular SCADA platforms.

Looking for a ProcessBook Alternative?

Well, you have a lot to consider. Presented with the prospect of discontinued support for ProcessBook, your challenge is twofold: first, figure out how to replace the displays, trends, and other features that made ProcessBook valuable to you, and second, evaluate potential replacement candidates for capabilities like predictive and prescriptive analytics that didn’t exist when you got started with ProcessBook.

All the ProcessBook replacement options listed above feature different toolsets, and it’s up to you to identify your needs and determine which solution is the right one for your organization. Hopefully this post has set you off in the right direction in your search.

pi processbook alternatives guide

Download the Guide

Discover top alternatives to PI’s ProcessBook and PI Vision analytics toolkits.

Download PDF


One of the easiest ways to explain why you should implement 5 whys root cause analysis at your plant is simple: the cause of a problem is often (we’d go so far as to say almost always) different than initially speculated. Implementing a lean strategy like the 5 whys can save you time and headaches in the future.

Issues with and failures of assets are bound to happen in process manufacturing. Your team’s strategy for resolving problems that occur will determine your productivity in countless ways. Getting a process set up for resolving problems will not only help with the current issues and failures but will create a plan to resolve future problems with the goal of each resolution coming faster and easier.

Real-time process analytics software with integrated 5 Whys analysis tools.

Check out PARCview

What Exactly are the 5 Whys?

The 5 whys is a lean problem-solving strategy that is popular in many industries. Rooted in lean methodology, the 5 whys technique was developed by Sakichi Toyoda, a Japanese inventor and industrialist. It centers on root cause analysis (RCA), defined as a systematic process for identifying the origins of problems and determining an approach for responding to and solving them. The 5 whys emphasizes prevention and being proactive rather than reactive.

The 5 whys strives to be analytical and strategic about getting to the bottom of asset failures and issues; it is a holistic approach that involves stepping back and looking at both the process and the big picture. The essence of the 5 whys is captured in the verse below, where something as small as a lost horseshoe nail is the root cause of a kingdom being lost:

“For want of the nail the shoe was lost
For want of the shoe the horse was lost
For want of a horse the warrior was lost
For want of a warrior the battle was lost
For want of a battle the kingdom was lost
All for the want of a nail.”

One of the key factors for successful implementation of the 5 whys technique is to make an informed decision. This means that the decision-making process should be based on an insightful understanding of what is happening on the plant floor. Hunches and guesses are not adequate as they are the equivalent of a band-aid solution.

The 5 whys can help you identify the root cause of process issues at your plant.

Below is an example of the 5 whys method being used for a problem seemingly as basic as a computer failure. If you look closely, you will conclude that the actual problem has nothing to do with the computer at all.

  • Why didn’t your computer perform the task? – Because the memory was not sufficient.
  • Why wasn’t your memory sufficient? – Because I did not ask for enough memory.
  • Why did you underestimate the amount of memory? – Because I did not know my programs would take so much space.
  • Why didn’t you know programs would take so much space? – Because I did not do my research on programs and memory required for my annual projects.
  • Why did you not do research on memory required? – Because I am short staffed and had to let some tasks slip to get other priorities accomplished.

As seen in the example above, the real problem was not in fact computer memory, but a shortage in human resources. Without performing this exercise, the person may never have realized they were short staffed and needed help.

This example can be used to illustrate problems in a plant as well. Maybe an asset is having repeated failures or lab data is not testing accurately. Rather than immediately concluding that the problem is entirely mechanical, use the 5 whys method and you may discover that your problem is not what you think.

Advantages and Disadvantages of the 5 Whys Method

The advantages are easy to identify: the method uncovers the root cause of your problem and not just the symptoms, it is simple to use and implement, and, perhaps most attractive, it helps you avoid taking immediate action before identifying the real root cause. Taking immediate action on a path that is not accurate is a waste of precious time and resources.

The disadvantages are that people may disagree about the answers that come up for the cause of the problem, and the method is only as good as the collective knowledge of the people using it. If time and diligence are not applied, you may not uncover and address the true root cause of the problem.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

5 Whys Root Cause Analysis Implementation at your Plant

Now that the problem in its essence has been revealed, what are the next steps?

Get Familiar with the 5 Whys Concept

The first step in implementing the 5 whys at your plant is to get familiar with the concept. You may research the 5 whys methodology online or listen to tutorials to gain a deeper understanding. In the information age, we have access to plenty of free information at the touch of our screens. Get acquainted with 5 whys. Even if it is just one video on YouTube and a few articles online, a better understanding means a better implementation.

Schedule a 5 Whys Meeting with Your Team

The second step would be to solve a problem at your plant using the 5 whys method. To do so, follow the steps below to schedule and hold a 5 whys RCA meeting.

These steps are kept simple to give you a basic understanding. To gain a more detailed understanding of how to implement 5 whys RCA, read on for in-depth instructions.

  • Organize your meeting
  • Define your problem statement
  • Ask the first “why”
  • Ask why four more times
  • Determine countermeasures
  • Assign responsibilities
  • Monitor progress
  • Schedule a follow-up meeting

Whoever you invite, the root cause analysis process should include people with practical experience. Logically, they can give you the most valuable information regarding any problem that appears in their area of expertise.

What’s Next? What Happens Once I have Held my 5 Whys meeting?

Once your meeting has been held and you begin to implement the 5 whys method, it is essential to remember that some failures can cascade into other failures, creating a greater need for root cause analysis to fully understand the sequence of cause and failure events.

Root Cause Analysis using the 5 whys method typically has 3 goals:

  • Uncover the root cause
  • Fully understand how to address and learn from the problem
  • Apply the solution to this and future issues, creating a solid methodology to ensure the same success in the future

The Six Phases of 5 Whys Root Cause Analysis

Digging even deeper, when 5 whys root cause analysis is performed, there are six phases in one cycle. The components of asset failure may include environment, people, equipment, materials, and procedure. Before you carry out 5 whys RCA, you should decide which problems are immediate candidates for this analysis. Just a few examples of where root cause analysis is used include major accidents, everyday incidents, human errors, and manufacturing mistakes. Those that result in the highest costs to resolve, most downtime, or threats to safety will rise to the top of the list.

There are some software-based 5 whys analysis tools out there, like dataPARC’s PARCview, which automatically identifies the top 5 potential culprits of a process issue and links to trend data for deeper root cause analysis.

Phase 1: Make an Exhaustive List of Every Possible Cause

The first thing to do in 5 whys is to list every potential cause leading up to a problem or event. At the same time, brainstorm everything that could possibly be related to the problem. In doing these steps you can create a history of what might have gone wrong and when.

You must remain neutral and focus only on the facts of the situation. Emotions and defensiveness must be minimized to produce an effective starting list. Stay neutral and open. Talk with people and look at records, logs, and other fact-keeping resources. Try to replay and reconstruct what you think happened when the problem occurred.

Phase 2: Evidence, Fact and Data Seeking and Gathering

Phase 2 is when you get your hands on any data or files that can point to the possible causes of your problem. Sources for this data may be databases, digital files, handwritten notes, or printed records. In this phase, the 5 whys list comes into play: each reason or outcome on your list needs supporting evidence.

Phase 3: Identify What Contributed to the Problem

In Phase 3, all contributions to the problem are identified. List changes and events in the asset’s history. Evidence around the changes can be very helpful, so gather this as you are able. Evidence can be broken down into four categories: paper, people, recording, and physical evidence. Examples include paperwork specific to an activity, broken parts of the asset, and video footage if you have it.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Phase 4: Collect Data and Analyze It

In Phase 4, you analyze the collected data. Organize changes or events by how much impact you can have on the outcome. Then decide if each event is unrelated, a correlating factor, a contributing factor, or a root cause. An unrelated event is one that has no impact or effect on the problem whatsoever. A correlating factor is one that is statistically related to the problem but may or may not have a direct impact on it.

A contributing factor is an event or condition that directly led to the problem, in full or in part. This should help you arrive at one or more root causes. When a root cause has been identified, more questions can be asked: Why are you certain that this is the root cause instead of something else?
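To make the “unrelated vs. correlating” distinction concrete, here is a minimal Python sketch (not a dataPARC feature; the tag names and data are invented) that ranks candidate tags by how strongly they correlate with a problem indicator pulled from a historian:

```python
# Minimal sketch: rank candidate tags by correlation with a problem indicator.
# All tag names and data below are fabricated for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "steam_pressure": rng.normal(8.5, 0.3, n),
    "line_speed": rng.normal(1200, 25, n),
    "ambient_temp": rng.normal(22, 4, n),
})
# Fabricated problem indicator that mostly tracks steam pressure.
df["reject_rate"] = 0.8 * df["steam_pressure"] + rng.normal(0, 0.2, n)

problem = df["reject_rate"]
candidates = df.drop(columns=["reject_rate"])

# Absolute Pearson correlation of each candidate tag with the problem indicator.
corr = candidates.corrwith(problem).abs().sort_values(ascending=False)
print(corr)
```

A strong correlation only nominates a factor for a closer look on a trend; confirming a contributing factor or root cause still takes process knowledge and evidence from the earlier phases.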

Phase 5: Preventing Future Breakdowns with Effective Countermeasures

The fifth phase of 5 whys root cause analysis is preventing future breakdowns by creating a custom plan of countermeasures, which essentially address each of the whys you identified in your team meeting. Preventive actions should also be identified. Your actions should not only prevent the problem from happening again, they should also avoid causing other problems. Ideally, a solid solution is one that is repeatable and can be used on other problems.

One of the most important things to determine is how the root cause of the problem can be eliminated. Root causes will of course vary just as much as people and assets do. Examples of eliminating a root cause include changes to the preventive maintenance schedule, improved operator training, new signage or HMI controls, or a change of parts or part suppliers.

In addition, be sure to identify any costs associated with the plan: how much was lost because of the problem, and how much it is going to cost to implement the fix.

To avoid and predict the potential for future problems, you should ask the team a few questions.

  • What are the steps we must take to prevent the problem from reoccurring?
  • Who will implement the solution, and how will it be implemented?
  • Are any risks involved?

Phase 6: Implementation of Your Plan

If you make it to this step, you have successfully completed 5 whys root cause analysis and have a solid plan.

Depending on the type, severity, and complexity of the problem and the plan to prevent it from happening again, there are several factors the team needs to think about before implementation occurs. These can include the people in charge of the assets, asset condition and status, processes related to the maintenance of the assets, and any people or processes outside of asset maintenance that have an impact on the identified problem. You would be surprised how much is involved with just one asset when you exhaustively think about it and make a list of all people and actions involved during its useful life.

Implementing your plan should be well organized, orchestrated, and documented. Follow-up meetings with your team should be scheduled to talk about what went well and what could be improved upon. With time, the 5 whys can become an effective tool for both solving and preventing problems at your plant.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

In the process industries, optimization is the key to efficiency. And efficiency is what leads to profit – allowing manufacturers to produce more and waste less. To optimize their processes, many manufacturers use a combination of a time-series data historian and data visualization software. dataPARC and PI are two of the leaders in this space, and in this article we’ll compare dataPARC vs PI and highlight some of the advantages dataPARC has over PI as a process information management system.

Check out dataPARC’s real-time process data analytics tools & see how better data can lead to better decisions.

Check out PARCview

dataPARC vs PI: Similarities

dataPARC and PI have existed for decades and have large installation bases represented by major manufacturers around the world.

Both dataPARC and PI:

  • Offer a real-time data historian
  • Use a binary, cluster-index, flat file to store history
  • Have an asset structure to address the complexities of large disparate data sources
  • Offer many of the expected analytics & visualization tools: trending, graphics, reports
  • Can connect to various control systems for collecting time-series data in real-time
  • Use a store & forward function in case data connectivity is lost
  • Can work with very large tag-count systems

Now, let’s dive more into their differences and see how dataPARC sets itself apart.

dataPARC vs PI: Differences

Cost

We might as well start with what will be one of the key considerations when evaluating these two data historian and process data visualization toolkits.

Long story short, dataPARC’s total cost of ownership is lower when compared to other “like” industry solutions. Both the initial cost and ongoing costs are considerably lower than the PI System.

Unlimited Users

A key reason for this is dataPARC’s unlimited license model, which makes it a great fit for organizations wishing to get production data in front of decision-makers at every level of the plant without worrying about having to purchase additional licenses.

PI uses a per-user pricing model. This tends to work for small organizations with only a few people needing to access the platform, but for larger organizations or enterprise implementations the cost adds up quickly.

With dataPARC, everyone who needs access to the data can have access at no additional cost – putting the power to make data-driven decisions in the hands of every employee.

Looking for an alternative to PI’s Data Historian? Get an enterprise plant data historian at a fraction of the cost. Check out dataPARC’s PARCserver historian.

User Experience

When customers are asked about dataPARC’s top 3 to 5 benefits, ease-of-use is always near the top of the list. The reduced complexity of the dataPARC system allows even the least “computer-savvy” person to begin building content and gaining value, and results in wide adoption of the tools within an organization. 

Though there are many features in dataPARC, a new user can learn how to search tags, trend, and navigate within minutes. From there, users quickly learn they can view trend statistics, manage alarm events, export data, create displays such as X/Y plots, histograms, or Pareto charts, and much more, all from the right-click menu.

dataPARC’s trending tools have long been recognized by customers as the number 1 trend solution in the industry. dataPARC’s trend capabilities are faster than competing tools and better suited to practical, day-to-day use.

No other package allows for a quicker build of a trend matrix, with quick drag & drop from both the tag browser and displays.

dataPARC makes finding and trending tag data super easy.

Many organizations that were set up with the PI Historian and ProcessBook have since chosen to get dataPARC to “sit on top” of their PI historian simply for PARCview; the visualization tools and ease of use speak for themselves.

Diagnostic Analytics

As mentioned earlier, dataPARC’s trending application is considered the best in the industry, not only for its ease of use and quick access to analysis tools, but for its speed as well.

Trend

dataPARC uses a deliberate data speed strategy with multiple components including an embedded Performance Data Engine (PARCpde) to speed data to the user.  The goal is to meet and exceed the user’s “speed of thought.”  PARCpde is a foundational part of the entire dataPARC system. 

Speed tests comparing dataPARC vs PI and other contemporary historians have shown dataPARC to be anywhere from 10X to 50X faster in delivering large or long-term datasets back to the user. 

Several companies have switched to dataPARC in part because of the data speed.  dataPARC also utilizes an aggregate archive and rollup archive in its architecture which greatly reduces the amount of time wasted when solving problems or investigating opportunities. 

From the trend, users can launch a quick statistics grid or generate a new X/Y chart or histogram display. Each chart will pull in the tags from the trend, so users don’t have to search for them in the Tag Browser again.

The X/Y plot sets two tags up for comparison, and a best-fit line can be generated – linear, polynomial, etc. The formula generated from the fit can be pulled into a trend or other display. PI can also generate X/Y plots, but they are created from scratch and no best-fit line is generated.
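As a rough illustration of the kind of best-fit line an X/Y plot produces, here is a short Python sketch using numpy; the tag values are fabricated and this is not dataPARC’s or PI’s implementation:

```python
# Rough illustration of a best-fit line for two tags; data is made up.
import numpy as np

x = np.array([3.1, 3.4, 3.9, 4.2, 4.8, 5.1])        # e.g., a process tag
y = np.array([61.0, 63.5, 66.2, 69.0, 73.8, 75.9])  # e.g., a quality tag

# deg=1 gives a linear fit; raise the degree for a polynomial fit.
coeffs = np.polyfit(x, y, deg=1)
fit = np.poly1d(coeffs)

print(f"y = {coeffs[0]:.3f} * x + {coeffs[1]:.3f}")
print("predicted y at x = 4.5:", fit(4.5))
```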

Excel Add-in

dataPARC’s Excel add-in was built with a high degree of ease-of-use and speed. 

PI and dataPARC both have in-cell functions that can pull data directly into Excel. The dataPARC add-in has multiple other functions.

There is a sheet that can pull multiple tags in the same time range without dealing with formulas. Users can import tag lists from already created dataPARC displays instead of searching for the tags again.

Beyond the value of legacy Excel add-in tools, dataPARC’s add-in is highlighted by the following:

  • Drag groups of tags/data into Excel from multiple data sources
  • Filter data based on multiple tags’ values
  • Cross Correlation/R2 matrix generation
  • CUSUM & MSR charting

Additionally, users can display time series-based data from Excel into PARCview trends and displays. This can be used to trend or compare data from outside the company right next to process data.
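For the cross-correlation/R² matrix mentioned in the list above, the underlying idea can be sketched in a few lines of Python with pandas; the tags and data here are invented, and this is only a conceptual illustration of what such a matrix contains, not the add-in itself:

```python
# Conceptual sketch of an R-squared matrix across several tags.
# Tag names and values are fabricated for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 300
speed = rng.normal(1000, 40, n)
df = pd.DataFrame({
    "machine_speed": speed,
    "steam_flow": 0.5 * speed + rng.normal(0, 15, n),
    "reel_moisture": rng.normal(7.5, 0.4, n),
})

# Square the Pearson correlation matrix to get an R-squared matrix.
r2 = df.corr() ** 2
print(r2.round(2))
```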

Evaluate the top alternatives to Processbook & PI Vision in our PI Server Data Visualization Tools Buyer’s Guide.

Get the Guide

Operations Management

Real-time operations management is necessary to keep a plant running at peak efficiency and to be able to respond quickly to process excursions that result in unplanned downtime or product loss.

This is facilitated by dataPARC in a variety of ways:

  • Graphical process displays
  • KPI and Lab data dashboards
  • Manual data entry (MDE) tools
  • Automated reporting
  • Process alarms & notifications
  • & more

When comparing dataPARC vs PI, both offer the creation of dynamic, information packed graphical dashboards, but only dataPARC has the Centerline display.

Centerline

Centerline is a powerful monitoring tool unique to dataPARC. It is a real-time display that reports run-based statistics for tags. The runs can be grade- or time-based, and the statistics include time average, standard deviation, Cpk, min, max, etc.

Centerline displays data for time periods or runs to ensure process conditions are the same run after run.

The purpose of a centerline display is to help determine the best operational settings for production, and to ensure those settings are normally being used during production.

Centerline is one of dataPARC’s powerful data analysis tools for which there is no PI equivalent.
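To illustrate the kind of run-based statistics a centerline display reports, here is a small Python sketch that computes averages, standard deviations, and a simple Cpk per grade run. The data, spec limits, and grouping are hypothetical, and this is not the Centerline implementation:

```python
# Illustration only: run-based statistics per grade, including a simple Cpk.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "grade": ["A"] * 200 + ["B"] * 200,
    "caliper": np.concatenate([rng.normal(10.0, 0.15, 200),
                               rng.normal(10.4, 0.25, 200)]),
})
LSL, USL = 9.6, 10.8  # hypothetical spec limits

for grade, x in df.groupby("grade")["caliper"]:
    mean, std = x.mean(), x.std()
    # Cpk = min distance from the mean to a spec limit, in units of 3 sigma.
    cpk = min((USL - mean) / (3 * std), (mean - LSL) / (3 * std))
    print(f"grade {grade}: avg={mean:.3f} std={std:.3f} "
          f"min={x.min():.3f} max={x.max():.3f} Cpk={cpk:.2f}")
```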

Alarms and Notifications

dataPARC’s alarm and notification system can send emails or text notifications, or trigger workflows, when an alarm is detected or closed. Once an alarm is detected, an alarm event is created. These events can be viewed and acknowledged in a trend, centerline, graphic, or alarm list. Users can acknowledge the event by assigning a reason from the reason tree and/or typing a comment on the event. Quick analysis can be done in dataPARC with the Pareto chart to determine the top reasons saved for an alarm, or by creating a tabular report sorted by reason with all comments visible.

Similarly, PI can create event frames and send notifications. Once event frames are detected and a reason assigned, users can see this data as a table in PI Vision, but further analysis or reporting has to take place in PI’s Excel add-in, DataLink. dataPARC’s Excel add-in also has features to pull in alarm event data.

More dataPARC Excel Add-in features are explored in the following section.

Manual Data Entry (MDE)

dataPARC’s MDE display is quick to configure and allows users to enter and save manual data to the database rather than on a piece of paper or in Excel.

Manually entered data is represented by tags, so it can be used in PARCview trends, dashboards, and displays like any other tag.

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

Calculations

When users don’t have the perfect tag to help manage a process, a calc tag or MDE is often used. dataPARC and PI are both able to perform simple calculations such as adding tags, If/Then statements, or unit conversions.

With PI Vision, PI no longer supports VB scripting. VB scripting opens the door for custom solutions, and dataPARC leverages it for applications such as database reads, file parsing, web service calls, and much more.
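As an illustration of the kind of logic a calc tag typically carries, the sketch below uses Python purely for readability; both products use their own expression or scripting syntax, and the tag names, threshold, and conversion here are hypothetical:

```python
# Python used only to illustrate typical calc tag logic (unit conversion,
# adding tags, If/Then). Tag names and numbers are hypothetical.
def calc_tag(furnace_temp_f: float, fuel_flow: float, purge_flow: float) -> float:
    temp_c = (furnace_temp_f - 32) * 5 / 9   # unit conversion
    total_flow = fuel_flow + purge_flow      # adding two tags
    if temp_c > 850:                         # simple If/Then logic
        return total_flow * 1.1
    return total_flow

print(calc_tag(1650.0, 12.3, 0.8))
```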

Predictive Analytics

dataPARC’s PARCmodel offers a degree of predictive analysis with PLS (Partial Least Squares) and PCA (Principal Component Analysis) modeling capabilities.

PLS

The PLS package has been described by one of the world’s top practical modeling engineers as “…bar-none, better than anything I’ve ever seen before.”  In the processing industry, one of the applications for PLS modeling is in building inferential property predictors (IPPs).

Control engineers at operating companies report that PLS model generation for one IPP can take more than 8 hours to re-model (longer for the initial model) using multiple tools and off-line activity. dataPARC integrates it all into one tool, and the re-model effort can be as little as 5 minutes.

This snappy model generation allows multiple solutions to be generated for comparison to find the best option. The speed of remodeling allows for wider application and benefit of PLS.  Practical engineering methods and even process “hunches” can now be backed with a quick validation by a PLS mathematical session in 2 to 5 minutes. 

dataPARC’s predictive modeling tools

dataPARC delivers huge time savings, a better learning environment, a better collaboration environment, and more useful applications – all of which accelerate value to the company’s key business drivers.
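For readers unfamiliar with PLS, the sketch below shows the general technique behind an inferential property predictor, using scikit-learn’s PLSRegression on fabricated data. It is a generic illustration of the method, not PARCmodel:

```python
# Generic PLS inferential property predictor sketch; data is fabricated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 12))                               # 12 fast process tags
y = X @ rng.normal(size=12) + rng.normal(0, 0.5, size=500)   # slow lab property

pls = PLSRegression(n_components=4)
pls.fit(X[:400], y[:400])                  # train on the first 400 samples

y_hat = pls.predict(X[400:]).ravel()       # infer the lab property from tags
print("first predictions:", y_hat[:3].round(2))
print(f"R2 on held-out data: {pls.score(X[400:], y[400:]):.3f}")
```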

PCA

PCA uses the same modeling advantages that dataPARC’s PLS offers, allowing for easy model generation. The difference between the two methods is that PLS seeks to model and mimic a single variable using adjacent variables as model inputs, while PCA doesn’t model a single variable but models the whole process.

The value comes when comparing the current process with the modeled process.  PCA gives the user the ability to know when the current process is off (when compared to the modeled process) and identifies the “offending” process variable(s). 

PCA makes use of two parameters (available to the PLS model as well): DMODX (error from model) and HT2N (Hotelling T2 Normalized – off norm). The PCA model’s input variables are all graded, and staff can see which variable(s) are causing the problem. PCA can be used as an early warning system to help operations see a problem before it happens.
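The two monitoring statistics described above can be sketched generically with scikit-learn’s PCA: a Hotelling T² computed from the model scores, and a DModX-style residual (squared prediction error). The data, component count, and example excursion below are fabricated, and this is not dataPARC’s implementation:

```python
# Generic PCA process-monitoring sketch: Hotelling T2 and a DModX-style
# residual (SPE). All data and tag dimensions are fabricated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X_normal = rng.normal(size=(1000, 8))        # "good" operation, 8 tags

scaler = StandardScaler().fit(X_normal)
pca = PCA(n_components=3).fit(scaler.transform(X_normal))

def monitor(x_row):
    z = scaler.transform(x_row.reshape(1, -1))
    scores = pca.transform(z)
    # Hotelling T2: scaled distance of the sample within the model plane.
    t2 = float(np.sum(scores**2 / pca.explained_variance_))
    # DModX-style residual: squared error between sample and its reconstruction.
    spe = float(np.sum((z - pca.inverse_transform(scores))**2))
    return t2, spe

bad_sample = np.zeros(8)
bad_sample[2] = 6.0            # one tag drifts far from normal operation
print(monitor(bad_sample))     # a large T2 and/or SPE flags the excursion
```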

PARCmodel is separately licensed but incorporated into PARCview and easily accessed in the trend right click menu. PI does not have similar analytics tools.

Looking to replace ProcessBook? See why PARCview is regarded as the #1 ProcessBook alternative.

Customer-Centric Development & Support

At dataPARC, above everything is the customer and their very real, timely, practical needs. dataPARC’s strategy involves a high attentiveness to the customer’s needs and solving problems quickly.

dataPARC employs many SMEs serving in key process engineering support roles for operating companies in the industry. Over the years, dataPARC’s user features and overall system architecture have been shaped by these SMEs and customers. dataPARC is built by end users, for end users.

At dataPARC we sell more than software; we sell our services to help build trends, graphics, and other displays to get your system off the ground. Our engineers and support staff are available to help implement new projects and offer continual support.

With PI, to get the same displays created, customers would have to outsource to a 3rd party. dataPARC is a one-stop shop.

Conclusion

dataPARC and PI have a lot in common; however, dataPARC has the upper hand where it counts – user experience, speed of data, and cost. dataPARC is simple, fast, and effective.

The advantages of dataPARC over PI continue to grow with every new feature and update, and those features are driven by users and customers.


Download the Guide

Discover top alternatives to PI’s ProcessBook and PI Vision analytics toolkits.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

“Where do we start in digitizing our manufacturing operations?” one may ask. While there is no easy answer, the solution lies in starting not from the top down, but from the ground up, focusing on the digital transformation roles and responsibilities of the key people in your plant.

Digital transformation in process manufacturing is not only a priority, but an essential step forward as the world encounters and adapts to a more digital reality. To put it simply, if you do not adjust your processes to embrace digital change, your competitors will (and may already have) outproduce, outshine, and outsell you.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Transformation Teams

Digital change has been slow until now, though it has been steady. PLC and DCS systems were manufacturing’s digital beginnings, and thankfully there is much more available now to further digitize operations: tools to minimize downtime, improve your process, enhance data management, data sharing, and reporting, and increase profitability. A truly connected enterprise will be adaptable and agile, allowing it to keep abreast of changes in the operating environment.

Plant roles play an essential part in the digitization of process manufacturing, and every role can contribute to a seamless digital transformation within your facility. Each role embraces digital change and transforms the process from the inside out. By focusing on these roles and the duties and responsibilities within each of them, plant digitization can turn your operation into a well-oiled machine whose outcomes everyone depends on and benefits from.

“Where do we start in digitizing our operations?” one may ask. While there is no easy answer, the solution lies in starting not from the top down, but from the ground up, with each role’s responsibilities and contributions enhancing the others, adding to and building on the next, for a comprehensive digital enterprise and solid, data-based reporting.

Integrating sources of plant data is a good place to start, along with the processes themselves becoming digitized for maximum outcomes. In this article we will focus on the various roles in the plant, their responsibilities and how each one can contribute to digital transformation.

Digital Transformation Roles & Responsibilities

The Operator

The Operator’s Role in Digital Transformation

Checking process conditions (temperatures, pressures, line speed, etc.) is an essential task for an operator. These process conditions could have readings directly on the machine, with valves or buttons to adjust as needed. With more and more digital transformation in manufacturing, these process variables are being set up with PLCs to create digital tags. A tag can be read through an OPC DA server and visualized throughout the plant on computers in offices, control rooms, and meeting rooms. Tags can also be set up with a DCS to control the process from the control room rather than having to walk the floor to adjust speeds or valves.

The process variables need to be monitored to produce quality products. There are ranges for each process variable and additive when making a product; if these get out of range, the final product could be outside the final specification. Limits can be drawn on gauges, written in an SOP (Standard Operating Procedure), or set up as alarm limits. These alarms could appear either on the DCS or on a data visualization screen to alert the operator that a variable needs attention.

To consistently make quality product, operators must communicate with the lab tech to verify the product is within spec. This communication between the lab and operators has traditionally been done verbally: walkie talkies, phone calls, etc. To digitize this process, the lab tech enters tested values into a data visualization program or a laboratory information management system (LIMS) database. These values can be displayed on a dashboard with the specifications next to them. The operator can then see when values are out of spec and adjust the process, or see when values are trending up or down and adjust the process to keep the product within specification before bad product is made.
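The limit checks described above are conceptually simple. Here is a trivial Python sketch of a high/low check against spec limits, with a hypothetical tag and limits; real systems add deadbands, delays, and acknowledgment handling:

```python
# Trivial high/low limit check; tag name and limits are hypothetical.
from typing import Optional

def check_alarm(value: float, low: float, high: float) -> Optional[str]:
    if value > high:
        return "HIGH"
    if value < low:
        return "LOW"
    return None

reel_moisture = 8.9
status = check_alarm(reel_moisture, low=6.5, high=8.5)
if status:
    print(f"Reel moisture {reel_moisture} is out of range: {status} alarm")
```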

Operators are also responsible for keeping track of the product and lot being produced. This can be done manually with pen and paper or entered digitally into a database.

At the end of the shift, operators need to pass key information to the next shift. This can be done with a hand-off meeting to discuss things verbally, a physical notebook to log key points, or a digitized version of a notebook. With digitized reports there is an opportunity to relay information to multiple control rooms or locations across the company’s operations at once.

The Lab Technician

The Lab Technician’s Role in Digital Transformation

Lab quality testing is an essential part of process manufacturing. Thorough quality testing of each batch allows for production of the scheduled product. Because other roles such as the process engineer and operator rely on the outcomes of lab testing, getting lab quality data disseminated seamlessly is essential to smooth operations.

Manually testing multiple variables of the product, recording the results, and comparing the finished product to specifications are among the lab technician’s duties. If the lab tech is entering data into a digital system, limits can typically be saved for different products, speeding things up.

The lab tech can manually test the product and enter the results in a program, and the LIMS system will flag the result if it is out of spec. Going further, a lab tech can set up the test, a machine conducts it, and the result is fed to the LIMS system, where the value is flagged if it is out of spec. Performing these tasks digitally is a tremendous time saver and process improvement.

In summary, lab techs are ultimately responsible for testing the final product and passing or failing it to be sold. Digitizing these tests and the corresponding data streamlines and accelerates the entire lab test process.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

The Process Engineer

The Process Engineer’s Role in Digital Transformation

Process engineers, often called by other titles including chemical engineers, have a range of duties including product development, process optimization, documentation of SOPs, setting up automatic controls/PLCs, ensuring equipment reliability, and communicating with superintendents, operators, lab techs, maintenance managers, and customers.

Process engineers monitor the entire manufacturing process on a daily, weekly, and monthly basis to identify improvement opportunities and evaluate the condition of the assets and processes.

Most sites have an existing system for maintenance requests. A physical system may exist where staff handwrite the issue, area, and other important information and hand deliver it to the maintenance department. Alternately, there could be a system set up to email the maintenance department with pictures attached. A program may also be used to submit maintenance requests; such a system provides a unique ticket number, automated status updates, and other key information, and allows engineers or the maintenance department to see history and identify repetitive issues, such as a part needing replacement. Digitizing maintenance can help create a preventative maintenance schedule, so a part is replaced before it degrades and causes sub-par product quality.

Another way for engineers to monitor the process is through data visualization. When data is stored, the history can be viewed, and users can identify irregularities, trends, and cycles in the process to help identify root cause when upsets occur. Engineers might set up their own alarms, separate from operator alarms, to keep track of events and determine if an optimization project is possible.

Process optimization and product development are important tasks for process engineers. Engineers may develop and conduct trials to continually optimize the process and develop new products. They often use the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) method to do this. The Define step is typically completed by a stakeholder, a superintendent, or the plant manager. Once the project is defined, the engineer moves into the Measure step.

The Measure step can take many forms: physically measuring, counting, or documenting a process. Collecting the necessary data can be time-consuming. With more of the data being digitized, data collection is already done.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Engineers need to organize and collect data to analyze it. Once the data is collected, it can be put into Excel, Minitab, or other programs to be analyzed. By doing comparisons and statistical analysis, with the help of process knowledge, an improvement plan can be created.

Engineers will work with operators and lab techs to work through the improvement plan. Typically, the plans will include information that the operators and lab techs have to record and give back to the engineer to determine if an improvement was made. The plans can be printed off and handed to those involved, and the necessary data collected on sheets of paper.

If a program, graphic, or database is being used, the engineer can create the improvement plan within that program, and the operator or lab tech can enter the necessary values directly, making the data accessible to the engineer instantly. After the project is complete and an improvement has been made, an SOP is written and saved.

In this role, the engineer needs to communicate the change to all necessary personnel. The SOP could be saved locally on each computer, in a shared file, on SharePoint, or as a link within a program that has versioning so users can go back and see what changes were made and when. To alert others of the changes, an email can be sent out to supervisors to communicate to their shift, or, if a digital notebook is available, a message can be sent to the necessary areas with a link to the newly updated SOP.

As mentioned above, engineers can be responsible for writing and maintaining SOPs. SOPs can be stored in binders in the control room, saved on control room computers, or a shared folder. There are also programs that can save versions of documents so users can see what changed and when. Operators and lab techs would then use the SOPs when performing a task or testing. It is important for operators to be notified of changes made to the SOP. This could be the engineer sending out an email, or a program with a preset list sending updates to emails. Engineers could also have a notification set up on the operator’s computer.

The Plant Manager

The Plant Manager’s Role in Digital Transformation

Plant managers wear many hats and the hats they wear continue to multiply as plants face complexities and pressure to produce more with increased profitability.

Hiring good people – the key to running a digital-forward organization is staffing with people in mind. Good, productive people run plants with data, not hunches or best guesses. They make data-driven decisions that are best for the organization and identify root causes through careful anomaly detection and analysis.

Good leaders know that to truly digitize operations at a plant you must start from the bottom, that every role is an important component of the whole, and that every person’s contribution matters.

Ron Baldus, CTO at dataPARC, advises that “clean data” is the key to successful digital operations. What exactly does clean data mean, one might ask? Clean data is pure, data-driven data, not hunch-driven data: one version of the truth. With clean data, plant managers and those who work for them can continue to make data- and profit-driven decisions. A good data visualization software that connects all data sources is a good place to start. With this connected software, extensive reports pulling on many data sources can be run to give the plant manager a key report with the important information visible. If there is a problem in operations, this reporting allows the plant manager to identify the problem and task engineers and operators with getting to the source and making the necessary adjustments, all based on fact and not best guesses.

Plant managers know that there are many important moving parts to a plant operation and getting reliable data is the lifeblood of a successful, profitable operation. The more digital the plant becomes, the cleaner data flows to all departments and roles and allows troubleshooting, reporting, and forecasting to be more and more seamless.

Another advantage of digitization at the plant manager level is the transfer of skill, information, and expertise at the subject matter expert (SME) level. Many SMEs are getting close to retirement, and with them goes a wealth of information, experience, and methodology that is at risk of being lost. Through the digitization of reports and operations, these methods can be preserved and passed on to the next person assuming the role and responsibility, whether it is an operator, an engineer, or another essential role.

Looking Forward

Whether it is the operator, the engineer, the lab tech, or the plant manager, all digital transformation roles and responsibilities in manufacturing contribute to the transformation of the plant. From the bottom up, with effective communication and consistent data, downtime can be minimized, golden runs made more common, and seamless operations a daily reality.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Integrating manufacturing data in a plant is necessary for many reasons. Among the most important is getting relevant data to various departments quickly. In doing so, downtime is reduced, anomalies are identified and corrected, and quality is improved.

Too often, integrations are delayed due to fears of losing data quality during integration, or simply due to the difficulty of finding the time in a 24/7 environment. There are pros and cons to each integration type. In this article we will walk you through the different integrations, what to look out for, and some tips and best practices.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Integrating Historian & ERP Data

Enterprise Resource Planning (ERP) is software used by accounting, procurement, and other groups to track orders, supply chain logistics and accounting data. By adding historian data, ERP systems have a fuller picture of the comprehensive plant operations.


Integrating your historian and ERP data can provide great insight into which processes are affecting quality.

ERP users have access to more information about finished goods, such as the exact time of any major production step or whether there was an issue with production. For instance, if the texture of newsprint is slippery and not up to spec, and because of that cannot be cut properly on the news producer’s rollers, the specific lot can be identified, and the challenge of finding out which lot produced poor-quality paper is no longer a roadblock.

In a nutshell, the historian-to-ERP integration means departments outside of production get all the data they need without engaging another resource. The challenges include a time-consuming integration where erroneous values can have a wide-ranging impact, so double-checking values is essential.

Integrating Historian & MES Data

Connecting a historian to an MES (Manufacturing Execution System) expands the capabilities of the MES. Manufacturing execution systems are computerized systems used in manufacturing to track and document the transformation of raw materials into finished goods, obviously an essential component of manufacturing data capture. The historian provides a historical log of all production data rather than only current or near-past values. Being able to pull large amounts of historical data along with data from an MES when needed allows for projections that are not possible without this long-term perspective and additional data.

A relevant example of an MES-to-historian benefit is an ethanol plant that would like to examine seasonal (winter vs. summer) variability in fermentation rates. The historian has all of this data, and the MES allows the user to pull out data only for relevant times.


Integrating MES data into a historian provides access to years and years of data and allows for long-term analysis.

Using product definitions from the MES along with the comprehensive history of production runs for a given product line or product type, without manually filtering all the historical data, is key to fast troubleshooting with this integration type. Historian-to-MES integrations help to reduce waste and decrease the time it takes to solve an issue. Like the historian-to-ERP solution, the historian-to-MES integration takes significant work and resources, but the benefits are immediately evident and realized.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Integrating LIMS & ERP Data

A Laboratory Information Management System (LIMS) contains all testing and quality information from a plant’s testing labs. Plants often have labs for quality testing. Federal regulations and standards often dictate a test’s values or results, upon which a batch’s success and quality ultimately depend. Testing is done at various stages of production, including the final and most important stage. Certificates of analysis are common documents that ensure the safety and quality of tested batches. LIMS-to-ERP integration is especially important for the food and beverage industry, which depends on testing to ensure its product is safe for human consumption.


By integrating LIMS and ERP data it’s easy to identify a specific out-of-spec batch or product run for root cause analysis.

Batch quality data from LIMS systems allows ERP users, who could be in accounting or procurement departments, to build documents and reports that share data certifying the quality of shipped product. This integrated data also gives customer reps immediate access to data about shipped product. The LIMS-to-ERP integration is very important, as so many LIMS departments still rely on a paper trail, which can be a tremendous holdup to production. As with the historian-to-ERP integration, the LIMS-to-ERP integration must have accurate data to provide site-wide value, so double checking is necessary.

Integrating LIMS & Historian Data

Just like historical process data from assets, testing data is very useful when troubleshooting a production issue. The LIMS system, as explained earlier, stores all of the testing data from the lab. Sending LIMS data to the historian gives users a greater understanding of the production process and lab values, providing a fuller picture and deeper analysis of the issue.


Integrating LIMS and historian data is one of the most effective ways to analyze how a process affects product quality.

An example could be when paper brightness is out of spec: lab data can shed light on and bring attention to the part of the process that needs adjustment. Alerts can be set up within the historian, giving engineers more time to adjust the process to meet quality. Past testing values are useful when comparing production runs and bring awareness to patterns that production data alone may not reveal. As with any integration, LIMS to historian requires planning, an engaged team, and milestones to check in on the progress and success of the integration.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Integrating CMMS (Computerized Maintenance Management Systems) & ERP Data

Maintenance is a large and necessary part of plant operations. Maintenance records and work order information are often stored in a Computerized Maintenance Management System (CMMS). The CMMS system has comprehensive information that by itself cannot be accessed by departments that may need to learn more details about the specifics of the maintenance.

By connecting the CMMS to an ERP system, ERP users will have access to more data about the finished product. Users can check to see if there were any maintenance issues around the time of the production. Facilities with lengthy scheduled shutdowns like an oil refinery will need to plan out how much gasoline or other fuel to keep in storage to meet their customer obligations.


By integrating maintenance and ERP data, we’re able to investigate an out-of-spec product run and note that there was a maintenance event that likely caused the issue.

Knowing about shutdowns both planned and unplanned allows the user to better plan out both customer orders and shipping. Anticipating the schedule for planned repairs is also useful for financial planning and forecasting. Users with access to historical work order information can better understand any issues that might come up and gives a bigger glimpse into the physical repair and the associated costs and impact. Integrating these systems can prove to be some of the hardest integrations simply because the data types can vary so much.

The key to a successful CMMS-to-ERP integration is getting the necessary leadership on board and having a detailed roadmap and plan with regular team check-ins so that obstacles can be addressed immediately.

Integrating Field Data Capture System & Historian Data

The remote nature of field data capture systems means that this data is often siloed and very slow and difficult to access. Field data is just that: captured in the field, often pieced together from manual entries, frequently on paper. Various roles collect this data, and it must be utilized collectively to have any value. Field data types such as temperature, quality, and speed must be consistent when entered, and even more so when moved on to a historian.

Though often cumbersome to collect, compile and enter, field data in a historian can be enormously empowering to an engineer. For example, oil wells in the Canadian oil sands can be 50 to 200 miles from the nearest human operator. The more data the operator knows about these wells, the less travel they spend checking up on each well.


Integrating field data into a historian provides reliable access to long-term data from previously siloed wells.

Connecting field data to a historian also increases the amount of data an engineer can use during troubleshooting. The data in the field is vital to reducing downtime and managing product quality. Sharing that data with the historian gives the data a broader audience where comparison and analysis can be made, resulting in less downtime and greater productivity.

Looking Forward

When integrating manufacturing data, the overriding theme and result is digital data empowerment. When important plant data can flow seamlessly from one person, system, or department to another, better decisions can be made through better analysis, which ultimately leads to better operations, less downtime, and greater profitability. It is important to understand the full range of data management and connectivity options available and the pros and cons of each. Various brands of each solution are on today’s market. Ideally, all sources of plant data can be connected and disseminated effectively for maximum efficiency and profitability.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The digital transformation – everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally that we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, certain notable scientists and inventors helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.
