Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

"Where do we start in digitizing our manufacturing operations?" one may ask. While there is no easy answer, the solution lies in starting not from the top down, but from the ground up, focusing on the digital transformation roles and responsibilities of the key people in your plant.

Digital transformation in process manufacturing is not only a priority but an essential step forward as industry adapts to an increasingly digital world. To put it simply, if you do not adjust your processes to embrace digital change, your competitors will outproduce, outshine, and outsell you (and may already have).

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

Get the Guide

Transformation Teams

Digital change has been slow until now, though it has been steady. PLC and DCS systems were manufacturing's digital beginnings, and thankfully there is much more available now to further digitize operations: tools to minimize downtime, improve your process, enhance data management, data sharing, and reporting, and increase profitability. A truly connected enterprise is adaptable and agile, allowing it to keep abreast of changes in the operating environment.

Plant roles play an essential part in the digitization of process manufacturing, and all can contribute to a seamless digital transformation within your facility. Each role embraces digital change and transforms the process from the inside out. By focusing on these roles and the duties and responsibilities within each of them, plant digitization can produce a well-oiled machine whose outcomes everyone depends on and benefits from.

"Where do we start in digitizing our operations?" one may ask. While there is no easy answer, the solution does lie in starting not from the top down, but from the ground up, with each role's responsibilities and contributions enhancing the others, adding to and building on the next, for a comprehensive digital enterprise and solid, data-based reporting.

Integrating sources of plant data is a good place to start, along with digitizing the processes themselves for maximum outcomes. In this article we will focus on the various roles in the plant, their responsibilities, and how each one can contribute to digital transformation.

Digital Transformation Roles & Responsibilities

The Operator

The Operator’s Role in Digital Transformation

Checking process conditions (temperatures, pressures, line speed, etc.) is an essential task for an operator. These process conditions may have readings directly on the machine, with valves or buttons to adjust as needed. With more and more digital transformation in manufacturing, these process variables are being set up in PLCs to create digital tags. A tag can be read through an OPC DA server and visualized throughout the plant on computers in offices, control rooms, and meeting rooms. Tags can also be set up with a DCS to control the process from the control room rather than having to walk the floor to adjust speeds or valves.
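As a concrete sketch of reading such tags, here's a minimal example using the open-source OpenOPC library; the server ProgID and tag names are illustrative assumptions, not a real plant configuration.

```python
# Minimal sketch: reading PLC process variables through an OPC DA server.
# Assumes the open-source OpenOPC library (Windows/DCOM); the server ProgID
# and tag names are hypothetical examples, not a real plant configuration.
import OpenOPC

opc = OpenOPC.client()
opc.connect('Matrikon.OPC.Simulation')  # example simulation server ProgID

# Hypothetical tags for line temperature, pressure, and speed.
for tag in ['LINE1.TEMP_PV', 'LINE1.PRESS_PV', 'LINE1.SPEED_PV']:
    value, quality, timestamp = opc.read(tag)
    print(f'{tag}: {value} ({quality}) at {timestamp}')

opc.close()
```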

The process variables need to be monitored to produce quality products. There are ranges for each process variable and additive when making a product; if these drift out of range, the final product could fall outside the final specification. Limits can be drawn on gauges, written into an SOP (Standard Operating Procedure), or configured as alarm limits. These alarms can appear on the DCS or on a data visualization screen to alert the operator that a variable needs attention.
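A limit check like this is straightforward to sketch in code. Here's a minimal example, reusing the hypothetical tag names from the sketch above; the ranges and units are invented, not from a real SOP.

```python
# Sketch of limit-based alarming: compare each process variable against
# its configured range and flag anything out of bounds. The tags, limits,
# and units are illustrative, not from a real SOP.
LIMITS = {
    'LINE1.TEMP_PV':  (180.0, 195.0),   # (low, high), deg F
    'LINE1.PRESS_PV': (30.0, 45.0),     # psi
    'LINE1.SPEED_PV': (950.0, 1050.0),  # ft/min
}

def check_alarms(readings):
    """Return (tag, value, message) for every out-of-range variable."""
    alarms = []
    for tag, value in readings.items():
        low, high = LIMITS[tag]
        if value < low:
            alarms.append((tag, value, f'below low limit {low}'))
        elif value > high:
            alarms.append((tag, value, f'above high limit {high}'))
    return alarms

print(check_alarms({'LINE1.TEMP_PV': 197.2, 'LINE1.PRESS_PV': 38.0,
                    'LINE1.SPEED_PV': 1010.0}))
# [('LINE1.TEMP_PV', 197.2, 'above high limit 195.0')]
```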

To consistently make quality product, operators must communicate with the lab tech to verify the product is within spec. Communication between the lab and operators has traditionally been verbal: walkie-talkies, phone calls, and so on. To digitize this process, the lab tech enters tested values into a data visualization program or a laboratory information management system (LIMS) database. These values can be displayed on a dashboard with the specifications next to them. The operator can then see when values are out of spec and adjust the process, or spot values trending up or down and correct the process to keep the product within specification before bad product is made.

Operators are also responsible for keeping track of the product and lot being produced. This can be done manually with pen and paper or entered digitally into a database.

At the end of the shift, operators need to pass key information to the next shift. This can be done with a hand-off meeting to discuss things verbally, a physical notebook to log key points, or a digital version of a notebook. With digital reports, there is the opportunity to relay information to multiple control rooms or company locations at once.

The Lab Technician

The Lab Technician’s Role in Digital Transformation

Lab quality testing is an essential part of process manufacturing. Thorough quality testing of each batch allows production of the scheduled product. Because other roles such as process engineer and operator rely on the outcomes of lab testing, getting lab quality data disseminated seamlessly is essential to smooth operations.

Manually testing the product, recording the results, and comparing multiple variables of the finished product against specifications are among the lab technician's duties. If the lab tech is entering data into a digital system, limits can typically be saved for each product, speeding things up.

In a partially digitized lab, the lab tech manually tests the product and enters the results into a program, and the LIMS flags any result that is out of spec. Going a step further, the lab tech can set up the test, a machine conducts it, and the result is fed automatically to the LIMS, where out-of-spec values are flagged. Performing these tasks digitally is a tremendous time saver for the entire process.
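To make the flagging concrete, here is a minimal sketch of a per-product spec check of the kind a LIMS performs; the products, tests, and limits are invented for illustration.

```python
# Sketch of a LIMS-style spec check: limits are saved per product, so the
# tech only enters the product code and measured results. The products,
# tests, and limits below are made up for illustration.
SPECS = {
    'NEWSPRINT_45': {'brightness': (58.0, 62.0), 'caliper_mil': (2.8, 3.2)},
    'NEWSPRINT_48': {'brightness': (60.0, 64.0), 'caliper_mil': (3.0, 3.4)},
}

def flag_results(product, results):
    """Compare entered test results to the saved spec; return failures."""
    failures = {}
    for test, value in results.items():
        low, high = SPECS[product][test]
        if not (low <= value <= high):
            failures[test] = {'value': value, 'low': low, 'high': high}
    return failures

print(flag_results('NEWSPRINT_45', {'brightness': 57.1, 'caliper_mil': 3.0}))
# {'brightness': {'value': 57.1, 'low': 58.0, 'high': 62.0}} -> out of spec
```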

In summary, lab techs are ultimately responsible for testing the final product and passing or failing it to be sold. Digitizing these tests and the corresponding data streamlines and accelerates the entire lab test process.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

The Process Engineer

The Process Engineer’s Role in Digital Transformation

Process engineers, often known by other titles including chemical engineer, typically have a range of duties: product development, process optimization, documentation of SOPs, setting up automatic controls and PLCs, ensuring equipment reliability, and communicating with superintendents, operators, lab techs, maintenance managers, and customers.

Process engineers monitor the entire manufacturing process on a daily, weekly, and monthly basis to identify improvement opportunities and evaluate the condition of the assets and processes.

Most sites have an existing system for maintenance requests. It may be physical: staff handwrite the issue, area, and other important information and hand-deliver it to the maintenance department. Alternatively, there could be a system for emailing the maintenance department with pictures attached, or a dedicated program for submitting requests. Such a program provides a unique ticket number, automated status updates, and other key information, and it allows engineers or the maintenance department to review history and identify repetitive issues, such as a part that repeatedly needs replacement. Digitizing maintenance can help create a preventative maintenance schedule, so the part is replaced before its failing performance results in sub-par product quality.
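A digital maintenance request needs only a few structured fields to start paying off. Here's a minimal sketch; the field names are assumptions, not any particular CMMS schema.

```python
# Minimal sketch of a digital maintenance-request record: a unique ticket
# number, a status history, and enough fields to spot repeat issues.
# Field names are assumptions, not any particular CMMS schema.
from dataclasses import dataclass, field
from datetime import datetime
from itertools import count

_ticket_numbers = count(1)

@dataclass
class MaintenanceTicket:
    area: str
    equipment: str
    issue: str
    ticket_no: int = field(default_factory=lambda: next(_ticket_numbers))
    status: str = 'open'
    history: list = field(default_factory=list)

    def update(self, status):
        """Record a status change with a timestamp."""
        self.history.append((datetime.now(), status))
        self.status = status

# Two tickets on the same pump hint at a repetitive issue worth adding
# to a preventative-maintenance schedule.
tickets = [MaintenanceTicket('Extraction', 'Pump P-101', 'Seal leak'),
           MaintenanceTicket('Extraction', 'Pump P-101', 'Seal leak')]
repeats = [t for t in tickets
           if (t.equipment, t.issue) == ('Pump P-101', 'Seal leak')]
print(f'{len(repeats)} seal-leak tickets on P-101')
```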

Another way for engineers to monitor the process is through data visualization. When data is stored, the history can be viewed, and users can identify irregularities, trends, and cycles in the process to help identify root cause when upsets occur. Engineers might set up their own alarms, separate from operator alarms, to keep track of events and determine if an optimization project is possible.

Process optimization and product development are important tasks for process engineers. Engineers may develop and conduct trials to continually optimize the process and develop new products. They often use the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) method to do this. The Define step is typically completed by a stakeholder, such as a superintendent or plant manager. Once the project is defined, the engineer moves into the Measure step.

The Measure step can take many forms: physically measuring, counting, or documenting a process. Collecting the necessary data can be time-consuming. The more of the plant's data that is already digitized, the more of that collection work is already done.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Engineers need to organize and collect data in order to analyze it. Once the data is collected, it can be put into Excel, Minitab, or another program to be analyzed. Through comparisons and statistical analysis, combined with process knowledge, an improvement plan can be created.
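As a flavor of that statistical analysis, here's a minimal sketch of a before/after trial comparison using a two-sample t-test, the kind of check often run in Minitab; SciPy is assumed, and the numbers are invented for illustration.

```python
# Sketch of a before/after trial comparison using a two-sample t-test.
# Assumes SciPy; the brightness numbers are invented for illustration.
from scipy import stats

baseline = [61.2, 60.8, 61.5, 60.9, 61.1, 61.4, 60.7, 61.0]  # before change
trial =    [62.1, 62.4, 61.9, 62.3, 62.0, 62.5, 61.8, 62.2]  # after change

t, p = stats.ttest_ind(trial, baseline)
print(f't = {t:.2f}, p = {p:.4f}')
if p < 0.05:
    print('Statistically significant shift: the improvement looks real.')
else:
    print('No significant difference; keep investigating.')
```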

Engineers work with operators and lab techs to carry out the improvement plan. Typically, the plan includes information that the operators and lab techs have to record and return to the engineer to determine whether an improvement was made. The plans can be printed off and handed to those involved, and the necessary data collected on sheets of paper.

If a program, graphic, or database is being used, the engineer can create the improvement plan within that program, and the operator or lab tech can enter the necessary values directly, making the data accessible to the engineer instantly. After the project is complete and an improvement has been made, an SOP is written and saved.

The engineer then needs to communicate this change to all necessary personnel. The SOP could be saved locally on each computer, in a shared file, on SharePoint, or as a link within a program with versioning, so users can go back and see what changes were made and when. To alert others of the changes, an email can be sent to supervisors to communicate to their shifts, or, if a digital notebook is available, a message can be sent to the necessary areas with a link to the newly updated SOP.

As mentioned above, engineers can be responsible for writing and maintaining SOPs. SOPs can be stored in binders in the control room, saved on control room computers, or kept in a shared folder. There are also programs that save versions of documents so users can see what changed and when. Operators and lab techs then use the SOPs when performing a task or test. It is important for operators to be notified of changes made to an SOP. This could be the engineer sending out an email, a program with a preset distribution list sending updates, or a notification set up on the operator's computer.

The Plant Manager

The Plant Manager’s Role in Digital Transformation

Plant managers wear many hats, and the hats they wear continue to multiply as plants face growing complexity and pressure to produce more with increased profitability.

Hiring good people – the key to running a digital-forward organization is staffing with people in mind. Good, productive people run plants with data, not hunches or best guesses. They make data-driven decisions that are best for the organization and identify root causes through careful anomaly detection and analysis.

Good leaders know that to truly digitize operations at a plant you must start from the bottom, that every role is an important component of the whole, and that every person's contribution matters.

Ron Baldus, CTO at dataPARC, advises that "clean data" is the key to successful digital operations. What exactly does clean data mean, one might ask? Clean data is pure, data-driven data rather than hunch-driven data: one version of the truth. With clean data, plant managers and those who work for them can continue to make data- and profit-driven decisions. A good data visualization package that connects all data sources is a good place to start. With this connected software, extensive reports drawing on many data sources can be run to give the plant manager a key report with the important information visible. If there is a problem in operations, this reporting allows the plant manager to identify it and task engineers and operators with getting to the source and making the necessary adjustments, all based on fact and not best guesses.

Plant managers know that there are many important moving parts to a plant operation, and that reliable data is the lifeblood of a successful, profitable operation. The more digital the plant becomes, the more cleanly data flows to all departments and roles, making troubleshooting, reporting, and forecasting ever more seamless.

Another advantage of digitization at the plant manager level is the transfer of skill, information, and expertise from subject matter experts (SMEs). Many SMEs are getting close to retirement and hold a wealth of information, experience, and methodology that is at risk of being lost. Through the digitization of reports and operations, their methods can be preserved and passed on to the next person assuming the role and its responsibilities, whether an operator, an engineer, or another essential role.

Looking Forward

Whether it is the operator, the engineer, the lab tech, or the plant manager, all digital transformation roles and responsibilities in manufacturing contribute to the transformation of the plant. From the bottom up, with effective communication and consistent data, downtime can be minimized, golden runs made more common, and seamless operations a daily reality.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Integrating manufacturing data in a plant is necessary for many reasons. Among the most important is getting relevant data to various departments quickly. In doing so, downtime is reduced, anomalies are identified and corrected, and quality is improved.

All too often, integrations are delayed by fears of losing data quality during the transition, or simply by the difficulty of finding the time in a 24/7 environment. There are pros and cons to each integration type. In this article we will walk you through the different integrations, what to look out for, and tips and best practices.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

Get the Guide

Integrating Historian & ERP Data

Enterprise Resource Planning (ERP) is software used by accounting, procurement, and other groups to track orders, supply chain logistics and accounting data. By adding historian data, ERP systems have a fuller picture of the comprehensive plant operations.


Integrating your historian and ERP data can provide great insight into which processes are affecting quality.

ERP users gain access to more information about finished goods, such as the exact time of any major production step or whether there was an issue with production. For instance, if the texture of newsprint is slippery, not up to spec, and therefore cannot be cut properly on the news producer's rollers, the specific lot can be identified, and the challenge of finding out which lot produced the poor-quality paper is no longer a roadblock.

In a nutshell, the historian-to-ERP integration means departments outside of production get all the data they need without engaging another resource. The challenges include a time-consuming integration in which erroneous values can have a wide-ranging impact, so double-checking values is essential.
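To sketch what the combined data makes possible, the example below attaches historian process data to ERP lot records by each lot's production window; the table layouts and the moisture tag are assumptions for illustration.

```python
# Minimal sketch: join historian process data to ERP lot records by each
# lot's production window, so a suspect lot can be traced back to process
# conditions. Table layouts and the moisture tag are illustrative.
import pandas as pd

lots = pd.DataFrame({
    'lot_id': ['A101', 'A102'],
    'start': pd.to_datetime(['2021-03-01 00:00', '2021-03-01 08:00']),
    'end':   pd.to_datetime(['2021-03-01 08:00', '2021-03-01 16:00']),
})

historian = pd.DataFrame({
    'timestamp': pd.date_range('2021-03-01', periods=16, freq='h'),
    'roll_moisture': [8.1, 8.0, 8.2, 8.1, 8.3, 8.2, 8.1, 8.0,
                      9.4, 9.6, 9.5, 9.7, 9.5, 9.6, 9.4, 9.5],
})

# Average the historian tag over every lot's production window.
for _, lot in lots.iterrows():
    window = historian[(historian['timestamp'] >= lot['start']) &
                       (historian['timestamp'] < lot['end'])]
    print(lot['lot_id'], 'mean roll_moisture =',
          round(window['roll_moisture'].mean(), 2))
# Lot A102's elevated moisture would explain an out-of-spec, slippery sheet.
```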

Integrating Historian & MES Data

Connecting a data historian to an MES (Manufacturing Execution System) expands the capabilities of the MES. Manufacturing execution systems are computerized systems used in manufacturing to track and document the transformation of raw materials into finished goods, obviously an essential component of manufacturing data capture. The historian provides a historical log of all production data, rather than only current or near-past values. Being able to pull large amounts of historical data along with data from an MES, when needed, allows for projections that are not possible without this long-term perspective and additional data.

A relevant example of an MES-to-historian benefit is an ethanol plant that would like to examine seasonal (winter vs. summer) variability in fermentation rates. The historian has all of this data, and the MES allows the user to pull out data only for the relevant times, as in the sketch below.
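A minimal sketch of that seasonal comparison, assuming MES batch start times and fermentation rates have already been pulled into one table (the numbers are invented):

```python
# Sketch: compare winter vs. summer fermentation rates using MES batch
# start times. The DataFrame layout and numbers are illustrative.
import pandas as pd

df = pd.DataFrame({
    'batch_start': pd.to_datetime(['2020-01-15', '2020-02-10', '2020-07-08',
                                   '2020-08-02', '2021-01-20', '2021-07-15']),
    'ferm_rate': [0.92, 0.90, 1.08, 1.10, 0.91, 1.07],  # illustrative units
})

df['season'] = df['batch_start'].dt.month.map(
    lambda m: 'winter' if m in (12, 1, 2)
    else 'summer' if m in (6, 7, 8) else 'other')

print(df.groupby('season')['ferm_rate'].agg(['mean', 'std', 'count']))
```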


Integrating manufacturing data from an MES into a historian provides access to years and years of data and allows for long-term analysis.

Using product definitions from the MES to retrieve the comprehensive history of production runs for a given product line or product type, without manually filtering all historical data, is the key to fast troubleshooting with this integration type. Historian-to-MES integrations help reduce waste and decrease the time it takes to solve an issue. Like the historian-to-ERP solution, the historian-to-MES integration takes significant work and resources, but the benefits are immediately evident and realized.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Integrating LIMS & ERP Data

A Laboratory Information Management System (LIMS) contains all testing and quality information from a plant's testing labs. Federal regulations and standards often dictate the test values or results upon which a batch's success and quality ultimately depend. Testing is done at various stages of production, including the final stage, which is the most important. Certificates of analysis are common documents that attest to the safety and quality of tested batches. LIMS-to-ERP integration is especially important for the food and beverage industry, which depends upon testing to ensure its product is safe for human consumption.


By integrating LIMS and ERP data it’s easy to identify a specific out-of-spec batch or product run for root cause analysis.

Batch quality data from the LIMS allows ERP users, whether in accounting or procurement, to build documents and reports that certify the quality of shipped product. This integrated data also gives customer reps immediate access to data about shipped product. The LIMS-to-ERP integration is especially valuable because so many labs still rely on a paper trail, which can be a tremendous holdup to production. As with the historian-to-ERP integration, the LIMS-to-ERP integration must have accurate data to provide site-wide value, so double-checking is necessary.

Integrating LIMS & Historian Data

Just like historical process data from assets, testing data is very useful when troubleshooting a production issue. The LIMS, as explained earlier, stores all of the testing data from the lab. Sending LIMS data to the historian lets users view production data and lab values together, providing a fuller picture and deeper analysis of the issue.


Integrating LIMS and historian data is one of the most effective ways to analyze how a process affects product quality.

For example, when paper brightness is out of spec, lab data can shed light on and bring attention to the part of the process that needs adjustment. Alerts can be set up within the historian, giving engineers more time to adjust the process to meet quality. Past testing values are useful when comparing production runs and reveal patterns that production data alone may not. As with any integration, LIMS to historian requires planning, an engaged team, and milestones for checking the progress and success of the integration.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Integrating CMMS (Computerized Maintenance Management Systems) & ERP Data

Maintenance is a large and necessary part of plant operations. Maintenance records and work order information are often stored in a Computerized Maintenance Management System (CMMS). The CMMS contains comprehensive information that, on its own, cannot be accessed by departments that may need to learn more about the specifics of the maintenance performed.

By connecting the CMMS to an ERP system, ERP users gain access to more data about the finished product. Users can check whether there were any maintenance issues around the time of production. Facilities with lengthy scheduled shutdowns, such as an oil refinery, need to plan how much gasoline or other fuel to keep in storage to meet their customer obligations.


By integrating maintenance and ERP data we're able to investigate an out-of-spec product run and note that there was a maintenance event that likely caused the issue.

Knowing about both planned and unplanned shutdowns allows the user to better plan customer orders and shipping. Anticipating the schedule for planned repairs is also useful for financial planning and forecasting. Users with access to historical work order information can better understand any issues that might come up, gaining a bigger glimpse into the physical repair and the associated costs and impact. These can prove to be some of the hardest integrations, simply because the data types vary so much.

The key to a successful CMMS-to-ERP integration is getting the necessary leadership on board and having a detailed roadmap and plan, with regular team check-ins so that obstacles can be addressed immediately.

Integrating Field Data Capture System & Historian Data

The remote nature of field data capture systems means this data is often siloed and very difficult and slow to access. Field data is just that: captured in the field, and often pieced together from manual entries, frequently on paper. Various roles collect this data, and it must be used collectively to have any value. Field data types such as temperature, quality, and speed must be entered consistently, and even more so when moved into a historian.

Though often cumbersome to collect, compile, and enter, field data in a historian can be enormously empowering to an engineer. For example, oil wells in the Canadian oil sands can be 50 to 200 miles from the nearest human operator. The more the operator knows about these wells, the less time they spend traveling to check up on each one.


Integrating manufacturing data from the field into a historian provides reliable access to long-term data from previously siloed wells.

Connecting field data to a historian also increases the amount of data an engineer can use during troubleshooting. Data from the field is vital to reducing downtime and managing product quality. Sharing that data with the historian gives it a broader audience where comparisons and analysis can be made, resulting in less downtime and greater productivity.

Looking Forward

When integrating manufacturing data, the overriding theme and result is digital data empowerment. When important plant data can flow seamlessly between people, systems, and departments, better analysis leads to better decisions, which ultimately means better operations, less downtime, and greater profitability. It is important to understand the full range of data management and connectivity options available and the pros and cons of each; various brands of each solution are on today's market. Ideally, all sources of plant data are connected and disseminated effectively for maximum efficiency and profitability.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The digital transformation – everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally, we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, certain notable scientists and inventors helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.


Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Deviation analysis is a routine form of troubleshooting performed at process manufacturing facilities around the world. When speed is imperative, a robust deviation detection system, along with a good process for analyzing the resulting data, is essential for solving problems quickly.

A properly configured deviation detection system allows nearly everyone involved in a manufacturing process to collaborate and quickly identify the root causes of unexpected production issues.

In a previous post we wrote about time series anomaly detection methods, and how to set up deviation detection for your process. In this article, we’re going to be focusing on how to actually analyze the data to pinpoint the source of a deviant process.


Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Deviation Analysis: Reviewing the Data

If you read our other article about anomaly detection methods, you'll recall that we covered setting up deviation detection, including the following steps:

  1. Selecting tags
  2. Filtering downtime
  3. Identifying “good” operating data
  4. Identifying “bad” operating data

The fifth step is to actually analyze the data you’ve just produced, so you can identify where your problem is occurring.

But, before we get into analysis, let’s review the data we’ve produced.

The examples below show the data we’ve produced with dataPARC’s process data analytics software, but the analysis process would be similar if you were doing this in your own custom-built Excel workbook.

Selecting Tags

Here we have the tags we identified. In our case, we were able to just drag over the entire process area from our display graphic and they all ended up in our application here. We could have also added the tags manually or even exported the data from our historian and dumped it into a spreadsheet.


We pulled data from 363 tags associated with our problematic process.

Good Data

Next, we have our "good" data – the data from when our process was running efficiently. You'll see that the values here are averages over a one-month period.


Average data from a month when manufacturing processes were running smoothly.

Bad Data

This is our problem data, narrowed down to the specific two-day period when we first recognized we had an issue.


Bad doggie! I mean… Bad data. Bad!

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Methods of Deviation Detection

Again, you can refer to our article on anomaly detection methods for more details, but in this next part we'll be using four different methods of analysis to try to pinpoint the problem.

The four deviation detection methods we'll be using are listed below (a short computational sketch follows the list):

  1. Absolute Change (%Chg) – The simplest form of deviation detection: comparing a value against the average.
  2. Variability (COVChg) – How much the data varies, or how spread out the data is relative to the average.
  3. Standard Deviation (SDChg) – A standard for control charts. Measures how much the data varies over time.
  4. Multi-Parameter (DModX) – An advanced deviation detection metric showing the difference between expected values and real data, used to evaluate the overall health of the process. The ranges are often rate-dependent.
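Here's a minimal sketch of how the first three metrics might be computed per tag between the good and bad periods. DModX requires a fitted multivariate model, so it's omitted; the CSV layout (one column per tag) is an assumption.

```python
# Minimal sketch of per-tag deviation values: %Chg, COVChg, and SDChg
# between a "good" and a "bad" period. Assumes each CSV holds one column
# per tag; DModX needs a fitted PCA-style model and is omitted here.
import pandas as pd

good = pd.read_csv('good_period.csv')   # the baseline averages come from here
bad = pd.read_csv('bad_period.csv')

g_mean, b_mean = good.mean(), bad.mean()
g_std, b_std = good.std(), bad.std()

metrics = pd.DataFrame({
    'pct_chg': 100 * (b_mean - g_mean) / g_mean,   # absolute change
    'cov_chg': b_std / b_mean - g_std / g_mean,    # variability change
    'sd_chg': (b_std - g_std) / g_std,             # standard deviation change
})

# Sorting by each metric pares hundreds of tags down to a short suspect list.
print(metrics.sort_values('pct_chg', key=abs, ascending=False).head(12))
```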

In the image below you’ll see the deviation values for each method of calculation. Here red means a positive change, and blue means a negative change.


Our four deviation detection methods. Red is positive change in values. Blue is negative value change.

So, if we’re looking for a trouble spot within our manufacturing process, the first thing we’re going to want to do is start to look at the deviation values.

By sorting by the different detection methods, we can begin to identify some patterns, and we can really pare down our list of potential culprits. Just an initial sort by deviation values eliminates all but about a dozen of our tags as suspects.

So, let’s look at tags where the majority of the models show high deviation values. That gives us a place to begin troubleshooting.

Applied Deviation Analysis

For instance, here we have our Cooling Water tag, and in three of the four models we’re seeing that it has a fairly high deviation value. It’s a prime suspect.


So, let’s analyze that, and take a closer look.

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

Within our deviation detection application we can just select the tag and click the “trend” button to bring up the data trend for the Cooling Water tag.

Looking at the trend, it's definitely going up and deviating from the "good" operating conditions. But we also know our process: the cooling water comes from the river, and the river temperature fluctuates with the seasons. So we'll add our River Temp tag to the trend, and sure enough – it looks like it's just a seasonal change.


Pairing our Cooling Water Tmp tag with our River Temp tag. Nope, that’s not it!
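This kind of seasonal check is easy to quantify. A quick sketch, assuming both tags have already been exported to a table (the file and tag names are hypothetical):

```python
# Quick sketch: correlate the suspect tag with river temperature to test
# the seasonal explanation. The file and tag names are hypothetical.
import pandas as pd

df = pd.read_csv('tag_history.csv', parse_dates=['timestamp'])

corr = df['COOLING_WATER_TMP'].corr(df['RIVER_TEMP'])
print(f'Correlation with river temperature: {corr:.2f}')
# A correlation near 1.0 supports the seasonal explanation, and the
# cooling water tag can be crossed off the suspect list.
```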

So, the Cooling Water isn’t our culprit. What can we look into next? This 6X dT tag looks like a problem, with multiple indications of high variation. This represents the temperature change across the sixth section of the extraction train.


This looks like the source of our problem.

It’s likely that this is going to be our problem tag. Putting our heads together with the rest of the team, we can pretty quickly get anecdotal evidence to either confirm or deny that, say, maintenance was performed in this part of the process recently. If it’s still unclear, we can pull it up on a trend, like we did with our Cooling Water tag, and see if we are indeed seeing some erratic behavior with the values from this tag.

Looking Ahead

Really, this is routine troubleshooting that is done daily at process facilities around the world. But when speed is imperative, and you need a quick answer for management asking why their machine is down or why the product quality is out of spec, having a robust deviation detection system in place, and a good process for analyzing the resulting data, can really help make things clear quickly.


Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

One of the problems in process manufacturing is that processes tend to drift over time. When they do, we encounter production issues. Immediately, management wants to know, “what’s changed, and how do we fix it?” Anomaly detection systems can help us provide some quick answers.

When a manufacturing process deviates from its expected range, there are several problems that arise. The plant experiences production issues, quality issues, environmental issues, cost issues, or safety issues.

One or more of these issues will present itself, and the question from management is always, “what changed?” Of course, they’d really like to know exactly what to do to go and fix it, but fundamentally, we need to know what changed to put us in this situation.

Usually the culprit is either the physical equipment – maybe maintenance performed recently threw things off – or the way we're operating the equipment.

From a process engineer or a process operator’s perspective, we need to quickly identify what changed. We’re possibly in a situation where the plant is losing money every minute we’re operating like this, so operators, engineers, supervisors… everyone is under pressure to fix the problem as soon as possible.

In order to do this, we need to understand how the value has changed, and the frequency of those changes. Or rather, how big are the swings and how often are they occurring?


Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Time Series Anomaly Detection Methods

Let’s begin by looking at some time series anomaly detection (or deviation detection) methods that are commonly used to troubleshoot and identify process issues in plants around the world.

Absolute Change


This is the simplest form of deviation detection. For Absolute Change, we get a baseline average from a period when things are running well. Later, when things aren't running so hot, we look back and see how much things have changed from that average.

Absolute change is used to see if there was a shift in the process that has made the operating conditions less than ideal. This is commonly used as a first pass when troubleshooting issues at process facilities.

Variability


Here we want to know if the variability has changed in some way. In this case, we'll show the COV change between a good period and a bad period. COV (coefficient of variation) is the standard deviation divided by the mean: a way to normalize variation based on the value, so high values don't automatically get a higher standard deviation than low values.

Variability charts are commonly used to identify less consistent operating conditions and perhaps more variations in quality, energy usage, etc.

Standard Deviations


Anyone who’s done control charts in the past 30 years will be familiar with standard deviations. Here we take a period of data, get the average, calculate the standard deviation, and put limits up (+/- 3 standard deviations is pretty typical). Then, you evaluate where you’re out based on that.

Standard deviation is probably the most common way to identify how well the process is being controlled, and is used to define the operating limits.

Multi-Parameter


This is a more advanced method of deviation detection that we at dataPARC refer to as PCA Modelling. Here we take all the variables, put them together, and model them against each other to narrow the range. Instead of having flat ranges, the limits are often rate-dependent.

The benefit of PCA Modelling over the other anomaly detection methods, is that it gives us the ability to narrow the window and get an operating range that is specific to the rate and other current operating conditions.
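dataPARC's implementation is its own, but the general idea can be sketched with off-the-shelf PCA: fit a model on good-period data, then score new data by how far it falls from the model's reconstruction. Everything below, including the synthetic data, is a generic illustration of that idea.

```python
# Generic sketch of multi-parameter anomaly detection with PCA: fit a
# model on "good" data, then score new data by its reconstruction
# residual. Synthetic data stands in for real tags; dataPARC's own
# DModX calculation may differ in detail.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
good = rng.normal(size=(500, 8))              # good-period data, 8 tags
bad = good[:50] + rng.normal(2, 1, (50, 8))   # shifted, deviating data

scaler = StandardScaler().fit(good)
pca = PCA(n_components=3).fit(scaler.transform(good))

def residual_score(X):
    """Distance between each row and its PCA reconstruction."""
    Z = scaler.transform(X)
    recon = pca.inverse_transform(pca.transform(Z))
    return np.sqrt(((Z - recon) ** 2).sum(axis=1))

print('good period score:', round(residual_score(good).mean(), 2))
print('bad period score: ', round(residual_score(bad).mean(), 2))
# Scores well above the good-period baseline flag a multi-variable deviation.
```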

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Setting up Anomaly Detection

Now that we have a basic understanding of some methods for detecting anomalies in our manufacturing process, we can begin setting up our detection system. The steps below outline the process we usually take when setting anomaly detection up for our customers, and we typically advise them to take a similar approach when doing it themselves.

1. Select Your Tags

Simple enough. For any particular process area you’re going to have at least a handful of tags that you’re going to want to review to see if you can spot the problem. Find them, and, using your favorite time series data trending application (if you have one), or Excel (if you don’t), gather a fairly large set of data. Maybe a month or so.

At dataPARC, we’ve been performing time series anomaly detection for customers for years, so we actually built a deviation detection application to simplify a lot of these routine steps.

For instance, if we want, we can grab an entire process unit from a display graphic and drag it into our app without having to take the time to hunt for the individual tags themselves. Pretty cool, right?

If we just pull up the process graphic for this part of the plant…

…we can quickly compile all the tags we want to review.

2. Filter out Downtime

This is a CRITICAL step, and should be applied before you even identify your good and bad periods. In order to accurately detect anomalies in your process data, you need to make sure to filter out any downs you may have had at your plant that will skew your numbers.


Downtime.

dataPARC’s PARCview application allows you to define thresholds to automatically identify and filter out downtime, so if you’re using a process analytics toolkit like PARCview, that’ll save you some time. If your analytics tools or your historian doesn’t have this capability, you can also just filter out the downs by hand in Excel. Regardless of how you do it, it’s a critical step.
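If you do end up scripting it yourself, the filter itself is a one-liner. A minimal sketch, where the file, rate tag, and threshold are all assumptions:

```python
# Minimal sketch of downtime filtering: drop rows where a production-rate
# tag sits below a threshold before computing any good/bad statistics.
# The file, tag name, and threshold are assumptions for illustration.
import pandas as pd

df = pd.read_csv('tag_history.csv', parse_dates=['timestamp'])

RATE_TAG = 'PROD_RATE'   # hypothetical production-rate tag
MIN_RATE = 50.0          # below this, the line is considered down

running = df[df[RATE_TAG] >= MIN_RATE]
print(f'kept {len(running)} of {len(df)} rows after removing downtime')
```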

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

3. Identify Good Period

Now you’re going to want to review your data. Look back over the month or so of data you pulled and identify a period of time that everyone agrees the process was running “good”. This could be a week, two weeks… whatever makes sense for your process.


Things are running well here.

4. Identify Bad Period

Now that we have the base built, we need to find our "bad" period, whether that means waiting for a bad period to occur or proactively looking for bad periods as time goes on.


Here we’re having some trouble.

5. Analyze the Data

Yes, it’s important to understand the different anomaly detection methods, and yes, we’ve discussed the steps we need to take to build our very own time series anomaly detection system, but perhaps the most critical part of this whole process is analyzing the data after we’ve become aware of the deviations. This is how we pinpoint which tags – which part of our process – is giving us problems.

Deviation Analysis is a pretty big topic that we’ve covered extensively in another post.

Looking Ahead

Anomaly detection systems are great for quickly identifying key process changes, and the system should be available to people at nearly every level of your operation. For effective troubleshooting and analysis, everyone from the operator to the process engineer, maintenance, and management needs visibility into this data and the ability to provide input.

Properly configured, you should be able to identify roughly what your problem is, within 5 tags of the problem, in 5 minutes.

So, when management asks “what’s changed, and how do we fix it?”, just tell them to give you 5 minutes.


Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast

Dashboards & Displays, Data Visualization, Process Manufacturing

Most modern manufacturing processes are controlled and monitored by computer-based control and data acquisition systems. This means that one of the primary ways an operator interacts with a process is through computer display screens. These screens may simply display information passively, or they may be interactive, allowing an operator to select an object and make a change which will then be relayed to the actual process. This interface where a person interacts with a display, and consequently the process, is called a Human-Machine Interface, or HMI.


Process Manufacturing

Overall Equipment Effectiveness, or OEE, has several benefits over simple one-dimensional metrics like machine efficiency. If you are not meeting demand and have a low OEE (equipment is underperforming) then you know you have an equipment effectiveness problem. If equipment is operating at a high OEE but not meeting customer demand, you know you have a capacity problem. Also, OEE lets you understand if you have spare capacity to keep up with changes in demand.


Process Manufacturing, Uncategorized

In a previous blog post, we explained that contemporary marketing language or buzz words can create confusion. One example of a common buzz word that may cause confusion is the concept of “Digital Twin.” Customers are asking about it and vendors are promoting it. But, what exactly is a Digital Twin? We decided to start with the Wikipedia definition:


Dashboards & Displays, Data Visualization, Process Manufacturing, Training

New training dates have been added, so now is the time to register for your dataPARC training, held in Vancouver, Washington, just across the river from beautiful Portland, Oregon. Whether you need to escape the heat of summer, the cold of winter, or just need to get away from the plant, our hands-on training is your ticket to a welcome escape. Oh, did we mention the training?


Process Manufacturing

All forms of commerce require energy. Industrial processing and manufacturing facilities tend to be the largest consumers, but even service industries such as insurance and banking require large buildings which must be heated, cooled and lit. The newest large energy consuming enterprises are data centers, which are large clusters of computers which store and serve up the data which flows through the internet. Regardless of the end use or the industry, companies strive to minimize production costs by minimizing energy consumption.
