Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Deviation analysis is a routine form of troubleshooting performed at process manufacturing facilities around the world. When speed is imperative, a robust deviation detection system, along with a good process for analyzing the resulting data, is essential for solving problems quickly.

A properly configured deviation detection system allows nearly everyone involved in a manufacturing process to collaborate and quickly identify the root causes of unexpected production issues.

In a previous post we wrote about time series anomaly detection methods and how to set up deviation detection for your process. In this article, we'll focus on how to actually analyze that data to pinpoint the source of a deviating process.

Watch the webcast to see us use deviation detection to troubleshoot process issues.

Deviation Analysis: Reviewing the Data

If you read our other article about anomaly detection methods, you'll recall that we covered setting up deviation detection, including the following steps:

  1. Selecting tags
  2. Filtering downtime
  3. Identifying “good” operating data
  4. Identifying “bad” operating data

The fifth step is to actually analyze the data you’ve just produced, so you can identify where your problem is occurring.

But, before we get into analysis, let’s review the data we’ve produced.

The examples below show the data we’ve produced with dataPARC’s process data analytics software, but the analysis process would be similar if you were doing this in your own custom-built Excel workbook.

Selecting Tags

Here we have the tags we identified. In our case, we were able to just drag over the entire process area from our display graphic and they all ended up in our application here. We could have also added the tags manually or even exported the data from our historian and dumped it into a spreadsheet.


We pulled data from 363 tags associated with our problematic process.

Good Data

Next, we have our “good” data: the data from when our process was running efficiently. You’ll see that the values here are averages over a one-month period.


Average data from a month when manufacturing processes were running smoothly.

Bad Data

This is our problem data, narrowed down to the specific two-day period when we first recognized we had an issue.


Bad doggie! I mean… Bad data. Bad!

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Methods of Deviation Detection

Again, you can refer to our article on anomaly detection methods for more details, but in this next part we’ll be using four different methods of analysis to try to pinpoint the problem.

The four deviation detection methods we’ll be using are:

  1. Absolute Change (%Chg) – The simplest form of deviation detection. Compares a value against the baseline average.
  2. Variability (COVChg) – How much the data varies, or how spread out it is relative to the average.
  3. Standard Deviation (SDChg) – A standard for control charts. Measures how much the data varies over time.
  4. Multi-Parameter (DModX) – An advanced deviation detection metric showing the difference between expected values and real data, used to evaluate the overall health of the process. The ranges are often rate-dependent.

In the image below you’ll see the deviation values for each method of calculation. Here red means a positive change, and blue means a negative change.


Our four deviation detection methods. Red indicates a positive change in values; blue indicates a negative change.

So, if we’re looking for a trouble spot within our manufacturing process, the first thing we’re going to want to do is start to look at the deviation values.

By sorting by the different detection methods, we can begin to identify some patterns. And, we can really pare down our list of potential culprits. Just an initial sort by deviation values eliminates all but about a dozen of our tags as suspects.

So, let’s look at tags where the majority of the models show high deviation values. That gives us a place to begin troubleshooting.
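To illustrate that sorting step, here's a minimal sketch in Python, assuming we've exported a table of deviation scores (one row per tag, one column per method) from the detection tool. The tag names, scores, and cutoff below are all hypothetical:

```python
import pandas as pd

# Hypothetical deviation scores per tag, one column per detection method.
scores = pd.DataFrame({
    "tag":    ["CoolingWaterTmp", "6X_dT", "FeedFlow", "SteamPress"],
    "PctChg": [2.8, 3.5, 0.4, 0.2],
    "COVChg": [2.1, 3.1, 0.9, 0.3],
    "SDChg":  [3.0, 2.9, 0.5, 0.4],
    "DModX":  [1.2, 3.4, 0.7, 0.6],
}).set_index("tag")

HIGH = 2.0  # hypothetical cutoff for a "high" deviation value

# Count how many methods flag each tag, then sort the worst offenders first.
flags = (scores.abs() > HIGH).sum(axis=1)
suspects = flags[flags >= 3].sort_values(ascending=False)
print(suspects)  # tags where a majority of the models agree
```

The numbers here are rigged so that 6X_dT and CoolingWaterTmp surface as the suspects, which mirrors the walkthrough below.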

Applied Deviation Analysis

For instance, here we have our Cooling Water tag, and in three of the four models we’re seeing that it has a fairly high deviation value. It’s a prime suspect.


So, let’s analyze that, and take a closer look.


Within our deviation detection application we can just select the tag and click the “trend” button to bring up the data trend for the Cooling Water tag.

Looking at the trend, it’s definitely going up, and deviating from the “good” operating conditions. But we also know our process. And we know that the cooling water comes from the river, and we know that the river temperature fluctuates with the seasons. So, we’ll add our River Temp tag to the trend, and sure enough – it looks like it’s just a seasonal change.


Pairing our Cooling Water Tmp tag with our River Temp tag. Nope, that’s not it!

So, the Cooling Water isn’t our culprit. What can we look into next? This 6X dT tag looks like a problem, with multiple indications of high variation. This represents the temperature change across the sixth section of the extraction train.


This looks like the source of our problem.

It’s likely that this is going to be our problem tag. Putting our heads together with the rest of the team, we can pretty quickly gather anecdotal evidence to confirm or rule out whether, say, maintenance was performed in this part of the process recently. If it’s still unclear, we can pull it up on a trend, like we did with our Cooling Water tag, and see if we are indeed seeing erratic behavior in the values from this tag.

Looking Ahead

Really, this is routine troubleshooting that is done daily at process facilities around the world. But when speed is imperative, and you need a quick answer for management when their machine is down or product quality is out of spec, having a robust deviation detection system in place, along with a good process for analyzing the resulting data, can make things clear quickly.

Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

One of the problems in process manufacturing is that processes tend to drift over time. When they do, we encounter production issues. Immediately, management wants to know, “what’s changed, and how do we fix it?” Anomaly detection systems can help us provide some quick answers.

When a manufacturing process deviates from its expected range, problems arise: production issues, quality issues, environmental issues, cost issues, or safety issues.

One or more of these issues will present itself, and the question from management is always, “what changed?” Of course, they’d really like to know exactly what to do to go and fix it, but fundamentally, we need to know what changed to put us in this situation.

Usually the culprit is either the physical equipment (perhaps maintenance performed recently threw things off) or the way we’re operating the equipment.

From a process engineer or a process operator’s perspective, we need to quickly identify what changed. We’re possibly in a situation where the plant is losing money every minute we’re operating like this, so operators, engineers, supervisors… everyone is under pressure to fix the problem as soon as possible.

In order to do this, we need to understand how the value has changed, and the frequency of those changes. Or rather, how big are the swings and how often are they occurring?

Watch the webcast to see us use deviation detection to troubleshoot process issues.

Time Series Anomaly Detection Methods

Let’s begin by looking at some time series anomaly detection (or deviation detection) methods that are commonly used to troubleshoot and identify process issues in plants around the world.

Absolute Change


This is the simplest form of deviation detection. For absolute change, we establish a baseline average from a period when things are running well. Later, when things aren’t running so hot, we look back and see how much the values have changed from that average.

Absolute change is used to see if there was a shift in the process that has made the operating conditions less than ideal. This is commonly used as a first pass when troubleshooting issues at process facilities.
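For a rough sketch of the math, the calculation is just a percent change from the baseline average. The values below are hypothetical:

```python
import numpy as np

good = np.array([100.2, 99.8, 100.5, 100.1])   # values from the "good" baseline period
bad  = np.array([108.9, 110.2, 111.5, 109.8])  # values from the problem period

baseline = good.mean()
pct_change = (bad.mean() - baseline) / baseline * 100
print(f"absolute change: {pct_change:+.1f}% vs. baseline")
```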

Variability


Here we want to know if the variability has changed in some way. In this case, we’ll show the COV change between a good period and a bad period. COV (coefficient of variation) is basically a way to normalize variation by the average value, so high-value tags don’t automatically show a larger spread than low-value tags.

Variability charts are commonly used to identify less consistent operating conditions and perhaps more variations in quality, energy usage, etc.
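In code, COV is just the standard deviation divided by the mean, which is what puts high-value and low-value tags on the same scale. A minimal sketch with hypothetical values:

```python
import numpy as np

good = np.array([100.2, 99.8, 100.5, 100.1])  # steady baseline period
bad  = np.array([104.8, 95.4, 103.9, 96.1])   # similar average, much more spread

def cov(values):
    # Coefficient of variation: spread normalized by the average.
    return values.std() / values.mean()

print(f"COV change: {cov(bad) - cov(good):+.4f}")
```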

Standard Deviations


Anyone who’s done control charts in the past 30 years will be familiar with standard deviations. Here we take a period of data, get the average, calculate the standard deviation, and put limits up (+/- 3 standard deviations is pretty typical). Then you evaluate where you’re outside those limits.

Standard deviation is probably the most common way to identify how well the process is being controlled, and is used to define the operating limits.
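A minimal sketch of those control limits, using hypothetical values:

```python
import numpy as np

good = np.array([100.2, 99.8, 100.5, 100.1, 99.6, 100.3])  # baseline period
mean, sd = good.mean(), good.std()
upper, lower = mean + 3 * sd, mean - 3 * sd  # typical +/- 3 sigma limits

new_values = np.array([100.4, 99.9, 102.1, 97.2])
excursions = new_values[(new_values > upper) | (new_values < lower)]
print(f"limits: [{lower:.2f}, {upper:.2f}], excursions: {excursions}")
```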

Multi-Parameter


This is a more advanced method of deviation detection that we at dataPARC refer to as PCA Modelling. Here we take all the variables, put them together, and model them against each other to narrow the range. Instead of flat limits, the ranges are often rate-dependent.

The benefit of PCA Modelling over the other anomaly detection methods is that it gives us the ability to narrow the window and get an operating range that is specific to the rate and other current operating conditions.
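We won't reproduce dataPARC's PCA Modelling here, but the underlying idea can be sketched with scikit-learn: fit a principal component model on the "good" data, then score new samples by how far they fall from that model (their reconstruction error). Everything below, from the simulated tags to the component count, is a hypothetical illustration, not the actual DModX calculation:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated "good" operating data: rows are samples, columns are tags.
# Rate drives the other tags, so the data lies near a low-dimensional surface.
rate = rng.uniform(80, 120, size=(500, 1))
good = np.hstack([rate,
                  0.5 * rate + rng.normal(0, 1, (500, 1)),
                  2.0 * rate + rng.normal(0, 2, (500, 1))])

pca = PCA(n_components=1).fit(good)

def distance_to_model(X):
    # Reconstruction error: how far each sample sits off the model's surface.
    residual = X - pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(residual, axis=1)

consistent   = np.array([[100.0, 50.0, 200.0]])  # tags agree with each other
inconsistent = np.array([[100.0, 65.0, 185.0]])  # same rate, broken relationships
print(distance_to_model(consistent), distance_to_model(inconsistent))
```

The point is that the inconsistent sample gets flagged even though every individual tag is still inside its normal flat range.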

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Setting up Anomaly Detection

Now that we have a basic understanding of some methods for detecting anomalies in our manufacturing process, we can begin setting up our detection system. The steps below outline the process we usually take when setting anomaly detection up for our customers, and we typically advise them to take a similar approach when doing it themselves.

1. Select Your Tags

Simple enough. For any particular process area you’re going to have at least a handful of tags that you’re going to want to review to see if you can spot the problem. Find them, and, using your favorite time series data trending application (if you have one), or Excel (if you don’t), gather a fairly large set of data. Maybe a month or so.
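If you're going the Excel/spreadsheet route, the starting point might be a month of data exported from your historian. A minimal pandas sketch; the file name and column layout are hypothetical:

```python
import pandas as pd

# One month of historian data exported to CSV: a timestamp column
# plus one column per tag.
data = pd.read_csv("process_area_oct.csv",
                   parse_dates=["timestamp"], index_col="timestamp")
print(data.shape)        # rows x tags
print(data.columns[:5])  # spot-check a few tag names
```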

At dataPARC, we’ve been performing time series anomaly detection for customers for years, so we actually built a deviation detection application to simplify a lot of these routine steps.

For instance, if we want, we can grab an entire process unit from a display graphic and drag it into our app without having to take the time to hunt for the individual tags themselves. Pretty cool, right?

If we just pull up the process graphic for this part of the plant, we can quickly compile all the tags we want to review.

2. Filter out Downtime

This is a CRITICAL step, and should be applied before you even identify your good and bad periods. In order to accurately detect anomalies in your process data, you need to make sure to filter out any downs you may have had at your plant that will skew your numbers.


Downtime.

dataPARC’s PARCview application allows you to define thresholds to automatically identify and filter out downtime, so if you’re using a process analytics toolkit like PARCview, that’ll save you some time. If your analytics tools or your historian doesn’t have this capability, you can also just filter out the downs by hand in Excel. Regardless of how you do it, it’s a critical step.
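If you're doing it by hand, a simple threshold on a production-rate tag covers the common case. A minimal sketch; the file, tag name, and threshold are hypothetical:

```python
import pandas as pd

data = pd.read_csv("process_area_oct.csv",
                   parse_dates=["timestamp"], index_col="timestamp")

RATE_TAG = "ProductionRate"  # hypothetical tag indicating the unit is running
MIN_RATE = 50.0              # below this, treat the unit as down

running = data[data[RATE_TAG] >= MIN_RATE]
print(f"kept {len(running)} of {len(data)} rows after filtering downtime")
```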


3. Identify Good Period

Now you’re going to want to review your data. Look back over the month or so of data you pulled and identify a period of time that everyone agrees the process was running “good”. This could be a week, two weeks… whatever makes sense for your process.


Things are running well here.

4. Identify Bad Period

Now that we have the baseline built, we need to find our “bad” period, whether that means waiting for a bad period to occur or proactively looking for bad periods as time goes on.


Here we’re having some trouble.
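With downtime filtered out, marking the two periods is just slicing by date. A minimal sketch, continuing the hypothetical `running` DataFrame from the downtime step (the dates are made up):

```python
# Slice the agreed-upon "good" and "bad" windows out of the filtered data.
good = running.loc["2021-10-01":"2021-10-14"]  # two weeks everyone agrees were good
bad  = running.loc["2021-10-26":"2021-10-27"]  # the two-day problem period

# Per-tag baseline averages, ready for the deviation calculations above.
print(good.mean().head())
```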

5. Analyze the Data

Yes, it’s important to understand the different anomaly detection methods, and yes, we’ve discussed the steps we need to take to build our very own time series anomaly detection system, but perhaps the most critical part of this whole process is analyzing the data after we’ve become aware of the deviations. This is how we pinpoint which tags – which part of our process – is giving us problems.

Deviation Analysis is a pretty big topic that we’ve covered extensively in another post.

Looking Ahead

Anomaly detection systems are great for being able to quickly identify key process changes, and really the system should be available to people at nearly every level of your operation. For effective troubleshooting and analysis, everyone from the operator, the process engineer, maintenance, management… they all need to have visibility into this data and the ability to provide input.

Properly configured, you should be able to identify roughly what your problem is, within 5 tags of the problem, in 5 minutes.

So, when management asks “what’s changed, and how do we fix it?”, just tell them to give you 5 minutes.

Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Dashboards & Displays, Data Visualization, Process Manufacturing

Most modern manufacturing processes are controlled and monitored by computer-based control and data acquisition systems. This means that one of the primary ways that an operator interacts with a process is through computer display screens. These screens may simply passively display information, or they may be interactive, allowing an operator to select an object and make a change which will then be relayed to the actual process. This interface where a person interacts with a display, and consequently the process, is called a Human-Machine Interface, or HMI.


Process Manufacturing, Uncategorized

In a previous blog post, we explained that contemporary marketing language or buzz words can create confusion. One example of a common buzz word that may cause confusion is the concept of “Digital Twin.” Customers are asking about it and vendors are promoting it. But, what exactly is a Digital Twin? We decided to start with the Wikipedia definition:


Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The Digital Transformation – everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally, we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, certain notable scientists and inventors helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.


Dashboards & Displays, Data Visualization, Process Manufacturing, Training

New training dates have been added, so now is the time to register for your dataPARC training, held in Vancouver, Washington, just across the river from beautiful Portland, Oregon. Whether you need to escape the heat of summer, the cold of winter, or just need to get away from the plant, our hands-on training is your ticket to a welcome escape. Oh, did we mention the training?


Process Manufacturing

All forms of commerce require energy. Industrial processing and manufacturing facilities tend to be the largest consumers, but even service industries such as insurance and banking require large buildings which must be heated, cooled and lit. The newest large energy consuming enterprises are data centers, which are large clusters of computers which store and serve up the data which flows through the internet. Regardless of the end use or the industry, companies strive to minimize production costs by minimizing energy consumption.


Process Manufacturing

Most people are familiar with compressing data files so that they require less memory and are easier to send electronically. Similar concepts are popular with process data historians. With process data, compression means reducing the number of data points that are stored, while trying not to affect the quality of the data. Compression can be accomplished using one of several algorithms (swinging door, boxcar/backslope). Each algorithm uses some criteria to eliminate data between points where there is constant change (slope), within some tolerance.
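As a rough illustration of the idea (a simplified greedy interpolation filter, not the actual swinging-door or boxcar/backslope algorithms), the sketch below drops any point that linear interpolation between its kept neighbors can reproduce within a tolerance:

```python
def compress(times, values, tol=0.5):
    """Keep only the points that can't be recovered by linear
    interpolation within +/- tol. A simplified illustration, not
    a production historian compression algorithm."""
    kept = [0]  # always keep the first point
    anchor = 0
    for i in range(1, len(values) - 1):
        # Predict point i by interpolating from the last kept point
        # to the next raw point.
        t0, t1 = times[anchor], times[i + 1]
        v0, v1 = values[anchor], values[i + 1]
        predicted = v0 + (v1 - v0) * (times[i] - t0) / (t1 - t0)
        if abs(predicted - values[i]) > tol:
            kept.append(i)
            anchor = i
    kept.append(len(values) - 1)  # always keep the last point
    return kept

times = list(range(10))
values = [0, 1, 2, 3, 4, 5, 9, 10, 11, 12]  # constant slope, then a step
print(compress(times, values))  # -> indices of the points worth storing
```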


Process Manufacturing

It is that time of year again: time to gather with your peers and talk about some of the great benefits of dataPARC software. This year’s dataPARC user conference will be held at the Sentinel Hotel in Portland, Oregon, from October 15–18, 2018. Besides getting to learn in a beautiful setting (Portland, OR in the fall – gorgeous!), here are five reasons why you should attend:
