Data Visualization

Predictive models are extremely useful in monitoring and optimizing manufacturing processes. Predictive modeling in manufacturing, when combined with an alarm system, can alert teams to changes in process or equipment performance and prevent downtime or quality issues before they occur.

A process engineer or operator might keep an eye on real-time dashboards or trends throughout the day to monitor the health of processes. Predictive modeling, when combined with a proper alarm system, is an incredibly effective method for proactively notifying teams of impending system issues that could lead to waste or unplanned downtime.

In this article, we’re going to review two examples of predictive modeling in manufacturing. First we’ll describe and build an example of a PLS model, and then we’ll describe and build an example of a PCA model. PLS vs. PCA. Why choose one or the other? We’ll cover that as well.

Identify process issues before they occur. Prevent unplanned downtime, reduce waste, & improve efficiency.

Learn More

What is PLS in Manufacturing

PLS stands for “Partial Least Squares.” It’s a linear model commonly used in predictive analytics.

PLS models are developed by modeling or simulating one unknown system parameter (y) from another set of known system parameters (x’s).

In manufacturing, for example, if you have an instrument that is sometimes unreliable, but you have a span of time in which it was very reliable, it is possible to simulate, or model, that parameter from other system parameters. Then, when the instrument moves into an unreliable state, you have a model that approximates what it should be reading, were it functioning normally.

PLS Model Formula

We promise we’re not going to get too deep into the math here, but this is a PLS model formula:

y = m1x1 + m2x2 + … + mnxn + b

In this formula, the single (y) is approximated from the (x’s) by multiplying each by a coefficient and adding an intercept at the end.
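To make the arithmetic concrete, here’s a minimal Python sketch of the formula. The coefficients, intercept, and x values are made-up numbers used only to show the shape of the calculation:

```python
import numpy as np

# Hypothetical coefficients (m's) and intercept (b) from a fitted PLS model
m = np.array([12.4, -0.8])   # one coefficient per known parameter (x)
b = 3.1                      # intercept

x = np.array([4.2, 1.05])    # current values of the known parameters
y_hat = m @ x + b            # predicted value of the unknown parameter (y)
print(y_hat)                 # 54.34
```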

PLS Analysis Use Cases

Some potential uses for PLS models include:

Simulating flow from valve position, power, or delta pressure (dP)

An example provided by one of our customers involved modeling flow from pump amps.

In this particular case, they had a condensate tank in which the flow kept reading zero on their real-time production trend, even though they knew their pump was pumping condensate.

Using dataPARC’s predictive modeling tools, they looked at the historical data and found periods of time when there was a flow reading, and they modeled the flow based on the pump amps during those same periods.

So, when the flow itself got so low that the flow meter wouldn’t register it, they still had a model of flow based on pump amps, because the pump was still pumping and registering pump amps.

Producing discrete test results modeled from a set of continuous process measurements

For example, there may be something you only test every four to six hours. But, you’d like to know, between those tests, whether the process is still approximately on target, or still approximately the same.

If you have continuous measurements that can be used to approximate that value that you’re going to test in four to six hours, you can build a model of those discrete test results based on what those readings were when the previous test was conducted.

Those are just a couple of examples of how you can use PLS for predictive modeling in manufacturing.

How can predictive analytics work for you? Prevent unplanned downtime, reduce waste, & optimize processes with dataPARC’s predictive modeling software.

How to Build a PLS Model

So, now let’s look at building a PLS model. We’ll use the example we discussed where we simulate flow using delta pressure data. First we need to identify the tags or variables we’ll be working with.

Identify Variables

Using dataPARC, we build these models from trends. Here we have a trend showing Flow (blue), Square Root of dP (green), and Specific Gravity (pink). This data is being pulled from our data historian software.

A basic trend with the three variables we’ll be using for our PLS model

Flow

Flow is the variable we want to model, or predict. It’s the “y” in the formula we described above.

Square Root of dP

Flow is related linearly to the square root of dP. Not to dP itself. So, since the PLS model is a linear model, we’ll create a calculated tag in dataPARC by subtracting the downstream pressure from the upstream pressure and taking the square root of that difference. This will be our Square Root of dP variable that we can use in this linear model.
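Outside of dataPARC, the same calculated tag could be sketched in a couple of lines of pandas. The column names and values here are hypothetical:

```python
import numpy as np
import pandas as pd

# A few rows of made-up historian data; real column names would match your tags
df = pd.DataFrame({
    "upstream_pressure":   [103.2, 104.1, 102.8],
    "downstream_pressure": [ 98.7,  99.0,  98.9],
})

dP = df["upstream_pressure"] - df["downstream_pressure"]
df["sqrt_dP"] = np.sqrt(dP.clip(lower=0))   # clip guards against small negative dP readings
```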

Specific Gravity (SG)

We’ll use Specific Gravity as our second x. In this example we’re not sure if Specific Gravity is necessary for this model, but it’s really easy to add tags to this equation, determine their importance, and remove them if they’re not needed. We’ll include it for now.

Establish Time Periods for Evaluation

So, on the left we’ll select data from Jan 24 – Feb 3. This is the data we’ll use to build our model. On the right we’ll select data from Feb 8 – Feb 17. This is the data we’ll run our model against to evaluate its viability.

On the right side of this split graph, we have the same flow tag, but for a different period of time. This is how we’ll evaluate the accuracy of the model we’ve built.

It is very important to evaluate a model against a time period that is not included in the build dataset. This tells you whether the model is valid going forward, because as time goes on it will be running on data it has never seen.
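In code, the split is just two date-range slices of the same historian data. The dates below match the article, but the year and the DataFrame are assumptions:

```python
# df: historian data for our tags, indexed by timestamp (assumed)
build_data = df.loc["2021-01-24":"2021-02-03"]   # used to fit the model
eval_data  = df.loc["2021-02-08":"2021-02-17"]   # held back to evaluate the model
```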

Generate the Modeled Data

With dataPARC’s predictive modeling tools, building the model is as simple as adjusting some configuration settings and clicking “Create New PLS Model”. The model will be generated using the data from the tags in the trend we looked at previously. Of course, with more effort, this data can also be produced and managed in Excel.
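dataPARC does the fitting behind that button. If you wanted to reproduce the idea outside the tool, scikit-learn’s PLSRegression is one way to sketch it; the column names are assumptions:

```python
from sklearn.cross_decomposition import PLSRegression

X_build = build_data[["sqrt_dP", "specific_gravity"]]   # the x's
y_build = build_data["flow"]                            # the y we want to model

pls = PLSRegression(n_components=2)
pls.fit(X_build, y_build)
```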

Creating a PLS model with dataPARC

Evaluate the PLS Model

The first thing you want to do when you build a PLS model is clean up the data. Or at least look for opportunities to clean up the data.

T1 vs. T2

First let’s look at the T1 vs. T2 graph. Again, we don’t want to get too deep into the math, but what we’re looking for here is a single grouping of data points within the circles on the graph. A single “clump” of data points indicates we’re looking at a single parameter, or operating regime. If it appeared we had two or more clusters of data points, it’d be a good indication we have multiple operating regimes represented in our model. In that case we’d want to go back and build distinct models to represent each regime.

Everything looks good here, though, so let’s proceed.

Looking pretty good so far.

If a lot of your data is outside these circles, it’s an indication that your model isn’t going to be very good. Maybe there are some additional tags that you need to include in the model, or maybe the time period you selected is not good.

Y to Y

Using a common Y to Y plot, we can view the original y and the predicted y plotted against each other. In this example they’re very close together and you can see that the R-squared value is ridiculously high, which we’d expect when we’re modeling Flow from the Square Root of dP.

Check out that R-squared. 0.994.
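A rough equivalent of the Y-to-Y check in the hypothetical scikit-learn sketch above is to predict the build period and compute R-squared:

```python
from sklearn.metrics import r2_score

y_pred = pls.predict(X_build).ravel()
print("R-squared on the build period:", r2_score(y_build, y_pred))
```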

So, you’d think with that kind of R-squared value we’d be ready to call it a day, but using dataPARC’s predictive modeling software, we like to take a look at one more thing.

Variable Importance

As expected, our Square Root of dP variable is extremely important, with a value of .942 – roughly 94% important to our model.

Looks like we did good with our Square Root of dP calculation.

If you recall, we added another tag, or variable, into the mix at the beginning – Specific Gravity (SG). Now, this was primarily to illustrate this Variable Importance feature.

As you can see, less than 6% of the model is dependent on Specific Gravity. We expected this. Specific Gravity isn’t really useful in this model, and this Variable Importance feature backs that up. To simplify our model and perhaps enable it to run faster, we’d want to eliminate Specific Gravity and any other variables that aren’t highly important.
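dataPARC reports its own importance metric. A commonly used, roughly comparable calculation is Variable Importance in Projection (VIP), which can be derived from a fitted PLSRegression. This is a sketch of the standard VIP formula, not dataPARC’s exact method:

```python
import numpy as np

def vip_scores(pls_model):
    """Variable Importance in Projection for a fitted PLSRegression."""
    T = pls_model.x_scores_    # scores,     shape (n_samples, n_components)
    W = pls_model.x_weights_   # x weights,  shape (n_features, n_components)
    Q = pls_model.y_loadings_  # y loadings, shape (n_targets, n_components)

    p = W.shape[0]
    ss_y = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)  # y variance explained per component
    w_norm = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ss_y) / ss_y.sum())

print(vip_scores(pls))   # one value per x; variables well below ~1 are candidates for removal
```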

Save Your PLS Model

Now that our model is complete, we’ll want to save it so we can apply it later. In dataPARC’s PARCmodel predictive modeling software, you get this little dialog here where you can put in a project name and model name.

Saving our model in PARCmodel

How to Apply a PLS Model

So, now that we’ve built our model and saved it, we’re going to want to apply it and see if it works.

Remember earlier, when we chose two time periods for evaluation? Well, now, going back to our trending application, we can import the model we built from our source data and lay that over real data from that second time period to see how accurately it would have predicted the flow for that period of time.

Our predicted data, on the right, in red, falls right in line with real historical production data.

Well, well, well. It appears we have a valid test.

We used an 11-day period in late January (the trend on the left) to create a model, and now the predicted values of the Flow (the red line on the trend on the right) over an 11-day period in mid-February are nearly identical to the actual values from that time period. Perfect!
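In the hypothetical scikit-learn sketch, applying the saved model to that second time period is a single predict call over the evaluation window we set aside earlier:

```python
from sklearn.metrics import r2_score

X_eval = eval_data[["sqrt_dP", "specific_gravity"]]
predicted_flow = pls.predict(X_eval).ravel()

# Compare modeled flow against the actual historian values for the evaluation period
print(r2_score(eval_data["flow"], predicted_flow))
```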

What is PCA in Manufacturing

PCA is one of the more common forms of predictive modeling in manufacturing. PCA stands for Principal Component Analysis. A PCA model is a way to characterize a system or piece of equipment.

A PCA model differs from a PLS model in that, with a PCA model, there is no “y” variable that you’re trying to predict. A PCA model doesn’t attempt to simulate a single variable by looking at the values of a number of other variables (x’s).

Instead, each “x” is modeled from all other x’s. A PCA model is a way of showing the relationship between all the x’s, creating a “fingerprint” of what the system looks like when it’s running.

With a PCA model, you’re trying to say “I have a system or a piece of equipment, and I want to know if it has shifted, or moved into a different operating regime.” You want to know if it is operating differently today than it was during a different period of time.

PCA Analysis Use Cases

Some potential uses for PCA models include:

Diagnosing instrument or equipment drift

For example, you may have an instrument in the field that you know scales up over time, or something that is subject to drift, like a pH meter that you have to calibrate all of the time. When reviewing the values from that instrument, it can sometimes be difficult to know if changes in values are due to drift or if they’re a symptom of more significant equipment or process issues.

If you have a period of time during which you know all of your instruments were good and your process was running optimally, you can use that as your “fingerprint”. This is what you build your PCA model from, and then the PCA statistics that you trend into the future can tell you if something is shifting.

Flagging significant process alterations

A common example here is when a manual valve that is always open, or should always be open, somehow gets closed. Since there’s no indication in a DCS or PLC that a manual valve has been closed, all the operator sees is that something is different. They don’t know what it is, but they recognize that something is different.

A PCA model can help here by automatically triggering an alarm or flagging significant changes in a process. The model can’t specifically see that the valve has been closed, but what it does see, for example, is that a pressure reading related to the flow is now different. Or, the control valve used to have x impact on flow or x impact on temperature, and it’s no longer affecting those variables.

PCA can tell you that something in the relationship between components or parts of a process is off, and it can help you get to the root cause of the issue.

How to Build a PCA Model

So, let’s take a look at building a PCA model for a pump. It’s a small system, and we’re going to set up a model to see when it deviates from its normal operating regime.

These steps will be nearly identical to those we covered in how to build a PLS model above. The one major difference is that we don’t have a y value that we’re trying to predict, so we’ll just need to select as many x variables as we need to represent this particular system.

Identify Variables

We’re going to be using the following tags (x’s) to build our pump model:

  • Amps
  • Flow
  • Speed
  • Specific Gravity (SG)
  • Vibration (Vib)
  • Total Dynamic Head (TDH)
  • Temp

Establish Time Periods for Evaluation

Again, as we did with our PLS model, we’ll have our split trend that shows the data on the left that we’ll use to build our model, and the data on the right that we’ll use to evaluate the accuracy of the model.

Source data for our model on the left, and the data we’ll check it against on the right.

Generate the Modeled Data

A couple clicks here and bam. We have our PCA model.

PARCmodel makes predictive modeling in manufacturing easy.
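Under the hood, the idea is to scale the x tags and fit a principal component model to the build-period data. A minimal scikit-learn sketch, with hypothetical tag names, might look like this; PARCmodel’s implementation will differ:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

tags = ["amps", "flow", "speed", "sg", "vib", "tdh", "temp"]   # hypothetical tag names

# good_data: a DataFrame of historian values for these tags over the build period (assumed)
scaler = StandardScaler().fit(good_data[tags])
pca = PCA(n_components=2).fit(scaler.transform(good_data[tags]))   # the pump's operating "fingerprint"
```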

Evaluate the PCA Model

So, how’s our model shaping up?

T1 vs. T2

Looking at T1 vs. T2 we appear to be off to a good start. All of our data seems to be grouped pretty tightly together, so that’s a good indication we’re looking at a single operating regime here.

DModX

Now let’s look at our DModX trend. This is particular to our PCA model.

DModX represents the distance from an observation to the model in “x” space, where “x” refers to the number of dimensions we have. In this case we have seven x’s, or seven “dimensions.” There are thousands of “observations” that make up this DModX trend.

In our DModX trend, we can see that there are a few observations that are higher than the red line, which we can think of as the point of statistical significance. When we start getting a lot of observations above this line, it’s an indication that our model isn’t very good.

In this case, we have a few points bouncing around the red line, and on occasion going above it, but this is acceptable. This is what an accurate model generally looks like in DModX.
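A rough stand-in for DModX is the residual distance between each observation and its reconstruction from the retained principal components. Continuing the hypothetical scikit-learn sketch above (dataPARC’s exact normalization may differ):

```python
import numpy as np

X = scaler.transform(good_data[tags])        # or any period of data you want to score
residuals = X - pca.inverse_transform(pca.transform(X))

# Distance to the model in x-space: RMS residual per observation
k, a = X.shape[1], pca.n_components_
dmodx = np.sqrt((residuals ** 2).sum(axis=1) / (k - a))
```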

Hotelling’s T-Squared Normalized (HT2N)

Unlike DModX, HT2N isn’t showing us how the model is performing, or how the observations fit within the model. Instead it’s showing us how the observations fit within the range of all the other x’s. HT2N is also particular to our PCA model.

For example, it looks like there was a period of time here where there was something in the system – maybe multiple x’s – that were significantly different in range from all of the other periods of time before and after.

However, if we see a high HT2N it isn’t necessarily an indication that our model is bad. For instance, even though there were some parameters that had an unusual range, this spike in HT2N clearly falls within acceptable parameters of the corresponding DModX trend. As we see below, they fit within the model just fine.

So, sometimes it’s ok to leave a high HT2N set of data in there because you’re leaving the range of your data expanded. And at times there’s a reason you’ll want to do that.

Let’s say one of your x’s is a production rate. The “model set” models production between 500 and 800, and one day your production rate went above 750 – still inside the modeled range, but well away from typical operation. That might result in a spike like we see in the trend above.
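HT2N can be sketched the same way: each observation’s component scores are compared with the spread of the scores in the build period. Again, this is a rough, hypothetical equivalent of PARCmodel’s metric, not its exact formula:

```python
# Variance of each retained component over the build period
score_var = pca.transform(scaler.transform(good_data[tags])).var(axis=0, ddof=1)

# Hotelling's T-squared for the observations scored above
t2 = ((pca.transform(X) ** 2) / score_var).sum(axis=1)
```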

How to Apply a PCA Model

Ok. So, we’ve created our pump model and, in our case, saved it using dataPARC’s predictive modeling software. Now we’re going to go back out to our split trend and apply the model to the timeframes we identified earlier.

We’ll use a 4-up view in our PARCview trending application, and isolate the DModX and HT2N tags in the bottom two trends.

PARCmodel automatically adds “limits” to a PCA model when it’s created, so if we turn visibility for limits on in our trending application, we can easily see where our data is going outside of our model.

With the limit data now identified, we can dig in using our favorite analytics toolkit and perform root cause analysis to determine if there’s an issue with this pump assembly.

Predictive Modeling in Manufacturing

So, there you have it. If you’re looking for good examples of applied predictive modeling in manufacturing, PLS and PCA are two common models useful in monitoring and optimizing manufacturing processes.

An engineer or operator might keep an eye on real-time dashboards or trends throughout the day, but it can be difficult to spot potential process issues in time to avoid production loss. Predictive modeling software, when combined with an alarm system, provides process manufacturers with an incredibly effective and reliable method for identifying issues before they occur – preventing unplanned downtime, reducing waste, and optimizing their manufacturing processes.

Want to Learn More?

Download the datasheet and see how dataPARC’s predictive modeling tools can help you identify process issues before they occur.

Download PDF

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

Deviation analysis is a routine form of troubleshooting performed at process manufacturing facilities around the world. When speed is imperative, a robust deviation detection system, along with a good process for analyzing the resulting data, is essential for solving problems quickly.

A properly configured deviation detection system allows nearly everyone involved in a manufacturing process to collaborate and quickly identify the root causes of unexpected production issues.

In a previous post we wrote about time series anomaly detection methods, and how to set up deviation detection for your process. In this article, we’re going to be focusing on how to actually analyze the data to pinpoint the source of a deviant process.


Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Deviation Analysis: Reviewing the Data

So, in our other article about anomaly detection methods, we covered setting up deviation detection, including the following steps:

  1. Selecting tags
  2. Filtering downtime
  3. Identifying “good” operating data
  4. Identifying “bad” operating data

The fifth step is to actually analyze the data you’ve just produced, so you can identify where your problem is occurring.

But, before we get into analysis, let’s review the data we’ve produced.

The examples below show the data we’ve produced with dataPARC’s process data analytics software, but the analysis process would be similar if you were doing this in your own custom-built Excel workbook.

Selecting Tags

Here we have the tags we identified. In our case, we were able to just drag over the entire process area from our display graphic and they all ended up in our application here. We could have also added the tags manually or even exported the data from our historian and dumped it into a spreadsheet.


We pulled data from 363 tags associated with our problematic process.

Good Data

Next, we have our “good” data. The data when our process was running efficiently. You’ll see that the values here are averages over a one-month period.


Average data from a month when manufacturing processes were running smoothly.

Bad Data

This is our problem data. Narrowed down to a specific two-day period where we first recognized we had an issue.


Bad doggie! I mean… Bad data. Bad!

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Methods of Deviation Detection

Again, you can refer to our article on anomaly detection methods for more details, but in this next part we’ll be using 4 different methods of analysis to try and pinpoint the problem.

The four deviation detection methods we’ll be using are:

  1. Absolute Change (%Chg) – The simplest form of deviation detection. Comparing a value against the average.
  2. Variability (COVChg) – How much the data varies or how spread out the data is relative to the average.
  3. Standard Deviation (SDChg) – A standard for control charts. Measures how much the data varies over time.
  4. Multi-Parameter (DModX) – Advanced deviation detection metric showing the difference between expected values and real data, to evaluate the overall health of the process. The ranges are often rate-dependent.
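As a rough sketch, the first three metrics could be computed per tag with pandas. The formulas below are plausible readings of these metrics, not dataPARC’s exact definitions; good and bad are assumed DataFrames of tag values from the two periods, and the multi-parameter DModX metric requires the fitted PCA model described in our other post:

```python
import pandas as pd

pct_chg = (bad.mean() - good.mean()) / good.mean() * 100        # Absolute Change (%Chg)
cov_chg = bad.std() / bad.mean() - good.std() / good.mean()     # Variability (COVChg)
sd_chg  = (bad.mean() - good.mean()) / good.std()               # Standard Deviation (SDChg)

deviations = pd.DataFrame({"%Chg": pct_chg, "COVChg": cov_chg, "SDChg": sd_chg})
print(deviations.sort_values("SDChg", key=abs, ascending=False).head(12))   # biggest movers first
```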

In the image below you’ll see the deviation values for each method of calculation. Here red means a positive change, and blue means a negative change.


Our four deviation detection methods. Red is positive change in values. Blue is negative value change.

So, if we’re looking for a trouble spot within our manufacturing process, the first thing we’re going to want to do is start to look at the deviation values.

By sorting by the different detection methods, we can begin to identify some patterns. And, we can really pare down our list of potential culprits. Just an initial sort by deviation values eliminates all but about a dozen of our tags as suspects.

So, let’s look at tags where the majority of the models show high deviation values. That gives us a place to begin troubleshooting.

Applied Deviation Analysis

For instance, here we have our Cooling Water tag, and in three of the four models we’re seeing that it has a fairly high deviation value. It’s a prime suspect.


So, let’s analyze that, and take a closer look.

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

Within our deviation detection application we can just select the tag and click the “trend” button to bring up the data trend for the Cooling Water tag.

Looking at the trend, it’s definitely going up, and deviating from the “good” operating conditions. But we also know our process. And we know that the cooling water comes from the river, and we know that the river temperature fluctuates with the seasons. So, we’ll add our River Temp tag to the trend, and sure enough – it looks like it’s just a seasonal change.


Pairing our Cooling Water Tmp tag with our River Temp tag. Nope, that’s not it!

So, the Cooling Water isn’t our culprit. What can we look into next? This 6X dT tag looks like a problem, with multiple indications of high variation. This represents the temperature change across the sixth section of the extraction train.


This looks like the source of our problem.

It’s likely that this is going to be our problem tag. Putting our heads together with the rest of the team, we can pretty quickly get anecdotal evidence to either confirm or deny that, say, maintenance was performed in this part of the process recently. If it’s still unclear, we can pull it up on a trend, like we did with our Cooling Water tag, and see if we are indeed seeing some erratic behavior with the values from this tag.

Looking Ahead

Really, this is routine troubleshooting that is done daily at process facilities around the world. But, when speed is imperative, and you need a quick answer for management when they’re asking why their machine is down or the product quality is out-of-spec, having a robust deviation detection system in place, and a good process for analyzing the resulting data, can really help make things clear quickly.


Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast

Data Visualization, Historian Data, Process Manufacturing, Troubleshooting & Analysis

One of the problems in process manufacturing is that processes tend to drift over time. When they do, we encounter production issues. Immediately, management wants to know, “what’s changed, and how do we fix it?” Anomaly detection systems can help us provide some quick answers.

When a manufacturing process deviates from its expected range, there are several problems that arise. The plant experiences production issues, quality issues, environmental issues, cost issues, or safety issues.

One or more of these issues will present itself, and the question from management is always, “what changed?” Of course, they’d really like to know exactly what to do to go and fix it, but fundamentally, we need to know what changed to put us in this situation.

Usually the culprit is either the physical equipment – maybe maintenance that’s been performed recently that threw things off – or it’s in the way we’re operating the equipment.

From a process engineer or a process operator’s perspective, we need to quickly identify what changed. We’re possibly in a situation where the plant is losing money every minute we’re operating like this, so operators, engineers, supervisors… everyone is under pressure to fix the problem as soon as possible.

In order to do this, we need to understand how the value has changed, and the frequency of those changes. Or rather, how big are the swings and how often are they occurring?


Watch the webcast to see us use deviation detection to troubleshoot process issues.

Watch the Webcast

Time Series Anomaly Detection Methods

Let’s begin by looking at some time series anomaly detection (or deviation detection) methods that are commonly used to troubleshoot and identify process issues in plants around the world.

Absolute Change


This is the simplest form of deviation detection. For Absolute Change, we get a baseline average where things are running well, and when we’re down the road, sometime in the future, and things aren’t running so hot, we look back and see how much things have changed from the average.

Absolute change is used to see if there was a shift in the process that has made the operating conditions less than ideal. This is commonly used as a first pass when troubleshooting issues at process facilities.

Variability


Here we want to know if the variability has changed in some way. In this case, we’ll show the COV change between a good period and a bad period. COV (coefficient of variation) is basically the standard deviation normalized by the average value, so high-valued tags don’t automatically get a higher spread than low-valued ones.

Variability charts are commonly used to identify less consistent operating conditions and perhaps more variations in quality, energy usage, etc.

Standard Deviations


Anyone who’s done control charts in the past 30 years will be familiar with standard deviations. Here we take a period of data, get the average, calculate the standard deviation, and put limits up (+/- 3 standard deviations is pretty typical). Then, you evaluate where you’re out based on that.

Standard deviation is probably the most common way to identify how well the process is being controlled, and is used to define the operating limits.
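As a quick sketch, the limits are just the baseline average plus or minus three standard deviations; baseline and recent are assumed pandas Series of tag values:

```python
center, sigma = baseline.mean(), baseline.std()
upper, lower = center + 3 * sigma, center - 3 * sigma

out_of_control = ~recent.between(lower, upper)   # True wherever the tag is outside +/- 3 sigma
```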

Multi-Parameter


This is a more advanced method of deviation detection that we at dataPARC refer to as PCA Modelling. Here we take all the variables and put them together and model them against each other to narrow the range. Instead of having flat ranges, they’re often rate-dependent.

The benefit of PCA Modelling over the other anomaly detection methods, is that it gives us the ability to narrow the window and get an operating range that is specific to the rate and other current operating conditions.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Setting up Anomaly Detection

Now that we have a basic understanding of some methods for detecting anomalies in our manufacturing process, we can begin setting up our detection system. The steps below outline the process we usually take when setting anomaly detection up for our customers, and we typically advise them to take a similar approach when doing it themselves.

1. Select Your Tags

Simple enough. For any particular process area you’re going to have at least a handful of tags that you’re going to want to review to see if you can spot the problem. Find them, and, using your favorite time series data trending application (if you have one), or Excel (if you don’t), gather a fairly large set of data. Maybe a month or so.

At dataPARC, we’ve been performing time series anomaly detection for customers for years, so we actually built a deviation detection application to simplify a lot of these routine steps.

For instance, if we want, we can grab an entire process unit from a display graphic and drag it into our app without having to take the time to hunt for the individual tags themselves. Pretty cool, right?

If we just pull up the process graphic for this part of the plant…

…we can quickly compile all the tags we want to review.

2. Filter out Downtime

This is a CRITICAL step, and should be applied before you even identify your good and bad periods. In order to accurately detect anomalies in your process data, you need to make sure to filter out any downs you may have had at your plant that will skew your numbers.


Downtime.

dataPARC’s PARCview application allows you to define thresholds to automatically identify and filter out downtime, so if you’re using a process analytics toolkit like PARCview, that’ll save you some time. If your analytics tools or your historian doesn’t have this capability, you can also just filter out the downs by hand in Excel. Regardless of how you do it, it’s a critical step.
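Done by hand, the filter can be as simple as dropping rows where a rate or speed tag sits below a threshold. The tag name and threshold here are made up:

```python
# df: the month or so of historian data pulled in step 1 (assumed)
running = df[df["production_rate"] > 50]   # keep only rows where the line was actually running
```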

Need to get better data into the hands of your process engineers? Check out our real-time process analytics tools & see how better data can lead to better decisions.

3. Identify Good Period

Now you’re going to want to review your data. Look back over the month or so of data you pulled and identify a period of time that everyone agrees the process was running “good”. This could be a week, two weeks… whatever makes sense for your process.


Things are running well here.

4. Identify Bad Period

Now that we have the base built, we need to find our “bad” period, whether that means waiting for a bad period to occur or proactively looking for bad periods as time goes on.


Here we’re having some trouble.

5. Analyze the Data

Yes, it’s important to understand the different anomaly detection methods, and yes, we’ve discussed the steps we need to take to build our very own time series anomaly detection system, but perhaps the most critical part of this whole process is analyzing the data after we’ve become aware of the deviations. This is how we pinpoint which tags – which part of our process – is giving us problems.

Deviation Analysis is a pretty big topic that we’ve covered extensively in another post.

Looking Ahead

Anomaly detection systems are great for being able to quickly identify key process changes, and really the system should be available to people at nearly every level of your operation. For effective troubleshooting and analysis, everyone from the operator, the process engineer, maintenance, management… they all need to have visibility into this data and the ability to provide input.

Properly configured, you should be able to identify roughly what your problem is, within 5 tags of the problem, in 5 minutes.

So, when management asks “what’s changed, and how do we fix it?”, just tell them to give you 5 minutes.


Watch the webcast

In this recorded webcast we discuss how to use deviation detection to quickly understand and communicate issues with errant processes, and in some cases, how to identify problems before they even occur.

Watch the Webcast

Dashboards & Displays, Data Visualization, News, Uncategorized

If you’re like us, you’ve likely been sent an email or told in meetings that part or much of your company staff will now work remotely. Testing for remote computer access and data volume traffic is ongoing as plans are worked out for this new structure. For most companies, that means VPN or other remoting methods. Virtual meetings are replacing face-to-face ones, and pseudo to full quarantines are on the rise. Phone conversations will go on, but they won’t fully suffice to cover staffing roles. And besides, you’re talking to neighbors and wondering if you should take one more trip to the grocery store. In the midst of all the chaos, your company still needs you to not only do your job but to excel at it.


Dashboards & Displays, Data Visualization, Process Manufacturing

Most modern manufacturing processes are controlled and monitored by computer-based control and data acquisition systems. This means that one of the primary ways an operator interacts with a process is through computer display screens. These screens may simply passively display information, or they may be interactive, allowing an operator to select an object and make a change which will then be relayed to the actual process. This interface where a person interacts with a display, and consequently the process, is called a Human-Machine Interface, or HMI.


Data Visualization, Uncategorized

A lot goes into running a mill, plant, or other process operation, and a lot depends on running these operations efficiently. We can all agree that those in charge – plant managers, process engineers, and others in relevant roles – have a lot on their plates and are often working long hours to get everything done. Profitability and efficiency are dependent on a smooth operation. In order to run an efficient plant operation, one must be able to access a large amount of information from the various assets and machinery that comprise the operation. Systems are installed to collect time-relevant data in order to evaluate what is happening in the process. Successful operations depend on the ability to locate a problem if there is one, as well as to access and manipulate data in the way that best suits the person who wants to see it, in order to effectively problem-solve. dataPARC, simply put, is the very best data visualization and analysis software out there for the process industry.


Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The digital transformation – everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally, we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, there were certain notable scientists and inventors who helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.


Dashboards & Displays, Data Visualization, Process Manufacturing, Training

New training dates have been added, so now is the time to register for your dataPARC training held in Vancouver, Washington, just across the river from beautiful Portland, Oregon. Whether you need to escape the heat of summer, the cold of winter, or just need to get away from the plant, our hands-on training is your ticket to a welcome escape. Oh, did we mention the training?


Data Visualization

The purpose of process control alarms is to use automation to assist human operators as they monitor and control processes, and alert them to abnormal situations. Proper process alarm management requires careful planning and has a significant impact on the overall effectiveness of a control system.

Incoming process signals are continuously monitored, and if the value of a given signal moves into an abnormal range, a visual and/or audio alarm notifies the operator of that condition. This seems like a simple concept, almost not worthy of a second thought, and unfortunately, sometimes the configuration of alarms in a control system doesn’t get the attention it deserves.

In this post we’ll talk about the history of process alarms in manufacturing, and discuss best practices for configuring alarms for effective process control.

Early Process Alarm Management

Before digital process control, each alarm indicator required a dedicated lamp and some physical wiring. This meant that:

  1. Due to the effort required, the need for a given alarm was carefully scrutinized, somewhat limiting the total number of alarms
  2. Once the alarm was in place, it had a permanent “home” where an operator could become comfortable with its location and meaning

Capture key process variable excursions and send automatic notifications via email or SMS when events occur.

Process Monitoring Tools

The Introduction of Digital Alarms

As control systems became digital, the creation and presentation of alarms changed significantly. First, where a “traditional” control panel was many square feet in size, digital control system human machine interfaces (HMIs) consisted of a few computer monitors which displayed a representation of the process in an area more appropriately measured in square inches than square feet.

Second, creating an alarm event was a simple matter of reconfiguring some software. Multiple levels of alarms (hi & hi-hi, lo & lo-lo) could easily be assigned to a single process value. This led to an increase in the number of possible process alarm notifications.

Finally, when an alarm was activated, it was presented as an icon, or as flashing text on a process schematic screen, and then logged in a dedicated alarm list somewhere within the large collection of display screens. However, when the alarm was presented, it lacked the consistency of location and intuitive meaning that the traditional physical lamp had.

The Dilemma With Digital Alarms

The digital alarm systems worked acceptably well for single alarms and minor upsets. But for major upsets the limited visual real estate and the need to read and mentally place each alarm created bottlenecks to acknowledging and properly responding to large numbers of alarms in a short interval of time.

If a critical component in a process fails, for example a lubrication pump on a large induced draft (ID) fan, the result can be a “flood” of alarms occurring over a short time period. The first wave of alarms is associated with the immediate failure: low lube oil pressure, low lube oil flow, and high bearing temperatures.

The second wave is associated with interlocks shutting down the fan: high inlet pressure, low air flow, and low downstream pressure. With no ID fan, the upstream boiler will soon start to shut down and generate numerous alarms, followed most likely by problems from the process or processes which are served by the boiler.

The ASM Consortium

Analyses of a number of serious industrial accidents have shown that a major contributor to the severity of the accidents was an overwhelming number of alarms that operators were not capable of understanding and properly responding to in a timely manner. As a result of these findings, in 1992 a consortium of companies including Honeywell and several petroleum and chemical manufacturers was established to study the issue of alarm management, or more generally, abnormal situation management.

The ASM Consortium, with funding from the National Institute of Standards and Technology, researched and developed a series of documents on operator situation awareness, operator effectiveness and alarm management. Since then a number of other industry groups and professional organizations, such as the Engineering Equipment and Materials Users Association in the UK and Instrument Society of America have also examined the issue of alarm management and issued best practices papers.

Exploring process controls? Easily implement online SPC/SQC utilizing dataPARC’s integrated limit management tools and alarm/event engine.

Process Alarm Management: Best Practices

The central message of these process alarm management best practices documents is that the alarm portion of a digital control system should be put together with as much care and design as the rest of the control system. It is not adequate to simply assign a high and low limit to each incoming process variable and call it good. There are a number of practices which can improve the usability and effectiveness of an alarm system. Some techniques are rather simple to implement; others are more complex and require more effort.

1. Planning

When designing or evaluating an existing system, start by looking at each alarm. Evaluate whether it is really needed and whether it is set correctly. For example, a pump motor may have an alarm which sounds if the motor trips out. However, if there is also a flow sensor downstream of the pump which has an alarm on it, then if the pump stops, two alarms will register. Since the real effect on the process is a loss of flow, it makes sense to keep that alarm and eliminate the motor-trip alarm.

2. Prioritization

Alarms should be prioritized. Some alarms are safety related and should be presented to the operator in a manner that emphasizes their importance. High priority alarms should be presented in a fixed location on a dedicated alarm display. This allows operators to immediately recognize them and react in critical situations. It is very difficult to read, understand and quickly react to an alarm which is presented only in a scrolling list of alarms which will be continuously growing during a process upset.


3. Grouping & Suppression

Correctly identifying the required alarms and prioritizing them is a help, but these techniques alone will not stop a surge of alarms during a crisis. In order to significantly reduce the number of presented crisis alarms, methods like alarm grouping and alarm suppression are needed. As mentioned in the ID fan example above, a single point of failure can lead to several abnormal process conditions and thus several alarms.

It is possible to anticipate these patterns and create control logic which handles the situation more elegantly. In the case of the ID fan, if the inlet pressure to the fan goes high and the outlet flow drops, it makes sense to present the operator with a single virtual alarm of “Fan down” rather than a dozen individual alarms, all presented within seconds of each other, that he or she has to deal with. While the operator is trying to comprehend a cluster of individual alarms to deduce that the fan is down, the upstream boiler may trip out.

Check out our real-time process analytics tools & see how better data can lead to better decisions.

Check out PARCview

Hopefully, with a single concise alarm of a lost fan, the operator can take action at the boiler and perhaps keep that unit running at reduced rate until the fan can be restored. All alarms are still registered by the system for diagnosis and troubleshooting, but only condensed, pertinent information is presented to the operator. This type of grouping and suppression can be done manually as well. If there is a process unit that is sometimes taken offline or bypassed, it makes sense to group and suppress all of the alarms associated with that unit’s operation. An operator shouldn’t have to continuously acknowledge a low flow alarm on a line that he knows has no flow in it.
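As a sketch of the grouping idea (not a description of any particular DCS or of dataPARC’s alarm engine), the logic might look something like this: when the pattern that identifies a lost fan is present, record the raw alarms but present a single virtual alarm.

```python
def evaluate_fan_alarms(inlet_pressure, outlet_flow, raw_alarms, alarm_history):
    """Collapse a predictable flood of alarms into one virtual "Fan down" alarm.

    Thresholds and names are hypothetical; real suppression logic would live in
    the control system or alarm engine.
    """
    alarm_history.extend(raw_alarms)   # every raw alarm is still logged for troubleshooting

    # Pattern that identifies a lost fan: high inlet pressure plus low outlet flow
    if inlet_pressure > 25.0 and outlet_flow < 5.0:
        return ["ID FAN DOWN"]         # one concise alarm for the operator
    return raw_alarms                  # otherwise present the alarms individually
```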

4. Human Administration

Perhaps the most important part of alarm management is the actual human administration of the system. However a system is designed, its intent and use need to be clearly communicated to the operators who use the system. Training operators on how to use and respond to alarms is as important as good original system design. Process alarm management is a dynamic endeavor, and as operators use the system they will have feedback which will lead to design improvements. The system should be periodically audited to look for points of failure and areas of improvement. As processes change, the alarm configuration will also need to be changed. This ongoing attention to the alarm system will make it more robust and yield a system which will avert serious process related incidents.

Looking Ahead

Configuring and maintaining process alarms properly requires careful planning and has a significant impact on the overall effectiveness of a control system. Process alarm management best practices dictate that the alarm portion of a digital control system should be put together with as much care and design as the rest of the control system.

Want to Learn More?

Download the datasheet and see how to configure complex, multi-variable process alarms with dataPARC.

Download PDF