Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Reducing downtime increases productivity, lowers costs, and decreases accidents. Downtime tracking software can help you get there, because knowing why the process is going down is the key to reducing it.

Monitor, report, & analyze production loss from unplanned downtime, poor quality, and performance issues.

Learn more

What is Downtime?

Downtime is any period in which a process is not running. However, not all downtime is created equal. There are two types of downtime: planned and unplanned.

Downtime events represented visually in dataPARC’s process trending software.

What is Planned Downtime?

Planned downtime is when production schedules time to take the process down. It is a necessity: machinery must be maintained through inspections, cleaning, and part replacement.

Planned downtime allows operations to organize, schedule, and prepare for the downtime. They can coordinate with contractors, order parts, and plan tasks to complete while the process is down. Planned downs can be organized so personnel have tasks to accomplish and the necessary tools on hand.

What is Unplanned Downtime?

Unplanned downtime is when the machine or process goes down for any unscheduled reason: a broken part, lack of material, a power outage, etc. Unplanned downtime is unpredictable and should be the target when aiming to reduce overall downtime.

Importance of Reducing Unplanned Downtime

Unplanned downtime is significantly more costly and dangerous than planned downtime. Because it is unpredictable and the process can go down for numerous reasons, it is impossible to be prepared for every situation.

Waiting on parts or on the personnel needed to fix an issue takes time and can keep the machine down longer. Longer downtime means less time making product, directly affecting the bottom line.

Another cost of unplanned downs is unsellable product and wasted material. The periods just before a down, during the down itself, and during startup tend to produce off-quality product.

Unplanned downtime can also contribute to near-misses or accidents. During unplanned downtime the goal is to get the machine or process running again as soon as possible. That pressure can create a stressful, chaotic environment in which people react rather than stop to think about the best plan forward.

Reducing unplanned downtime can help lower overall operating costs. It also reduces the times when employees are put in unpredictable situations, decreasing the likelihood of an accident occurring.

How to Reduce Downtime

There are numerous reasons for process downtime and multiple approaches may need to be implemented in the effort to reduce it.

1. Track Downtime

Before jumping into ways to reduce downtime, it is critical to track it. Tracking downtime shows why the process is going down and provides a metric for whether it is improving.

The data collected while tracking downtime will be used to help reduce it. Consider collecting the following for each down occurrence (a minimal record sketch follows the list):

  • Duration
  • Reason/Cause
  • Product at time of down
  • Process Area
  • Shift or Crew
  • Operator Comments
  • Other attributes such as environmental occurrences due to downtime, waste collected over the duration, safety concerns, etc.
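
For teams building their own tracker, the fields above map naturally onto a simple record structure. Below is a minimal sketch in Python; the field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DowntimeEvent:
    """One down occurrence. Field names are illustrative assumptions."""
    start: datetime                # when the process stopped
    end: datetime                  # when the process restarted
    reason: str = "Unassigned"     # cause, from the reason tree
    product: str = ""              # product running at time of down
    process_area: str = ""         # machine or area that went down
    crew: str = ""                 # shift or crew on duty
    comments: str = ""             # free-form operator notes

    @property
    def duration_minutes(self) -> float:
        return (self.end - self.start).total_seconds() / 60.0
```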

This data can be collected manually; however, an automated system ensures the data is captured for every event, and more consistent data will do more to reduce downtime.

Downtime tracking software can automate capture and help organize downtime events. Some considerations when researching downtime tracking software:

  • Ease of use
  • Automatic capture of downtime events
  • Recording of downtime causes and other data
  • Tools to analyze data and events
  • Integration with process data

There are many options for downtime tracking software on the market. Some are dedicated downtime tracking applications, while others, like dataPARC’s PARCview, offer a suite of manufacturing analytics tools that includes a downtime tracking module. The right choice is the one that will be used consistently.

Looking to reduce downtime? dataPARC’s real-time production monitoring software uses smart alarms to automatically alert operators & maintenance crews to unplanned downtime events.

2. Monitor Production

Having a system to monitor production can also help reduce downtime.

Visible process trends at operator stations show how the process is running over time and whether variables are drifting or staying consistent.

Real-time production dashboards can display quality data, relaying information directly from the lab to operations so product stays on quality.

Alarms can be used independently or in conjunction with trends and dashboards to warn operators when upset conditions are occurring, allowing them to react more quickly and potentially prevent a down from happening.
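
As a rough sketch of the idea, the example below polls a tag and raises an alert when the value leaves its limits. The tag name, limits, and the read/notify functions are invented placeholders; a real system would use the historian's or alarm package's own interfaces.

```python
import random
import time

HIGH_LIMIT = 95.0   # hypothetical limits for an example temperature tag
LOW_LIMIT = 80.0

def read_tag(tag_name: str) -> float:
    """Stand-in for a real data-source read; returns a simulated value."""
    return random.gauss(88.0, 5.0)

def notify(message: str) -> None:
    """Stand-in for a real alert channel (email, SMS, operator display)."""
    print(message)

def monitor(tag_name: str, polls: int = 10, poll_seconds: float = 1.0) -> None:
    for _ in range(polls):
        value = read_tag(tag_name)
        if not LOW_LIMIT <= value <= HIGH_LIMIT:
            notify(f"{tag_name} = {value:.1f} outside [{LOW_LIMIT}, {HIGH_LIMIT}]")
        time.sleep(poll_seconds)

monitor("DRYER_TEMP")  # hypothetical tag name
```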

3. Create a Preventative Maintenance Schedule

Preventative maintenance happens during planned downtime or while the process is running. Replacing parts during planned downtime allows the site to order the necessary parts in advance and make sure the proper personnel are on site to perform the work, saving time and money.

Regular maintenance when the process is running, such as adding or changing lubricating oils, and cleaning can help increase the lifetime of the parts.

Once a schedule is created, it can be tracked to ensure tasks are accomplished. MDE (PARCview’s Manual Data Entry) can be configured on a time schedule and integrated with alerts: if a task is skipped, a reminder message can be sent to the operator or escalated to a supervisor.

Maintenance data can be captured and digitized to help predict downtime events for the development of preventative maintenance schedules.

Recording preventative maintenance data allows sites to analyze it alongside downtime and process data. Correlations can appear and help drive necessary maintenance and reduce downtime.

4. Provide Operator Decision Support

Unplanned down events are inevitable and cannot be eliminated completely, so reducing downtime should also mean reducing the duration of each event when it occurs.

Creating tools and troubleshooting guides for operators to use in the event of a down will help get the process back up more quickly.

To get the process running, operators need to know why it went down in the first place. Providing operators with the necessary resources to find the root cause is key to resolving the issue quickly.

Process dashboards, trends, 5-Why analysis, and workflows can all help determine the root cause.

Trends, dashboards, and centerlines can draw attention to significant changes in the process. dataPARC’s Centerline display is a tabular report with run-based statistics. This format helps ensure the process is consistent and can point to variables running outside past operating conditions or limits.

Centerlines provide early fault detection and process deviation warnings, so operators can respond quickly to reduce unplanned downtime events.

A workflow or preconfigured 5-Why analysis can also help point to the root cause and a suggested solution.

Check out our real-time process analytics tools & see how you can reduce downtime & product loss.

Check out PARCview

5. Perform DMAIC Analysis

The above suggestions are starting points to reduce downtime. If those are in place, the DMAIC process (Define, Measure, Analyze, Improve, Control) can be used. It is a fundamental LEAN manufacturing tool and can be used to help reduce downtime.

Define

First, define the process, the conditions under which it is considered down, and a list of potential downtime reasons.

For each process, determine how it is identified as running or not running.

Most downtime tracking software needs a tag or variable indicating when the process is considered down. If a dedicated tag does not exist, consider a utility feeding the process, such as steam, water, or pressure. As long as there is a clear value that indicates whether the process is running, that variable can be used.
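
To make this concrete, here is a minimal sketch that derives down events from a sampled indicator tag, such as steam flow to the process. The threshold logic and data layout are assumptions for illustration, not any particular product's method.

```python
import pandas as pd

def detect_down_events(series: pd.Series, running_threshold: float) -> pd.DataFrame:
    """series: time-indexed tag values; the process is considered 'down'
    whenever the value falls below running_threshold (e.g. near-zero
    steam flow). Returns one row per contiguous down interval."""
    down = series < running_threshold
    # A new run starts wherever the down/up state changes
    run_id = (down != down.shift()).cumsum()
    events = []
    for _, run in series[down].groupby(run_id[down]):
        events.append({
            "start": run.index[0],
            "end": run.index[-1],
            "minutes": (run.index[-1] - run.index[0]).total_seconds() / 60,
        })
    return pd.DataFrame(events)
```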

Brainstorming a list of potential downtime reasons is also needed before tracking events. This reason list (or tree) can be shared across areas or unique to each process area.

Assigning reasons to downtime events provides data that can be used to reduce downtime in the future.

These reasons need to include both planned and unplanned causes. During the analyze phase the planned reasons can be filtered out to focus on the unplanned downtime. For more information on creating a reason tree see 5 steps to harness your data’s potential.

Measure

Measuring and assigning a reason to downtime is a critical step in reducing it. Robust downtime tracking software will make measuring easier. Make sure to capture the who, what, when, where, and why of each downtime event.

Once the downtime tracking software records the downtime event, a reason can be assigned.

Some systems can automatically assign reasons based on an error code from the machine. Users can verify the reason or select it from the predefined reason tree.

Additional information can be helpful to capture for the analyze phase. You may consider allowing users to type in free form comments in addition to the predefined reason to further explain why a downtime event occurred. If using PARCview, the evidence field can be configured to capture other important process data over the duration of the event.

Analyze

Now that the downtime is recorded and categorized, it can be analyzed. Pareto charts are useful here: data can be charted on a Pareto by duration or by count of events.

Pareto charts can help you analyze downtime events and learn the most significant causes of downtime.
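
A Pareto ranking is straightforward to compute once events carry a reason and a duration. Here is a minimal pandas sketch, assuming a DataFrame of events with 'reason' and 'minutes' columns:

```python
import pandas as pd

def downtime_pareto(events: pd.DataFrame) -> pd.DataFrame:
    """Rank downtime reasons by total minutes down, with event counts
    and the cumulative percentage used to draw the Pareto curve."""
    pareto = (events.groupby("reason")
                    .agg(total_minutes=("minutes", "sum"),
                         event_count=("minutes", "size"))
                    .sort_values("total_minutes", ascending=False))
    pareto["cumulative_pct"] = (pareto["total_minutes"].cumsum()
                                / pareto["total_minutes"].sum() * 100)
    return pareto
```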

Take into consideration other key process data such as safety concerns, environmental risks or material wasted, in addition to duration of events to help determine which downtime cause will be most beneficial to target and reduce.

It is not always the reason with the most total minutes down that should be the target.

Take, for instance, an event that caused 15 hours of downtime but was due to a weak part and is unlikely to happen again, versus a cause that recurs monthly but results in only about 75 minutes of downtime each time. The recurring event adds up to roughly 15 hours every year, so it is the more beneficial one to improve.

Improve

When looking for ways to reduce a downtime cause, look both at how to prevent the event from occurring in the first place and at how to get the process back up when it does occur. Both approaches are needed to reduce downtime.

Think about the frequency of inspections, cleanings, how long parts last and if they can be put on a schedule to be replaced rather than waiting for them to fail while the process is running. Refer to the preventative maintenance schedule and update as needed.

Determine the best way to reduce the most impactful downtime causes or reduce the effect. A payoff matrix can help point to the most impactful, least costly solutions.

Control

Continue to measure and analyze the downtime to ensure items that have been reduced do not start popping back up. Repeat the cycle and target another reason. Workflows and SOPs (Standard Operating Procedures) can be created to help stay in control.

Conclusion

Reducing unplanned downtime requires multiple approaches, and finding the right tools and software for tracking and monitoring is key. Data is needed to drive improvement, both to prevent future events and to reduce the duration when the process does go down.

Downtime tracking software can help save, organize, and review downtime events, allowing you to reduce downtime in your manufacturing process more effectively. dataPARC’s PARCview integrates downtime tracking and process monitoring in one user-friendly program.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Go beyond a typical gap analysis with a real-time gap tracking dashboard. Minimize manufacturing gaps such as operational cost or waste by creating gap tracking systems, such as dashboards, that calculate the gap in real time rather than at the end of the month. In this article we outline why in-line gap tracking is beneficial and walk through the steps to create real-time calculations and dashboards that track manufacturing gaps as they happen, allowing operations to make data-driven decisions.

Implement real-time gap tracking with dataPARC to help you optimize & control your processes

View Process Optimization Solutions

Why Gap Tracking?

Gap tracking goes a step beyond gap analysis. Gap analysis is the comparison of actual operating conditions against targets; it is typically done monthly and is an important tool in the continuous improvement process. However, gap analysis has some shortcomings. The feedback loop is drawn out: by the time you collect the data and compare values, it can be days or months after the fact. This can prevent actionable solutions and cause lost opportunity.

To resolve these shortcomings, operations needs real-time information correlated to the levers they can pull on the machine, and the power to make decisions and adjustments based on it. The information needs to be quick and easy to view and understand, and progress needs to be measured and visible in real time. A dashboard can provide all of this.

Dashboards help visualize the live data and updates or changes to a graphic can happen relatively quickly. Operations can utilize a real-time gap tracking dashboard to alert them of what is causing the gap and get the process back on track in the moment, rather than realize the problem days, weeks, or months later.

How to Create a Gap Tracking Dashboard

Follow along as we demonstrate how we built this gap tracking dashboard.

Conduct a Gap Analysis

The first step in creating a Real-Time Gap Tracking dashboard is to complete a Gap Analysis:

  • Define the area of focus and targets
  • Measure the variables
  • Analyze the current values against the targets
  • Improve the process with quick wins to minimize large gaps
  • Control with routine gap analysis, which can include creating a gap tracking dashboard

Gap Tracking Requirements

Regardless of the process and gap being tracked, the same general information is needed to build a gap tracking dashboard. Many of the following requirements will be pulled from the Gap Analysis.

Adequate measurements

As with gap analysis, adequate measurements are required for gap tracking; however, measurements may need to be taken more frequently for a responsive gap tracking calculation. This can be a challenging step, but the more variables measured close to real time, the more accurate the gap tracking calculations will be.

If a variable only has 3-4 data points per day, it can be difficult to see how changes affect the gap in real time. Variables without adequate measurement may be removed from the dashboard, with more emphasis placed on those with more data points.

Process baselines

Process baselines are a great way to determine targets if they are not already outlined. Overall process targets typically come from upper management or operational plans. It is necessary to break these overall targets into their individual inputs. Those inputs could be broken down even further. Depending on the process, there could be targets for different products.

One way to determine a baseline is to find periods of good quality and production, identify the operating conditions during those periods, and work out how to replicate them.

Once a list of individual variables is created, a target should be assigned to each. By meeting each individual target, the overall target should be met. If individual variables do not have targets, the process baselines can be used instead.

dataPARC’s Centerline display is a smart aggregation tool that can be used to help establish operating baselines.

Custom calculations

Another key step in building a gap tracking system is to standardize measurements and units. All variables should be converted to a per-unit basis; common options include dollars per ton, dollars per hour, off-quality tons per ton, or waste tons per ton.

Once all the input variables are converted to the same unit, they can be combined to create the overall process gap.
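
As a sketch of the arithmetic, suppose each input variable has an actual usage, a target usage, and a unit cost. Each variable contributes (actual - target) * unit cost on a per-ton basis, and the contributions sum to the overall gap. The variable names and numbers below are invented for illustration.

```python
# Each entry: (actual usage per ton, target usage per ton, $ per unit of usage)
variables = {
    "steam_klb_per_ton":   (2.10, 2.00, 12.0),   # invented numbers
    "chemical_lb_per_ton": (9.50, 9.00, 0.85),
    "fiber_ton_per_ton":   (1.08, 1.05, 410.0),
}

gap_per_ton = {
    name: (actual - target) * unit_cost
    for name, (actual, target, unit_cost) in variables.items()
}
total_gap = sum(gap_per_ton.values())   # $/ton above (+) or below (-) target

for name, gap in sorted(gap_per_ton.items(), key=lambda kv: -kv[1]):
    print(f"{name:>22}: {gap:+.2f} $/ton")
print(f"{'overall gap':>22}: {total_gap:+.2f} $/ton")
```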

Value opportunities

Involving those with a high degree of process knowledge is critical. They will be able to help identify all the process inputs, then narrow the list to variables that can provide the most value opportunities.

These value opportunities are then tied into the gap tracking dashboard as an operator workflow. The workflow focuses on the variables that operators can control and that have the greatest effect on the gap. This is where the calculated gap gets connected to process levers that operators can manipulate to get things back on track.

With insight from a process expert, one or two variables may stand out as offering the most room for value-added opportunities.

These variables should be the focus of the graphic’s layout. Since most people read left to right, top to bottom, the most important information should appear in the upper left of the dashboard (or closest to the operator if the monitor will sit off to the side or up high). This helps ensure the variables with the most opportunity to close the gap are looked at first.

Operator buy-in

Ultimately, operators are the ones who will use the dashboard to make data-driven decisions in real time. Involve the people who will use the dashboard in its design and implementation to build ownership and buy-in. Without operator buy-in, the dashboard is wasted.

Software to visualize the dashboard and perform the calculations

Find the right visualization tool for your site. A process data visualization tool should be able to display trends, grids that change color or raise alerts, and links to other displays or trends for quick data interpretation.

In this example dataPARC’s Graphic Designer was used to build the dashboard.

Depending on the process and the number of inputs, these calculations can get rather large and take a while to process on the fly, and even longer when looking at past data.

dataPARC’s Calc Server allows calculations to be historized, making it a great tool for fast calculations and viewing history.

Considerations When Building & Using a Gap Tracking Dashboard

Gap tracking dashboards are going to look different from machine to machine and site to site. Here is a list of suggestions to keep in mind while building your own gap tracking dashboard.

  • Use grids and rolled-up data to convey the current gap. Colored or patterned backgrounds can help draw attention.
  • When a gap occurs, what levers can operations pull to make a change? Those variables should be a focus of the dashboard. This can be done with trends, focused metrics, or a link to another display to “zoom into” that lever and see whether a change can be made and how it affects the process.
  • Work with operators to determine those levers and key pieces of information; ask what would be helpful for them to see. The goal is to get all the important information in one place.
  • Monitor progress and determine whether the overall gap is decreasing; if not, determine why. Make changes to the dashboard as needed.
  • The dashboard should provide information without dictating how to run, as not all variables are always within the operators’ control.

Manufacturing Gap Tracking Example

Let’s look at a Gap Tracking dashboard for a paper machine.

1. Summary Trend

The first thing is a large trend showing the overall paper machine gap in dollars per day. The blue line represents the real-time gap and the yellow the target.

2. Category Trend

The second trend, in the bottom left, shows the individual calculated gaps by category, also on a dollars-per-day basis. This view allows the user to quickly identify if a category is trending in the wrong direction.

3. Gap Tracking Table

The table in the bottom right of the screen shows the current category gaps on a dollars-per-ton and dollars-per-hour basis. Values are highlighted red for over expected cost and green for under. At a glance, users can see the current status of each category.

4. Chemical Usage

Off to the right is a chemical usage button that will pull up another display. This button was added because chemical usage was found to have multiple inputs and levers for the operator to pull to close the gap.

Watch the video below to see this gap tracking dashboard in action.

Manufacturing analytics software like dataPARC’s PARCview offers tools to help manufacturing companies perform real-time gap tracking after a gap analysis.

Conclusion

A gap tracking dashboard can provide operators with a clearer picture of the gap in real-time, allowing them to make data-driven decisions. The alerts and real-time calculations bring awareness, letting operators know when something isn’t running optimally.

It is a way to drive process savings by tying analytics to actionable changes.

Instead of waiting for the end of the month to find there was a gap, it is found in real time. Troubleshooting moves into the present: changes can be made immediately to reduce or prevent larger process losses.

Although gap tracking dashboards are powerful tools, they do not replace regular gap analysis. To drive continuous improvement, gap analysis should still be done regularly, and targets on the gap tracking dashboard should be adjusted to reflect any process changes.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Both established operational data historians and newer open-source platforms continue to evolve and add new value to business, but the significant domain expertise now embedded within data historian platforms should not be overlooked.

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (operational, or data historians), and newer open source time-series databases.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

Data Historian vs. Time Series Database

Functionally, at a high level, both classes of time-series databases perform the same task of capturing and serving up machine and operational data. The differences revolve around types of data, features, capabilities, and relative ease of use.

Time-series databases and data historians, like dataPARC’s PARCserver Historian, capture and return time series data for trending and analysis.

Benefits of a Data Historian

Most established data historian solutions can be integrated into operations relatively quickly. The industrial world’s versions of commercial off-the-shelf (COTS) software, such as established data historian platforms, are designed to make it easier to access, store, and share real-time operational data securely within a company or across an ecosystem.

While, in the past, industrial data was primarily consumed by engineers and maintenance crews, this data is increasingly being used by IT due to companies accelerating their IT/OT convergence initiatives, as well as financial departments, insurance companies, downstream and upstream suppliers, equipment providers selling add-on monitoring services, and others. While the associated security mechanisms were already relatively sophisticated, they are evolving to become even more secure.

Another major strength of established data historians is that they were purpose-built and have evolved to be able to efficiently store and manage time-series data from industrial operations. As a result, they are better equipped to optimize production, reduce energy consumption, implement predictive maintenance strategies to prevent unscheduled downtime, and enhance safety. The shift from using the term “data historian” to “data infrastructure” is intended to convey the value of compatibility and ease-of-use.

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

What about Time Series Databases?

In contrast, flexibility and a lower upfront purchase cost are the strong suits for the newer open source products. Not surprisingly, these newer tools were initially adopted by financial companies (which often have sophisticated in-house development teams) or for specific projects where scalability, ease-of-use, and the ability to handle real-time data are not as critical.

Since these new systems were somewhat less proven in terms of performance, security, and applications, users were likely to experiment with them for tasks in which safety, lost production, or quality are less critical.

While some of the newer open source time series databases are starting to build the kind of data management capabilities already typically available in a mature operational historian, they are not likely to completely replace operational data infrastructures in the foreseeable future.

Industrial organizations should use caution before leaping into newer open source technologies. They should carefully evaluate the potential consequences in terms of development time for applications, security, costs to maintain and update, and their ability to align, integrate or co-exist with other technologies. It is important to understand operational processes and the domain expertise and applications that are already built-into an established operational data infrastructure.

Why use a Data Historian?

Typical connection management and config area from an enterprise data historian.

When choosing between data historians and open source time-series databases, many issues need to be considered and carefully evaluated within a company’s overall digital transformation process. These include type of data, speed of data, industry- and application-specific requirements, legacy systems, and potential compatibility with newly emerging technologies.

According to the process industry consulting organization ARC Advisory Group, modern data historians and data infrastructures will be key enablers of the digital transformation of industry. Industrial organizations should give serious consideration to investing in modern operational historians and data platforms designed for industrial processes.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

11 Things to Consider When Selecting a Data Historian for Manufacturing Operations:


1. Data Quality

The ability to ingest, cleanse, and validate data. For example, are you really obtaining a true average? If someone calibrates a sensor, will the average include the calibration data? If an operator or maintenance worker puts a controller in manual, has a failed instrument, or is overriding alarms, does the historian or database still record the data? Will the average include the manual calibration setpoint?
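
To illustrate the point about averages, the sketch below computes a daily average that excludes samples flagged as calibration periods. The column names and the flag are assumptions for the example.

```python
import pandas as pd

def clean_daily_average(df: pd.DataFrame) -> pd.Series:
    """df: time-indexed frame with a 'value' column and a boolean
    'in_calibration' flag. A naive df['value'].resample('1D').mean()
    would fold calibration data into the average; filtering the flagged
    samples out first avoids that."""
    good = df.loc[~df["in_calibration"], "value"]
    return good.resample("1D").mean()
```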

2. Contextualized Data

When dealing with asset and process models based on years of experience integrating, storing, and accessing industrial process data and its metadata, it’s important to be able to contextualize data easily. A key attribute is the ability to combine different data types and different data sources. Can the historian combine data from spreadsheets and different databases or data sources, precisely synchronize time stamps, and make sense of it all?
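
One common contextualization task, joining lab samples to the nearest process reading, can be sketched with pandas. The column names and the 5-minute tolerance are illustrative assumptions.

```python
import pandas as pd

def join_lab_to_process(process: pd.DataFrame, lab: pd.DataFrame) -> pd.DataFrame:
    """Both frames need a datetime 'time' column. Each lab sample is
    matched to the closest process reading within a 5-minute window;
    samples with no reading in the window get NaN process values."""
    return pd.merge_asof(lab.sort_values("time"),
                         process.sort_values("time"),
                         on="time",
                         direction="nearest",
                         tolerance=pd.Timedelta("5min"))
```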

3. High Frequency/High Volume Data

It’s also important to be able to manage high-frequency, high-volume data based on the process requirements, and expand and scale as needed. Increasingly, this includes edge and cloud capabilities.

4. Real-Time Accessibility

Data must be accessible in real time so the information can be used immediately to run the process better or to prevent abnormal behavior. This alone can bring enormous insights and value to organizations.

5. Data Compression

Deep compression based on specialized algorithms that reduce stored data while enabling users to reproduce a trend if needed.
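
Many historians use deadband-style algorithms for this. The sketch below is a simplified illustration of the general idea, not any vendor's actual algorithm: a sample is stored only when it moves more than a configured deadband from the last stored value, so a trend can be reproduced within that tolerance.

```python
def deadband_compress(samples, deadband):
    """samples: list of (timestamp, value) tuples in time order. Keeps a
    point only when it moves more than `deadband` from the last stored
    value; the first and last points are always kept so the endpoints
    of the trend survive."""
    if len(samples) < 2:
        return list(samples)
    stored = [samples[0]]
    for point in samples[1:-1]:
        if abs(point[1] - stored[-1][1]) > deadband:
            stored.append(point)
    stored.append(samples[-1])
    return stored
```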

6. Sequence of Events

SOE capability enables users to reproduce precisely what happened in operations or a production process.

7. Statistical Analytics

Built-in analytics capabilities, from statistical spreadsheet-like calculations to more complex regression analysis. Additionally, time series systems should be able to stream data to third-party applications for advanced analytics, machine learning (ML), or artificial intelligence (AI).

8. Visualization

The ability to easily design and customize digital dashboards with situational awareness that enable workers to easily visualize and understand what is going on.

9. Connectability

Ability to connect to data sources from operational and plant equipment, instruments, etc. While often time-consuming to build, special connectors can help. OPC is a good standard but may not work for all applications.

10. Time Stamp Synchronization

Ability to synchronize time stamps based on the time the instrument is read wherever the data is stored – on-premises, in the cloud, etc. These time stamps align with the data and metadata associated with the application.

11. Partner Ecosphere

The ability to easily layer purpose-built vertical applications onto the infrastructure for added value.

Looking Ahead

Rather than compete head on, it’s likely that the established historian/data infrastructures and open-source time-series databases will continue to co-exist in the coming years. As the open-source time series database companies progressively add distinguishing features to their products over time, it will be interesting to observe whether they lose some of their open-source characteristics. To a certain extent, we previously saw this dynamic play out in the Linux world.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Manufacturers use a variety of tools and systems every day to manage their process from start to finish. It is critical that these systems provide a “single pane of glass” and a “single version of truth” along the way: data can be viewed from any device, location, or system and stays synchronized across all platforms. Manufacturing operations management systems make this possible.

Real-time manufacturing operations management and industrial analytics tools

Check out PARCview

What is Manufacturing Operations Management?

Manufacturing Operations Management (MOM) is a form of LEAN manufacturing in which a collection of systems is used to manage a process from start to finish. The key to MOM is ensuring data is consistent across all systems in use, from scheduling and production to shipment and delivery.

MOM includes software tools designed for the management of people, business processes, technology, and capital assets to meet customer demand while creating shareholder value. Tying in LEAN manufacturing, processes must be performed efficiently and resources managed productively. These are the prerequisites for successful operations management.

Key Applications of Manufacturing Operations Management


Supply chain & resource management

MOM systems include tools for planning, procuring, and receiving raw materials and components, especially as it relates to obtaining, storing, and moving necessary materials/components in a timely manner and of suitable quality to support efficient production, something that is certainly critical in these times of supply chain disruptions.

To deal with today’s dynamic business environment, with challenges ranging from pandemics and shutdowns to geo-political conflicts and supply chain disruptions, organizations need to be sustainable and operationally resilient, conform to ESG goals, deploy the latest cybersecurity tools, and connect their workforces from any location.

Process & production management

Once all the resources are gathered, MOM tools need to be established for implementing product designs to specifications, developing the formulations or recipes for manufacturing the desired products, and manufacturing products that conform to specifications and comply with regulations.

Organizations must monitor and adjust their processes quickly and automatically, to efficiently evaluate the situation when an inevitable glitch occurs. This is a prime opportunity for digital transformation through MOM systems.

Distribution & customer satisfaction management

The final stage of MOM relates to the distribution to the customers, particularly as it relates to sequencing and in-house logistics, as well as supporting products through their end-of-life cycles.

Organizations must react in real-time to changing market conditions and customer expectations. They will have to innovate with new business processes that reach throughout the organization, into the design and supply chain.

Driving innovation & transformation

Successfully innovating at this level involves managing people, processes, systems, and information. When disruptive technologies are in the mix, the first challenge is often tied up in the interplay of people and technology.

Only when the people involved begin to understand what the new MOM technologies are capable of and have the tools to visualize the data and real-time manufacturing analytics software to convert this data into actionable information can they begin to take steps towards achieving the innovation.

One output of manufacturing operations management systems is a production dashboard, like this one built with dataPARC’s PARCview, which creates a shared view of current operating conditions and critical KPIs.

Manufacturing Operations Management Systems

Today’s MOM systems can play a role in achieving the next levels of operations performance because they marshal many or all the needed services in one place and can provide a development and runtime environment for small or large applications.

Common MOM Tools

In addition to leveraging the latest AI, ML, AR/VR, APM, digital twin, edge, and Cloud technologies, MOM systems often consist of one or more of the following:

  • Manufacturing Execution Systems (MES)
  • Enterprise Asset Management (EAM)
  • Human-Machine Interface (HMI)
  • Laboratory Information Systems (LIMS)
  • Plant Asset Management (PAM)
  • Product Lifecycle Management (PLM)
  • Real-time Process Optimization (RPO)
  • Warehouse Management Systems (WMS)

MOM systems integrate with business systems, engineering systems, and maintenance systems both within and across multiple plants and enterprises.

An example of multi-site operations with unique manufacturing applications at each site. Some tools, like dataPARC’s PARCview, enable manufacturers to integrate data across sites for more effective manufacturing operations management.

Supply Chain Management (SCM), Supplier Resource Management (SRM), and Transportation Management (TMS) systems are commonly used to manage the supply chain.

Plant automation systems, such as Distributed Control Systems (DCS) and Programmable Logic Controllers/Programmable Automation Controllers (PLCs/PACs) are key technologies driving manufacturing production.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

Data Visualization and Real-Time Analytics for MOM

Perhaps the most important manufacturing operations management tools for managing production are the data visualization and real-time manufacturing analytics platforms, like dataPARC, which provide integrated operations intelligence and time-series data historian software.

These MOM tools focus on data connectivity, real-time plant performance, and visualization + analytics to empower plant personnel and support their decision-making process.

Benefits of MOM Visualization & Analytics Tools


Eliminate Data Silos

Most real-time manufacturing operations management analytics tools offer the ability to connect to both manufacturing and operations data. Data from traditionally isolated data silos, such as lab quality data, or ERP inventory data, can be pulled in and presented side-by-side for analysis in a single display.

Establish a single source of truth

MOM analytics tools offering visualization plus integration capabilities enable manufacturers to create a “single version of truth” which everyone from management to the plant floor can use to understand the true operating conditions at a plant.

Often combining multiple sites and multiple data sources to form a single view, users leverage this data to gain perspectives and intelligence from both structured and unstructured operational and business data.

Produce common KPI dashboards

By measuring metrics and KPIs, such as production output, yields, material costs, quality, and downtime, users at multiple levels and roles can make better decisions to help improve production efficiencies and business performance.

Real-time manufacturing operations management dashboards from manufacturing analytics providers can pull in data from multiple physical sites or from multiple manufacturing process areas and display them in a common dashboard.


Facilitate data-driven decision-making

Without operations intelligence provided by manufacturing operations management systems, users are often unable to properly understand how their decisions affect the process. MOM analytics software can display data sourced in the business systems for direct access to cost, quality control, and inventory data to support better business decisions.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Final Thoughts

The next generation of MOM systems is here. The economics of steady-state operations have been replaced with a dynamic, volatile, disruptive economic environment in which adapting to changing supply and demand, along with issues such as pandemics, shutdowns and geo-political conflicts are the norm. Tighter production specifications, greater economic pressures, and the need to maintain supply chain visibility in real-time, be sustainable and operationally resilient, plus more stringent process safety measures, cybersecurity standards, ESG goals and environmental regulations further challenge this dynamic environment. Managing these challenges requires more agile, less hierarchical structures; highly collaborative processes; reliable instrumentation; high availability of automation assets; excellent data; efficient information and real-time decision-support systems; accurate and predictive models; and precise control. Uncertainty and risks must be well understood and well managed in all aspects of the decision-making process.

Perhaps most importantly, everyone must have a clear understanding of the business objectives and progress toward those objectives. Increasingly, effective manufacturing operations management requires real-time decisions based on a solid understanding of what is happening, and the possibilities over the entire operations cycle. Organizations pursuing Digital Transformation should consider focusing on MOM systems and not just transformative new technologies to drive operations performance to new levels. This means utilizing software tools, such as manufacturing data integration, visualization, and real-time manufacturing analytics software that gathers a user’s manufacturing data in one single pane of glass view and establishes a single source of the truth.

This article was contributed by Craig Resnick. Craig is a primary analyst at ARC Advisory Group. His focus areas include production management, OEE, HMI software, automation platforms, and embedded systems.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

Minimize manufacturing gaps such as operational cost or waste by performing a gap analysis with process data. In this article we will walk through the steps of a basic manufacturing gap analysis and provide an example.

Implement real-time gap tracking with dataPARC to help you optimize & control your processes

View Process Optimization Solutions

What is a Gap Analysis for Manufacturing?

A gap analysis is the process of comparing current operating conditions against a target and determining how to bridge the difference. This is an essential part of continuous improvement and LEAN Manufacturing.

A manufacturing gap analysis can be performed on a variety of metrics, such as:

  • operational costs
  • quality
  • productivity
  • waste
  • etc.

When it comes to bridging the gap, the ideal case is to reach the target with a single, permanent change. This is not always possible; there are times when the gap is fluid and benefits from constant monitoring and small adjustments. In those cases, operations can use a real-time gap tracking dashboard to alert them to what is causing the gap and get the process back on track in the moment, rather than discovering the problem days, weeks, or months later.

Manufacturing analytics software like dataPARC’s PARCview offers tools to help manufacturing companies perform real-time gap tracking after a gap analysis.

Who Conducts a Manufacturing Gap Analysis?

Gap analysis can be performed by anyone trying to optimize a process. As mentioned above there are a multitude of metrics that can be measured.

A process engineer might want to reduce operational cost by focusing on energy consumption, someone in the finance department may notice an increase in chemical cost every month, and a supervisor may want to reduce the time it takes to complete a task to focus on other items.

Almost every department can leverage gap tracking in one way or another.

How to Perform a Gap Analysis in Manufacturing

Like many other improvement strategies, we can use the DMAIC method (Define, Measure, Analyze, Improve, Control) to perform a gap analysis and implement a live gap tracking dashboard.

To create a gap tracking dashboard, a gap analysis needs to be completed first. In the last stage, Control, the dashboard is created, and operations can then perform the Analyze-Improve-Control steps in real time.

1. Define

The first step is to define the area of focus and identify the target. A great place to start when looking for an area of focus is the company’s strategic business plan, operational plan, or yearly operational goals. Many times, these goals will already have targets in place.

2. Measure

Next, the process must be measured. Take a close look at the measurement system. Is the data reliable? Does the measurement system provide the necessary information? If so, measure the current state of the process.

If there is no current measurement system, one will need to be created. Although in-process measurements or calculations are best, manual input can also be used.

Some manufacturing analytics providers, like dataPARC, offer manual data entry tools that allow users to create custom tags for manual input. These tags can be trended and used like process tags in dashboards and displays.

3. Analyze

Take the data and compare it to the goal. How far from the target is the process? This is the gap. It may help to visualize the process gap in multiple ways such as with a histogram or trend display.

The histogram shows the overall distribution of the data, which can help narrow the focus. What does the peak look like? Is it a normal distribution, skewed to one side, double-peaked, or edge-peaked?

A trend shows how the process shifts over time: are there periods of zero gap versus large gaps, correlated with shift or season?

With the measurement system in place, the gap quantified, and some graphical representations of the data, it is time to brainstorm potential causes of the gap. Brainstorming is not a time to eliminate ideas; get everything written down first. There are a variety of tools that can help:

Fishbone Diagram

This classic tool helps determine root causes by separating the process into categories. The most common categories are People, Process/Procedure, Supplies, Equipment, Measurement, and Environment; other categories, or any combination, can be used to fit the situation.

The fishbone diagram is a classic tool for performing root cause analysis.

The team can brainstorm each category and identify any causes that could play a role in the problem. Dive one step further with a 5-why analysis, which simply asks “why” until it cannot be answered anymore, to ensure the true root cause is uncovered.

Is gap analysis one of your digital transformation goals? Let our Digital Transformation Roadmap guide your way.

get the guide

SWOT Chart

This chart is made of four labeled squares: Strengths, Weaknesses, Opportunities, and Threats. The strategy is used to determine the internal and external factors that drive the effectiveness of the process. For potential root causes, focus on what appears in Weaknesses and see whether potential solutions find their way into Opportunities.

SWOT charts are another fundamental root cause analysis tool.

McKinsey 7S Web

The McKinsey framework is made up of 7 elements, categorized as 3 “hard” (directly controllable) elements and 4 “soft” (less directly controllable) elements. For each element, write the current and desired state. It is important that elements align with one another; any misalignment could point to a root cause.

The McKinsey Framework.

4. Improve

Determine the best way to bridge the gap and implement the changes. A payoff matrix or efficiency/impact chart can help pick the most effective, least costly options. Focus on quick wins. Items in busy work can be completed but are not a priority. For those in major projects, you must ask: is the price worth the impact? Anything that is low impact and high cost can be dropped.

A payoff matrix or efficiency impact trend can help you determine the best way to bridge gaps.

After the solutions are implemented, check the results by analyzing the data again and see if there was an improvement.

5. Control

Once the target is met it is important to keep it that way. Monthly reports can be used to keep track of the process gap and make sure it stays in the desired range.

Set up a dashboard or other visual to monitor the process in real time. By tracking the gap in real time, operations can see how changes to the process affect the bottom line as they happen, rather than at the end of the month.

On the road to digital transformation? Get our Free Digital Transformation Roadmap, a step-by-step guide to achieving data-driven excellence in manufacturing.

An Example of Gap Analysis for Manufacturing

In this manufacturing example we are going to walk through a gap analysis to improve operational costs on a single paper machine.

Define

The company’s operational plan has a goal for monthly operational cost. To break this down into a manageable gap analysis, the focus will be looking at a single machine. This machine is not currently meeting the monthly operational cost goal on a regular basis.

Measure

Since this is an initiative from an operational plan, there is already a measurement system in place. The machine operational costs are broken down into 5 variables: Speed, Steam, Chemical, Furnish, and Basis Weight.

These variables are measured continuously, so data can be pulled as hourly, daily, and/or monthly averages. The variety of data views will help in the next stage. Each of these variables has a target, but some are missing upper and lower control limits.

Analyze

First, the combined daily operational cost was compared against the target. There were days when the target was met, but not consistently.

Next, each of the five variables was compared with its target separately over the past several months. From this view, chemical and steam stood out as the two main factors driving up the operational cost. With that in mind, we moved on to the fishbone diagram and 5-why analysis.

Using the fishbone diagram, we determined that chemical and steam were the two main factors driving up costs over the past several months.

Improve

From the fishbone and 5-why, we found that the chemicals had targets but no control limits. Operators were adding the amount of chemical they felt would pass the quality tests, without trying to apply only the necessary amount.

Thinking about the cost/effectiveness diagram, it is essentially cost-free to add control limits to each chemical additive. Engineers pulled chemical and quality data from multiple months, created a histogram to find the distribution, and set control limits to give operators a better gauge of how much chemical to apply and the typical range needed to satisfy the quality tests.
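
A common way to set such limits, assuming roughly normal data from periods of good operation, is the mean plus or minus three standard deviations. A minimal sketch with invented numbers:

```python
import statistics

def control_limits(values, sigmas=3.0):
    """values: historical usage data from periods of good quality.
    Returns (lower, upper) limits at mean +/- `sigmas` standard deviations."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return mean - sigmas * stdev, mean + sigmas * stdev

# Example with invented data (lb of additive per ton):
usage = [9.1, 8.8, 9.4, 9.0, 9.2, 8.9, 9.3, 9.1]
low, high = control_limits(usage)
print(f"control limits: {low:.2f} to {high:.2f}")
```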

For steam, there were many potential root causes around the fiber mix and cook. The mill already has SOPs for situations such as bad cooks. Another root cause that came up during the fishbone exercise was steam leaks. Most leaks can be fixed while the machine is running, so over the next several weeks there was a push to find and close major leaks.

Control

In this case, since limits were created for chemical usage, alarms were also created to alert operations when a control limit was exceeded. Alerts are a great way to notify operations when processes drift out of control so quick corrections can be made.

After a few weeks of these changes, another analysis was completed. The operational costs were meeting the target, and it was time to move on to the next process. It is important not to forget about operational cost, though; it continued to be monitored monthly to ensure it did not exceed the target.

Conclusion

Performing routine gap analysis is an important step in LEAN manufacturing and continuous improvement. By following the steps above, manufacturers can optimize their process by reducing waste and operational costs, improving quality, or going after other key metrics.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The most important differences between relational databases, time-series databases, and data lakes and other data sources are the ability to handle time-stamped process data and ensure data integrity.

Enterprise data historian functionality at a fraction of the cost. Industrial time series data collection & analytics tools.

Learn More

The Manufacturing Database Battle


This is relevant because the primary job of the data management technology is to:

  • Accurately capture a broad array of data streams
  • Deal with very fast process data
  • Align time stamps
  • Ensure the quality and integrity of the data
  • Ensure cybersecurity
  • Serve up these data streams in a coherent, contextualized way for operational personnel

Time-Series Databases

Digital technologies and sensor-based data are fueling everything from advanced analytics, artificial intelligence and machine learning to augmented and virtual reality models. Sensor-based data is not easily handled by traditional relational databases. As a result, time-series databases have been on the rise and, according to ARC Advisory Group research, this market is growing much more rapidly than traditional relational databases.

While relational databases are designed to structure data in rows and columns, a time-series database or infrastructure aligns sensor data with time as the primary index.

Time-series databases specialize in collecting, contextualizing, and making sensor-based data available. In general, two classes of time-series databases have emerged: well-established operational data infrastructures (operational, or data historians), and newer open source time-series databases.

To gain maximum value from sensor data from operational machines, data must be handled relative to its chronology or time stamp. Because the time stamp may reflect either the time when the sensor made the measurement, or the time when the measurement was stored in the historian (depending upon the data source), it is important to distinguish between the two.
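
That distinction is worth carrying explicitly in any schema. A minimal sketch, with assumed field names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Measurement:
    tag: str
    value: float
    measured_at: datetime   # when the sensor made the reading
    stored_at: datetime     # when the historian recorded it
    # Trends and analytics should index on measured_at; stored_at is
    # useful for auditing latency and late-arriving data.
```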

Searching for a data historian? dataPARC’s PARCserver Historian utilizes hundreds of OPC and custom servers to interface with your automation layer.

Relational Databases

Time series data technologies, whether open-source databases or established historians, are built for real-time data. Relational databases, in contrast, are built to highlight relationships, including the metadata attached to a measurement (alarm limits, control limits, customer spend, bounce rate, geographic distribution between data points, etc.). Relational technologies can be applied to time series data, but doing so requires substantial data preparation and cleaning and can make data quality, governance, and context difficult at scale.

Integrating manufacturing data at your plant? Let our Digital Transformation Roadmap guide your way.

get the guide

Data Lakes

Data lakes, meanwhile, score well on scalability and cost-per-GB, but poorly on data access and usability. Not surprisingly, while data lakes hold the most data by volume, they typically have fewer users. As with time series technologies, the market will decide when and how these different technologies get used.

Looking Ahead

The fourth industrial revolution, or Industrie 4.0, along with major market disruptions such as the pandemic, and the sustainability and operational resilience initiatives they have driven, has greatly accelerated digital transformation and the exponential changes taking place in industrial operations and manufacturing.


Want to Learn More?

Download our Digital Transformation Roadmap and learn what steps you can take to achieve data-driven success in manufacturing.

Download PDF

Dashboards & Displays, Data Visualization, Process Manufacturing, Troubleshooting & Analysis

The digital transformation: everyone and everything is a part of it in some way. In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines that we now depend upon so totally that we rarely give them a second thought. Even before the advent of microprocessors and supercomputers, certain notable scientists and inventors helped lay the groundwork for the technology that has since drastically reshaped every facet of modern life.


Dashboards & Displays, Data Visualization, News, Uncategorized

If you’re like us, you’ve likely been sent an email or told in meetings that part or much of your company staff will now work remotely. Testing for remote computer access and data volume traffic are ongoing as plans are being worked out for this new structure. For most companies, that means VPN or other remoting methods. Virtual meetings are replacing face-to-face ones and pseudo to full quarantines are on the rise. Phone conversations will go on but this won’t fully suffice to cover staffing roles. And besides, you’re talking to neighbors and wondering if you should take one more trip to the grocery store. In the midst of all the chaos, your company still needs you to not only do your job but to excel at it.


Dashboards & Displays, Data Visualization, Process Manufacturing

Most modern manufacturing processes are controlled and monitored by computer-based control and data acquisition systems. This means that one of the primary ways an operator interacts with a process is through computer display screens. These screens may simply passively display information, or they may be interactive, allowing an operator to select an object and make a change which will then be relayed to the actual process. This interface where a person interacts with a display, and consequently the process, is called a Human-Machine Interface, or HMI.


Dashboards & Displays, Data Visualization, Process Manufacturing, Training

New training dates have been added, so now is the time to register for your dataPARC training held in Vancouver, Washington, just across the river from beautiful Portland, Oregon. Whether you need to escape the heat of summer, the cold of winter, or just need to get away from the plant, our hands-on training is your ticket to a welcome escape. Oh, did we mention the training?
