
Production Evaluation Method Comparison: Metrology Gauges vs. Error-Proofing Systems

In mass production environments, consistent quality and high throughput are critical for any manufacturer competing for market share. To achieve that consistency and throughput, many manufacturers use metrology gauges and error-proofing (poka-yoke) systems. Both need performance verification to ensure they are capable of performing their designed functions, but the verification criteria and procedures are fundamentally different because the two kinds of systems serve very different purposes. Unfortunately, because both metrology gauges and error-proofing systems produce numerical readings, people are often confused about which evaluation method to use. This analysis provides some much-needed guidance.

Metrology Gauges  

A growing variety of metrology gauges are used for engineering analysis, troubleshooting, auditing, process capability evaluation, statistical process control, and more. To ensure a metrology gauge can produce trustworthy data without contributing unnecessary measurement uncertainty, the industry has specified a standard process [1] for evaluating the accuracy, linearity, stability, repeatability, and reproducibility of its numerical readings. This process is commonly called "Gauge R&R" in the automotive industry.

The Gauge R&R process is typically performed in a controlled environment with minimal interaction with other equipment, so that neither the environment nor neighboring equipment affects the gauge's performance. A small sample of parts (5-10 parts covering the full range of the part tolerance) is measured multiple times on the same gauge by a small group of operators (2-3). The results are fed into a tabular worksheet or ANOVA (Analysis of Variance) software to derive the repeatability ("equipment variation", EV) and the reproducibility ("appraiser variation", AV). These two numbers are then combined into a single gauge repeatability and reproducibility ("Gauge R&R") number, which must be less than 10% of the part tolerance for the gauge to be unconditionally accepted by the industry. Some manufacturers go further and calculate Cg and Cgk, folding both the accuracy and the repeatability/reproducibility of a gauge into their evaluation criteria.
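As a rough illustration, below is a minimal sketch of the average-and-range Gauge R&R calculation, followed by a Type 1 (Cg/Cgk) calculation under one common convention. All of the data, the 0.2 mm tolerance, and the master-part reference value are invented for the example; the K constants are the standard AIAG values for 3 operators and 3 trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study data: 3 operators measure the same 10 parts 3 times
# each; the part values span a 0.2 mm tolerance band.
tolerance = 0.2                                    # assumed part tolerance (mm)
true_parts = np.linspace(9.90, 10.10, 10)          # 10 parts across the tolerance
data = true_parts[None, :, None] + rng.normal(0.0, 0.005, size=(3, 10, 3))

# AIAG average-and-range constants (1/d2*): K1 for 3 trials, K2 for 3 operators
K1, K2 = 0.5908, 0.5231
n_operators, n_parts, n_trials = data.shape

# Repeatability (equipment variation, EV): mean within-part range, scaled by K1
part_ranges = data.max(axis=2) - data.min(axis=2)
EV = part_ranges.mean() * K1

# Reproducibility (appraiser variation, AV): spread of the operator averages,
# corrected for the repeatability contribution
op_means = data.mean(axis=(1, 2))
x_diff = op_means.max() - op_means.min()
AV = np.sqrt(max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0))

GRR = np.sqrt(EV ** 2 + AV ** 2)
pct_grr = 100.0 * 6.0 * GRR / tolerance            # 6-sigma spread vs. tolerance
print(f"%GRR = {pct_grr:.1f}% (unconditionally acceptable below 10%)")

# Type 1 (Cg/Cgk) sketch -- a separate study: 25 repeated measurements of one
# calibrated master part with reference value x_ref. The 0.2*T vs. 6s and
# 0.1*T vs. 3s factors below are one common convention, not a universal rule.
x_ref = 10.00
repeats = x_ref + rng.normal(0.0005, 0.004, size=25)
s_g = repeats.std(ddof=1)
Cg = (0.2 * tolerance) / (6.0 * s_g)
Cgk = (0.1 * tolerance - abs(repeats.mean() - x_ref)) / (3.0 * s_g)
print(f"Cg = {Cg:.2f}, Cgk = {Cgk:.2f}")
```

Note that older MSA editions compare a 5.15-sigma spread (99% of readings) to the tolerance instead of 6 sigma; either way, the 10% threshold is applied to the resulting ratio.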

This Gauge R&R process is feasible only because it is performed in a controlled environment with minimal interaction with other equipment.

Error-proofing Solutions

Error-proofing systems, on the other hand, are designed for use in a production environment to catch random manufacturing failures and errors. To catch random failures, an error-proofing system has to inspect every single part and has to survive a tough, ever-changing production environment: robot movement variation, conveyor/fixturing variation, part temperature variation, ambient lighting variation, part color and texture variation, part surface cleanliness variation, part batch variation from different suppliers or different dies; the list of production variables goes on and on.

For this very reason, overall system robustness is far more important and practically meaningful than a single Gauge R&R number (which is typically generated in a lab rather than a production environment). In fact, some sophisticated error-proofing systems use more than one numerical reading or criterion for decision-making precisely to build in enough redundancy for robustness, as sketched below. Attempting to apply a Gauge R&R process to such multi-criteria systems would be tricky, and sometimes even misleading.
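As a purely hypothetical illustration of such redundancy, a pass decision might require several independent readings to all fall within their own limits, so that no single noisy criterion decides alone. The criteria and limits below are invented for the example:

```python
# Hypothetical redundant decision logic: a part passes only when every
# independent criterion agrees; any single out-of-limit reading rejects it.
LIMITS = {                          # invented criteria: (low, high) limits
    "gap_mm": (0.8, 1.2),
    "flush_mm": (-0.3, 0.3),
    "edge_contrast": (40.0, 255.0),
}

def part_passes(readings: dict) -> bool:
    return all(lo <= readings[name] <= hi for name, (lo, hi) in LIMITS.items())

print(part_passes({"gap_mm": 1.0, "flush_mm": 0.1, "edge_contrast": 85.0}))  # True
print(part_passes({"gap_mm": 1.3, "flush_mm": 0.1, "edge_contrast": 85.0}))  # False
```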

Instead, the robustness of an error-proofing system should be evaluated against two main criteria:

  • False positive rate (calling good parts bad, Type I error), also called the nuisance-failure or false-reject rate
  • False negative rate (calling bad parts good, Type II error), also called the miss rate

The goal is to keep both numbers as low as possible. To justify the investment in an error-proofing system, it must catch every bad part reliably, which typically means a 0% false negative rate is required. At the same time, to avoid frequent manual intervention and to minimize downtime caused by the system, the false positive rate has to be very low: typically no more than 0.5%-1% (equivalently, a run rate of no less than 99%-99.5%) in the automotive industry. This rigorous evaluation has to be performed in the production environment on a significant sample size (such as a full day of production) to cover the full range of production conditions and all pre-determined failure modes. To further ensure an error-proofing system performs as designed, many manufacturers run a series of master parts with representative failure modes through the system at the beginning of every shift as a sanity check.
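Below is a minimal sketch of how these two rates, and the acceptance thresholds quoted above, might be computed from a day's inspection log. The counts are invented: 1,000 parts, 5 truly bad, all 5 caught, plus 5 good parts falsely rejected.

```python
# Hypothetical one-day inspection log, aligned per part:
# indices 0-989 good/passed, 990-994 good/flagged, 995-999 bad/flagged.
actual_bad = [False] * 995 + [True] * 5
flagged_bad = [False] * 990 + [True] * 10

def robustness_rates(actual_bad, flagged_bad):
    n_good = sum(not b for b in actual_bad)
    n_bad = len(actual_bad) - n_good
    false_rejects = sum(f and not b for f, b in zip(flagged_bad, actual_bad))
    misses = sum(b and not f for f, b in zip(flagged_bad, actual_bad))
    return false_rejects / n_good, misses / n_bad   # (Type I rate, Type II rate)

fp_rate, fn_rate = robustness_rates(actual_bad, flagged_bad)
print(f"false reject rate = {fp_rate:.2%}, miss rate = {fn_rate:.2%}")
# Acceptance per the thresholds above: zero misses and at most 1% false rejects
print("accept" if fn_rate == 0.0 and fp_rate <= 0.01 else "reject")
```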

Summary

There is a fundamental difference between the designed purpose of a metrology gauge and that of an error-proofing system, and this difference determines how each should be evaluated. For a metrology gauge, which is mainly used in a controlled environment with minimal interaction with other equipment, a standard Gauge R&R process should be performed. For an error-proofing system, which must operate in a tough production environment with many sources of variation, robustness, specifically the false reject rate and the miss rate, should be the main criterion for evaluation.

1. Measurement Systems Analysis (MSA), Automotive Industry Action Group, 2010.

April 30, 2019