Gage R & R Studies in Machine Vision

July 29, 2022
An area often overlooked in machine vision integration is Gage R & R (or Gauge R & R): the ability to check a measurement vision system for repeatability and reproducibility.

Installers of vision systems for measurement tasks are often caught out by assuming that pixel calibration is complete once the number of pixels spanning a feature has been divided by its known measurement value. This does yield a calibration constant, but it is not the whole story. Vision systems are built on pixels whose effective size depends on sensor resolution, field of view, presentation angle, lighting, and optical quality. It is essential that measurements are validated and confirmed from shift to shift and day to day, and that these measurements remain repeatable and accurate.

The best practice is to use Measurement Systems Analysis (MSA) through Type 1 gauge studies and gauge repeatability and reproducibility studies (G R & R). These tests determine the accuracy, repeatability, and reproducibility of a measurement system. They are the de facto standard in manufacturing quality control and the metrology industry and are especially relevant for machine vision-based gauging. Repeated measurements are used to determine variation and bias, and analysis of the measurement results allows individual components of variation to be quantified.

Before running a study, the system must be calibrated. Metrology calibration maps the pixel coordinates of the vision system's sensor back to real-world coordinates, tying the distances measured back to real-world units such as inches, millimeters, or microns. Without calibration, any edges, lines, or points measured would be reported only in arbitrary pixel units. Quality engineers need actual, tangible measurements to validate the production process, so all systems must be precalibrated to known sizes.
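As a minimal sketch of this conversion, assuming a calibration artifact of known width has been imaged and its edges located by a vision tool (all names and values here are hypothetical):

    # Pixel-to-world calibration: scale factor from a known artifact.
    # Artifact width and measured pixel distance are hypothetical.
    KNOWN_WIDTH_MM = 10.0            # certified width of the artifact
    measured_width_px = 412.7       # edge-to-edge distance in pixels

    mm_per_px = KNOWN_WIDTH_MM / measured_width_px   # real-world units per pixel

    def px_to_mm(distance_px: float) -> float:
        """Convert a distance measured in pixels to millimeters."""
        return distance_px * mm_per_px

    print(f"Scale: {mm_per_px:.5f} mm/px")
    print(f"A 250 px feature measures {px_to_mm(250):.3f} mm")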

Calibration is completed using calibration artifacts that are traceable back to national standards; this traceability is key to effectively calibrating the vision inspection machine. The artifact can take the form of a calibrated slide with graticules, a datum sphere, or a machined piece with traceability certification.

Calibration should also be completed with an eye on optical or linear perspective distortion, which should be corrected by a distortion algorithm trained on a known target before pixel calibration. Distortion typically occurs because of imperfections in the optics or when the optical axis is not perpendicular to the object being imaged. Even with high-quality telecentric optics combined with collimated lighting, the image will always contain some element of optical distortion.
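One common approach, shown as a sketch below, is chessboard-based camera calibration with OpenCV, which estimates the lens-distortion coefficients from images of a known target and then undistorts subsequent images. The pattern size and file names are assumptions, and a given vision system may use a different correction algorithm:

    # Lens-distortion correction with OpenCV, assuming several images
    # of a chessboard target. Pattern size and file names are assumed.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)   # inner corners per chessboard row and column
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob("calib_*.png"):
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Estimate intrinsics and distortion coefficients, then undistort.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    undistorted = cv2.undistort(cv2.imread("part.png"), K, dist)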

A Type 1 gauge study is fundamental and should be executed as a starting point, prior to a G R & R, to determine the difference between the average of a set of measurements and a reference value (the bias) of the vision system. To perform a Type 1 gauge study, a single operator carries out a minimum of 30 inspections on a single calibration artifact for each attribute being assessed. The study evaluates the bias and the capability of the gauge. Once determined, the bias can be used as a constant to convert the measured value to the corrected result.
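As an illustration, the bias and gauge capability indices from such a study might be computed as in the Python sketch below. The reference value, tolerance, and readings are hypothetical, and the Cg/Cgk formulas follow the common convention of comparing 20% of the tolerance band with six standard deviations of the gauge:

    # Type 1 gauge study: >= 30 repeat measurements of one artifact.
    # Reference value, tolerance, and readings are hypothetical.
    import statistics

    reference = 25.000                  # certified artifact value, mm
    tolerance = 0.200                   # total tolerance band, mm
    readings = [25.003, 24.998, 25.001, 25.005, 24.997] * 6   # 30 repeats

    mean = statistics.mean(readings)
    s = statistics.stdev(readings)
    bias = mean - reference             # systematic offset of the system

    cg = (0.2 * tolerance) / (6 * s)                 # potential capability
    cgk = (0.1 * tolerance - abs(bias)) / (3 * s)    # capability including bias

    print(f"bias = {bias:+.4f} mm, Cg = {cg:.2f}, Cgk = {cgk:.2f}")

A common acceptance criterion is Cg and Cgk of at least 1.33; once the bias is known, it can be applied as the correction constant described above.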

The three most crucial requirements of any vision gauging solution are repeatability, accuracy, and precision. Repeatability is the variation in a set of measurements obtained with one vision system used several times by the same operator measuring the same characteristic on the same part. Accuracy refers to how close the measurements are to the true value or accepted reference value. Precision is how close the measurements are to each other, and discrimination (or readability) is the minimum measurement resolution.

To execute a gauge study, the provider needs the vision system configured with measurement output, with samples, operators, and loading and unloading (or automation control) ready to run. Standard practice is to run the study as part of the factory acceptance test for the machine vision system.

The G R & R is an industry-standard methodology used to investigate the repeatability and reproducibility of a measurement system. It typically involves three different operators each taking a series of three measurements on a minimum of 10 different parts. The study determines how consistent each operator's measurements are (repeatability) and whether the variation between operators is consistent (reproducibility).

The repeatability aspect of the G R & R technique is defined as the variation in measurement obtained:

  • With one vision measurement system.
  • When used several times by the same operator.
  • When measuring an identical characteristic on the same part.

The reproducibility aspect of the G R & R technique is the variation in the average of measurements made by different operators:

  • Who are using the same vision inspection system.
  • When measuring the identical characteristic on the same part.

Operator variation, or reproducibility, is estimated by determining the overall average for each appraiser and then finding the range by subtracting the smallest operator average from the largest, as in the sketch below.
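As a minimal sketch of the average-and-range G R & R calculation, the following Python snippet simulates three operators measuring ten parts three times each. The data are hypothetical, and the K1 and K2 values used are the constants commonly tabulated for three trials and three appraisers:

    # Average-and-range G R & R: 3 operators x 10 parts x 3 trials.
    # Data are simulated; K1 and K2 are the constants commonly
    # tabulated for 3 trials and 3 appraisers respectively.
    import numpy as np

    rng = np.random.default_rng(0)
    true_parts = rng.normal(25.0, 0.05, size=10)     # part-to-part spread, mm
    data = true_parts[None, :, None] + rng.normal(0, 0.01, size=(3, 10, 3))

    K1, K2 = 0.5908, 0.5231
    n_parts, n_trials = data.shape[1], data.shape[2]

    # Repeatability (equipment variation): average within-cell range.
    r_bar = np.mean(data.max(axis=2) - data.min(axis=2))
    EV = r_bar * K1

    # Reproducibility (appraiser variation): range of the operator
    # averages, corrected for the repeatability they already contain.
    x_diff = np.ptp(data.mean(axis=(1, 2)))
    AV = np.sqrt(max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0))

    GRR = np.hypot(EV, AV)                           # combined gauge variation
    print(f"EV = {EV:.4f} mm, AV = {AV:.4f} mm, GRR = {GRR:.4f} mm")

The combined GRR figure is then commonly expressed as a percentage of total process variation or of the tolerance, with results under 10% generally considered acceptable and results over 30% unacceptable.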

Vision system measurements must be checked against all of these aspects, so a bias test, a process capability study, and gauge validation are completed before the machine vision system enters production operation. The comprehensive study should be performed per the MSA Reference Manual, Fourth Edition (2010). This proves the gauging vision system really will function as intended in the actual production environment.

Earl Yardley is Director for Industrial Vision Systems (IVS) (Oxfordshire, UK; www.industrialvision.co.uk).
