Accuracy, resolution and repeatability: the critical selection factors for speed and position measurement




Despite their frequent use, the terms accuracy, resolution and repeatability are often misunderstood. As critical factors in the selection of position and speed sensors, engineers must ensure they fully understand the terminology, says Mark Howard, General Manager at Zettlex Ltd.

When sourcing and selecting sensors and other precision measuring instruments, many engineers still don't understand the terms accuracy, resolution and repeatability.

The terminology applied to instrumentation can be confusing but is critical when it comes to selecting the right measuring instruments for an application - especially for position and speed transducers. Get this part wrong and engineers could end up paying more than they need to for over-specified sensors. Conversely, a product or control system may lack critical performance if the position or speed sensor does not meet the specification.

First, some important definitions:

Accuracy is a measure of the veracity of an instrument's output.
Resolution is a measure of the smallest increment or decrement in position that an instrument can measure.
Precision is a position measuring instrument's degree of reproducibility.
Linearity is a measure of the deviation of a transducer's actual output from the perfect slope of displacement versus output.

To most intents and purposes, linearity is tantamount to accuracy, provided the perfect slope - against which linearity is measured - passes through a zero or datum position.

Many engineers confuse precision with accuracy. Using the analogy of an arrow fired at a target, accuracy describes the closeness of an arrow to the bullseye.

If many arrows are fired, precision equates to the tightness of the arrow cluster. If all arrows are grouped closely together, the cluster is considered precise - even if it lies far from the bullseye.

A perfectly linear measuring device is also perfectly accurate.

So, that's pretty straightforward then - as long as you specify very accurate and very precise measuring instruments every time, you'll be OK. Unfortunately, there are snags to this approach. First, high accuracy, high precision instrumentation is always expensive. Second, high accuracy, high precision instrumentation may require careful installation and this may not be possible due to vibration, thermal expansion/contraction, etc. Third, certain types of high accuracy, high precision instrumentation are also delicate and will suffer malfunction or failure if there are changes in environmental conditions - most notably temperature, dirt, humidity and condensation.

The optimal strategy is to specify what is required - nothing more, nothing less. For a displacement transducer in an industrial flow meter, for example, linearity will not be a key requirement, since the fluid's flow characteristics are likely to be non-linear anyway. More likely, repeatability and stability over varying environmental conditions are the key requirements.

In a CNC machine tool, by contrast, accuracy and precision are likely to be key requirements. The displacement measuring instrument must therefore offer high accuracy (linearity), high resolution and high repeatability, even in dirty, wet environments, over long periods and without maintenance.

A good tip is always to read the small print of any measuring instrument's specification - especially regarding how the claimed accuracy and precision vary with environmental effects, age or installation tolerances.

Another useful tip is to find out exactly how an instrument's linearity varies. If this variation is monotonic and slowly varying, the non-linearity can easily be calibrated out using a few reference points. For a gap measuring device, for example, this could be achieved using slip gauges. In the example below, a fairly non-linear transducer is calibrated into a highly linear (accurate) device with a relatively low number of reference points.

However, in the second example, a rapidly varying device is calibrated with 10 points and its linearity hardly changes. It might take more than 1,000 points for such a rapidly varying measurement characteristic to be linearised. Such a process is unlikely to be practical with slip gauges, but it might be practical to build a lookup table by comparing readings against a higher performance reference device such as a laser interferometer.
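
By way of illustration, the short Python sketch below shows the lookup-table approach in action. The transducer model and the numbers are invented for this example - they are not Zettlex data - but the principle is exactly as described: a handful of reference points and piecewise-linear interpolation are enough to calibrate out a slowly varying, monotonic non-linearity.

    import numpy as np

    # Hypothetical transducer: monotonic but non-linear response (invented model).
    def raw_output(position_mm):
        return position_mm + 0.5 * np.sin(position_mm / 10.0)

    # Build a lookup table from a handful of reference points, e.g. slip gauges.
    ref_positions = np.linspace(0.0, 50.0, 6)     # known reference positions, mm
    ref_readings = raw_output(ref_positions)      # what the transducer reports there

    # Map a raw reading back to position by piecewise-linear interpolation.
    def calibrated(reading):
        return np.interp(reading, ref_readings, ref_positions)

    # Residual error at positions between the reference points.
    test = np.linspace(0.0, 50.0, 501)
    residual = calibrated(raw_output(test)) - test
    print(f"worst-case error after 6-point calibration: {abs(residual).max():.4f} mm")

For a rapidly varying characteristic, the same code would need far more reference points in the table, which is why comparison against a laser interferometer becomes more practical than slip gauges.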

A common pitfall - optical encoders
Optical encoders work by shining a light source onto or through an optical element - usually a glass disk. The light is either blocked or passes through the disk's gratings and a signal, analogous to position, is generated.

The glass disks have tiny features that allow manufacturers to claim high precision. What is often not made explicit is what happens if these tiny features are obscured by dust, dirt, grease, etc. In reality, even very small amounts of foreign matter can cause misreads. There is seldom any warning of failure - the device simply stops working altogether, a so-called 'catastrophic failure'. What is less well known is the issue of accuracy in optical encoders and optical encoder kits.

Consider an optical device using a 1" nominal disk with a resolution of 18 bits (256k points). Typically, the claimed accuracy for such a device might be +/-10 arc-seconds. However, what should be in big bold print (but surprisingly never is) is that the stated accuracy assumes that the disk rotates perfectly relative to the read head and that temperature is constant.
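
As a quick sanity check on those numbers (a back-of-envelope calculation, shown here in Python purely for convenience):

    # 18 bits -> 2**18 = 262,144 counts per revolution.
    counts = 2 ** 18
    arcsec_per_rev = 360 * 3600          # 1,296,000 arc-seconds per revolution
    print(f"{arcsec_per_rev / counts:.2f} arc-seconds per count")   # ~4.94

So the claimed +/-10 arc-second accuracy corresponds to roughly +/-2 counts - plausible on the bench, but only under the ideal mounting conditions noted above.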

Consider a more realistic example, in which the disk is mounted slightly eccentrically - by 0.001" (0.025mm).

Eccentricity comes from several sources, including the following:
Concentricity of the glass disk on its hub.
Concentricity of the hub's through bore relative to the optical disk.
Perpendicularity of the hub relative to the plane of the optical disk.
Parallelism of the optical disk face with the plane of the read head.
Concentricity of the shaft on which the hub is mounted.
Clearances in the bearings and bearing mounts, which support the main shaft.
Imperfect alignment of the bearings.
Roundness of the shaft and roundness of the hub's through bore.
Locating method (typically a grub-screw will pull the hub to one side).
Displacements due to stresses or strain from forces on the shaft's bearings.
Thermal effects.

A perfectly mounted optical disk requires such fine engineering that cost becomes prohibitive. In reality, there is a measurement error because the optical disk is not where the read head thinks it is. If we consider a mounting error of say 0.001", then the measurement error is equivalent to the angle subtended by 0.001" at the optical track radius. To make the maths easy, let's assume that the tracks are at a radius of 0.5".

This equates to an error of 2 milliradians, or 412 arc-seconds. In other words, a device with a specified accuracy of +/-10 arc-seconds is actually more than 40 times less accurate than its data sheet states.

If you can position an optical disk to within 0.001" of true centre, you are doing really well. Realistically, you are more likely to be in the range of 2-10 thousandths of an inch, so the actual accuracy will be 80-400 times worse than you might have originally calculated.
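
The arithmetic is easy to reproduce. The sketch below (Python used purely for convenience; the figures are those quoted above) computes the subtended angle for a range of plausible eccentricities:

    import math

    TRACK_RADIUS_IN = 0.5                    # optical track radius assumed above

    def eccentricity_error_arcsec(ecc_in):
        # Small-angle approximation: angle ~= eccentricity / radius, in radians.
        return (ecc_in / TRACK_RADIUS_IN) * (180 / math.pi) * 3600

    for ecc in (0.001, 0.002, 0.010):        # 1, 2 and 10 thousandths of an inch
        print(f'{ecc:.3f}" eccentricity -> {eccentricity_error_arcsec(ecc):.0f} arc-seconds')

The three cases give roughly 412, 825 and 4,125 arc-seconds - that is, around 40, 80 and 400 times the +/-10 arc-second data sheet figure.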

The measurement principle of a resolver or a new generation inductive device is completely different. Measurement is based on the mutual inductance between the rotor (the disk) and the stator (reader). Rather than calculating position from readings taken at a point, measurements are generated over the full face of both the stator and rotor. Consequently, discrepancies caused by non-concentricity in one part of the device are negated by opposing effects at the opposite part of the device. The headline figures of resolution and accuracy are often not as impressive as those for optical encoders. However, what's important here is that this measurement performance is maintained across a range of non-ideal conditions.
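
To see why measuring over the full face helps, consider a deliberately simplified model - an illustration only, not a description of any particular product. To first order, eccentricity shifts a point reading by an amount proportional to the cosine of the read head's bearing, so readings averaged around the full circumference largely cancel:

    import math

    ECC_IN, RADIUS_IN = 0.001, 0.5           # same figures as the optical example

    # First-order angular error seen by a single read head at bearing theta.
    def point_error_rad(theta, true_angle=0.3):
        return (ECC_IN / RADIUS_IN) * math.cos(theta - true_angle)

    arcsec = 180 / math.pi * 3600

    # A single read head at theta = 0 sees almost the full eccentricity error...
    print(f"single point : {point_error_rad(0.0) * arcsec:8.1f} arc-seconds")

    # ...whereas readings averaged around the full face cancel the first-order term.
    n = 360
    avg = sum(point_error_rad(2 * math.pi * k / n) for k in range(n)) / n
    print(f"full-face avg: {avg * arcsec:8.1f} arc-seconds")

The single-point error is hundreds of arc-seconds; the full-face average is essentially zero. This, in simplified form, is why the inductive approach is so much more tolerant of mounting errors.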

The quoted measurement performance of some of the new generation inductive devices is not based on perfect alignment of rotor and stator; rather, realistically achievable installation tolerances (typically +/-0.2mm) are accounted for in the quoted resolutions, repeatabilities and accuracies. Furthermore, the stated performance of inductive devices is not subject to variation due to foreign matter, humidity, lifetime, bearing wear or vibration.


From a press release issued by Zettlex (UK) Ltd.

June 2012


   
   

