Multifunction Calibrator Uncertainty Calculation

I promised in the last post to discuss uncertainty calculations, so here we are.

How do I calculate the uncertainty of a transmitter calibration using a multifunction calibrator?

When using a multifunction calibrator, you normally provide the input (simulation) to the Unit Under Test (UUT) and measure the output of the UUT with the same device. However, for the purposes of calculating the uncertainty of the calibration, the multifunction calibrator is considered to be 2 separate devices.

The maximum error can be calculated by simply adding the 2 errors, but that is not best practice. Best practice is to use the root sum of squares (RSS) method: for a system with 2 devices, you sum the squares of the 2 errors and take the square root of the result. This gives the system error, or total uncertainty. Here is the generic formula for calculating the system uncertainty in a system with multiple devices.

UT = √(U1² + U2² + … + Un²)

Where UT is the total uncertainty, U1 is the uncertainty of the first item, U2 is the second, and so on.
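If you like to check this sort of thing with a script, here is a minimal Python sketch of the RSS combination; the function name rss_uncertainty is my own choice for illustration, not from any standard or library.

```python
import math

def rss_uncertainty(*uncertainties):
    """Combine independent uncertainties by root sum of squares (RSS)."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))
```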

Let’s work with an example where the UUT input is an RTD signal and the UUT output is 4 to 20 mA representing a range of 0 to 100°C. We will assume U1 is 0.02°C and U2 is 0.02% of full scale, or 0.004 mA.
First we have to convert the mA error to its equivalent temperature error: 0.004 mA is 0.025% of the 16 mA span, which corresponds to 0.025% of the 100°C span, or 0.025°C.

Plugging the values into the equation above, we get √(0.02² + 0.025²) = ±0.032°C as the total uncertainty. You see that the system uncertainty is more than either of the 2 source uncertainties, but less than their simple sum (0.045°C).
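Using the rss_uncertainty sketch above, the same example looks like this; the span values and the unit conversion are spelled out in the comments as assumptions taken from the numbers in the post.

```python
# Worked example: 4 to 20 mA output representing 0 to 100 degC
span_mA = 20.0 - 4.0            # 16 mA output span
span_degC = 100.0 - 0.0         # 100 degC input span

u1_degC = 0.02                  # RTD simulation uncertainty, degC
u2_mA = 0.004                   # mA measurement uncertainty (0.02% of full scale)

# Convert the mA uncertainty to its temperature equivalent
u2_degC = u2_mA * span_degC / span_mA   # 0.025 degC

print(round(rss_uncertainty(u1_degC, u2_degC), 3))   # 0.032 degC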

The same calculation can be used to determine the uncertainty of a device with multiple sub-components of uncertainty.
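For example, a single device's published uncertainty is often built up from several components; the values below are purely hypothetical, just to show the same function at work.

```python
# Hypothetical uncertainty components for a single device (illustrative values only)
reference_accuracy = 0.010   # degC
repeatability = 0.005        # degC
resolution = 0.002           # degC

print(round(rss_uncertainty(reference_accuracy, repeatability, resolution), 4))   # ~0.0114 degC
```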