Calibration is the process of configuring an instrument to provide a result for a patient sample within an acceptable range. It is one of the primary processes used to maintain instrument accuracy. Calibration involves using the equipment to test samples of one or more known values, called "calibrators". This in essence "teaches" the instrument, so that it produces more accurate results than it would if uncalibrated. Hence, it can produce more accurate results for patient samples, whose values are unknown and may be abnormal, while the equipment is used under normal working conditions.
For example, a thermometer could be calibrated so that the error of indication, or the correction, is determined and adjusted (e.g. via calibration constants) so that it shows the true temperature in Celsius at specific points on the scale. This is the instrument end-user's perception of calibration. However, very few instruments can be adjusted to exactly match the standards they are compared to. For the vast majority of calibrations, the process is actually a comparison of patient samples (unknown values) against calibration solutions (known values), with the results recorded.
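The thermometer example above can be sketched in a few lines: determine the correction (true value minus indicated value) at each calibration point, then apply it to later readings. The temperatures below are invented for illustration; real calibration would use certified reference standards.

```python
# Sketch of determining the error of indication at fixed calibration
# points and applying the resulting correction to a reading.
# All temperature values here are illustrative, not real data.

reference = [0.0, 50.0, 100.0]   # true temperatures of the standards (deg C)
indicated = [0.4, 50.9, 101.5]   # what the uncalibrated thermometer shows

# Correction = true value - indicated value at each calibration point.
corrections = [t - i for t, i in zip(reference, indicated)]

def correct(reading):
    """Correct a reading by linear interpolation between calibration points."""
    if reading <= indicated[0]:
        return reading + corrections[0]
    if reading >= indicated[-1]:
        return reading + corrections[-1]
    for k in range(len(indicated) - 1):
        lo, hi = indicated[k], indicated[k + 1]
        if lo <= reading <= hi:
            frac = (reading - lo) / (hi - lo)
            c = corrections[k] + frac * (corrections[k + 1] - corrections[k])
            return reading + c

print(correct(50.9))  # at a calibration point the correction is exact
```

Between calibration points the correction is interpolated, which mirrors the point that most instruments are corrected at specific points on the scale rather than adjusted to match the standard everywhere.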
The signal for each solution is recorded and plotted on a calibration curve. The relationship between the signal and the analyte value can be linear or non-linear, and the signal can rise or fall depending upon the reaction of the test.
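For the simplest (linear) case, the curve-building step above amounts to a least-squares fit of signal against known calibrator value, which is then inverted to read off the concentration of an unknown sample. The calibrator values and signals below are invented for illustration.

```python
# Minimal sketch of a linear calibration curve: fit signal vs. known
# calibrator concentration by least squares, then invert the fit to
# obtain the concentration of an unknown patient sample.
# Calibrator values and signals are invented.

calib_conc   = [0.0, 5.0, 10.0, 20.0, 40.0]    # known analyte values
calib_signal = [0.02, 0.51, 1.01, 2.02, 4.01]  # instrument response

n = len(calib_conc)
mean_x = sum(calib_conc) / n
mean_y = sum(calib_signal) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(calib_conc, calib_signal))
         / sum((x - mean_x) ** 2 for x in calib_conc))
intercept = mean_y - slope * mean_x

def concentration(signal):
    """Invert the calibration curve: signal -> concentration."""
    return (signal - intercept) / slope

print(concentration(1.5))  # patient sample with unknown value
```

A non-linear assay would use a different curve model (e.g. a four-parameter logistic), but the principle of fitting known points and inverting the fit is the same.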
When conducting a calibration, the highest and lowest calibrator values define the range within which all patient results must fall, i.e. each test is limited to the range of its calibration curve. For example, in testosterone measurement (analytical range 0.35–55.5 nmol/L), patient results which fall below the lowest calibrator are reported as 'less than 0.35 nmol/L'. Results above the highest calibrator, by contrast, may require dilution. In this case, the sample is diluted with Access Testosterone Calibrator S0 (zero) and the result is adjusted by the analyser.
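The reporting logic described above can be sketched as follows, using the testosterone analytical range quoted in the text. The dilution factor and sample values are illustrative, and the simple multiply-back adjustment is an assumption about how the analyser scales a diluted result.

```python
# Sketch of handling results against the analytical range
# (0.35-55.5 nmol/L, from the testosterone example above).
# Dilution factor and measured values are invented.

RANGE_LOW, RANGE_HIGH = 0.35, 55.5

def report(measured, dilution_factor=1):
    """Return a report string; diluted results are scaled back up."""
    value = measured * dilution_factor
    if measured < RANGE_LOW:
        return f"< {RANGE_LOW} nmol/L"
    if measured > RANGE_HIGH and dilution_factor == 1:
        return "above range - dilute and repeat"
    return f"{value:.2f} nmol/L"

print(report(0.10))                      # below lowest calibrator
print(report(30.0, dilution_factor=2))   # diluted 1:2, result scaled up
```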
When conducting a calibration, a calibrator with the same matrix as the patient sample should be used; for example, when calibrating serum chemistry tests, a serum-based calibrator is used.
A calibration can fail when the measured value of an analyte is too far from the known amount of analyte present in the calibrator material. The coefficient of variation (%) then increases and the shape of the calibration curve is altered, so any patient results obtained from that calibration will not be reliable. (Please see the example of a failed calibration.)
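A failure check of the kind described can be sketched as a recovery test: compare each calibrator's measured value against its assigned value and fail the calibration if any recovery falls outside a tolerance. The 10% tolerance and the values are assumptions for illustration; real acceptance criteria are assay-specific.

```python
# Sketch of a calibration acceptance check: the calibration fails if any
# calibrator's measured value is too far from its assigned (known) value.
# The 10% tolerance and the numbers below are illustrative assumptions.

def calibration_passes(assigned, measured, tolerance=0.10):
    """True if every calibrator recovers within +/- tolerance of target."""
    for target, result in zip(assigned, measured):
        if target == 0:
            continue  # a zero calibrator is handled separately in practice
        recovery = abs(result - target) / target
        if recovery > tolerance:
            return False
    return True

print(calibration_passes([5.0, 20.0], [5.2, 19.4]))  # within 10 percent
print(calibration_passes([5.0, 20.0], [5.2, 14.0]))  # 30 percent low: fails
```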
If the same sample is measured several times, the results produced should be the same. The analyser must be able to produce repeatable results, which is why comparisons to standard values are important. Standardisation relies on the use of accurate and consistent reference materials and reliable methodology, as the calibration curve changes slightly with each calibration performed and patients' results can therefore be affected.
In the laboratory, we calibrate every time a reagent lot is changed, when the QC results show a systematic bias (calibration can eliminate trends or small analytical biases), and after major instrument maintenance, such as lamp changes, which can cause shifts in QC values.
Quality control is designed to detect, reduce, and correct deficiencies in a laboratory's internal analytical process prior to the release of patient results. Quality controls are used on all analysers and assays before patient samples are run. The quality control material has a known value for each analyte present (e.g. immunoglobulins, or electrolytes such as sodium and potassium), which has been assured by the manufacturer. However, all end-users must determine their own QC values once the material has been in use for a time. The laboratory runs new QC lot numbers side by side with old lot numbers, up to 50 times, and works out the true mean for that assay/analyser before the new lot comes into use.
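Establishing the laboratory's own target for a new QC lot, as described above, reduces to computing the mean and standard deviation of the replicate results run alongside the old lot. The 20 replicate values below are invented.

```python
# Sketch of establishing a laboratory's own target for a new QC lot:
# run replicates alongside the old lot and compute the observed mean
# and standard deviation. The replicate values are invented.

import statistics

new_lot_results = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 5.2,
                   4.9, 5.1, 5.0, 4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 5.1]

target_mean = statistics.mean(new_lot_results)
target_sd = statistics.stdev(new_lot_results)  # sample standard deviation

# These become the centre line and SD limits on the Levey-Jennings chart.
print(f"mean = {target_mean:.3f}, SD = {target_sd:.3f}")
```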
Quality control materials should have the same matrix as patient specimens, including viscosity, turbidity, composition, and colour. For example, a method that assays urine samples should be controlled with human-urine-based controls.
In the SPS laboratory, quality control material is usually run at the beginning of each shift, after an instrument is serviced, when reagent lots are changed, after calibration, and when patient results seem inappropriate. This is to ensure the reproducibility of results by the analyser, but care must be taken to make sure the analyser is also reliable, as results can be reproducible and still be inaccurate. The QC results are plotted on a Levey-Jennings chart and interpreted using the Westgard rules. QC results are only acceptable if they fall within a set range (normally ±2 standard deviations from the mean); only then can patient samples be tested.
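The basic ±2 SD acceptance check described above (the 1-2s limit on a Levey-Jennings chart) can be sketched in a few lines. The mean and SD values are invented placeholders for an established QC target.

```python
# Sketch of the basic QC acceptance check: a result is only accepted
# if it falls within +/- 2 SD of the established mean (the 1-2s limit
# on a Levey-Jennings chart). The target mean and SD are invented.

def within_2sd(result, mean, sd):
    """True if the QC result lies inside the +/- 2 SD acceptance range."""
    return abs(result - mean) <= 2 * sd

MEAN, SD = 5.0, 0.2  # established QC target for this analyte

print(within_2sd(5.3, MEAN, SD))  # inside 4.6-5.4, run accepted
print(within_2sd(5.5, MEAN, SD))  # outside, run is held
```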
There are other Westgard rules that can indicate issues with the analyser or the reagents used. For example, the R-4s rule is violated when two control measurements in a group differ by more than 4 SD, e.g. one exceeds +2 SD while another falls below −2 SD; in this case the run is rejected. Trained laboratory staff use these methods to determine whether the analyser is ready and reliable for use on patient samples.
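The R-4s check described above compares the spread of control results within a run against a 4 SD limit. A minimal sketch, with invented target values:

```python
# Sketch of the R-4s rule: within one run, reject if the range between
# any two control measurements exceeds 4 SD (e.g. one control above
# +2 SD while another is below -2 SD). Target values are invented.

def r4s_violated(results, mean, sd):
    """True if the spread of control results in a run exceeds 4 SD."""
    z_scores = [(r - mean) / sd for r in results]
    return max(z_scores) - min(z_scores) > 4

MEAN, SD = 100.0, 2.0

print(r4s_violated([104.5, 95.4], MEAN, SD))  # +2.25 SD and -2.3 SD
print(r4s_violated([101.0, 99.0], MEAN, SD))  # well within limits
```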
Typically, three quality controls are used for each test: high, normal, and low levels of the analyte. QC can be run on analysers either once or several times a day, depending on the stability of the reagents used. Finally, quality control rules have been developed to detect excess bias and imprecision, as well as shift and drift in the analysis over the course of the analyser's use.