Aiming for the truth: understanding the difference between validity and precision
In teaching generations of registrars, we have seen a recurring theme: the conflation of validity and precision, which leads to an erroneous understanding of the role of statistics. In this article, we clarify the definitions of these two concepts, looking at what drives each measure and how it can be maximised.
Clinical epidemiology is the science of distinguishing the signal from the noise in health and medical research. The signal is the true causal effect of a factor on an outcome; for example, a risk factor on a disease outcome or a medication on disease progression. The noise is the variation in the data, which has traditionally been referred to as error.
There are two types of error: random error, also called chance or non-differential error, and systematic error, also known as differential error or bias (Box 1). Random error is the scatter in the data; statistics uses this scatter to estimate the central, most representative point, along with a plausible range known as the confidence interval. Increasing the sample size shrinks the confidence interval, giving us greater precision in our estimate. However, no statistical test can tell us how close that estimate is to the truth; that is a matter of validity (also known as study validity, to distinguish it from measurement validity or accuracy). Study validity can only be judged by critical appraisal…
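The distinction can be made concrete with a small simulation. The sketch below, using hypothetical numbers (a true effect of 10, a measurement standard deviation of 5, and an illustrative bias of +3, none of which come from the article), shows that quadrupling the sample size narrows the confidence interval, whereas a systematic error leaves a large study precisely wrong: the interval tightens around a value that is not the truth.

```python
import math
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_EFFECT = 10.0  # the hypothetical "signal" we are trying to estimate


def mean_ci95(sample):
    """Sample mean with a normal-approximation 95% confidence interval."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m, (m - 1.96 * se, m + 1.96 * se)


def simulate(n, bias=0.0, sd=5.0):
    """Draw n noisy measurements of TRUE_EFFECT, optionally shifted by a bias."""
    return [random.gauss(TRUE_EFFECT + bias, sd) for _ in range(n)]


# Random error only: increasing n from 25 to 400 shrinks the confidence interval
m25, (lo25, hi25) = mean_ci95(simulate(25))
m400, (lo400, hi400) = mean_ci95(simulate(400))
width25, width400 = hi25 - lo25, hi400 - lo400
print(f"n=25:   estimate {m25:.2f}, 95% CI width {width25:.2f}")
print(f"n=400:  estimate {m400:.2f}, 95% CI width {width400:.2f}")

# Systematic error: every measurement shifted by +3. The large sample
# gives a tight interval around the WRONG value: high precision, poor validity.
mb, (lob, hib) = mean_ci95(simulate(400, bias=3.0))
print(f"biased n=400: estimate {mb:.2f} (truth is {TRUE_EFFECT})")
```

No statistical output from the biased run signals that anything is amiss, which is the article's point: precision is visible in the numbers, but validity must be judged by appraising the study design itself.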