“There are three kinds of lies: lies, damned lies, and statistics.”
John Godfrey Saxe wrote the well-known poem below, “The Blind Men and the Elephant,” which illustrates the problem we often experience in our trials involving evidence of causation and scientific connection to events. Too often, it is a matter of interpretation.
It was six men of Indostan
To learning much inclined,
Who went to see the Elephant
(Though all of them were blind),
That each by observation
Might satisfy his mind.
The First approached the Elephant,
And happening to fall
Against his broad and sturdy side,
At once began to bawl:
“God bless me! but the Elephant
Is very like a wall!”
The Second, feeling of the tusk,
Cried: “Ho! what have we here
So very round and smooth and sharp?
To me ’tis mighty clear,
This wonder of an Elephant
Is very like a spear!”
The Third approached the animal,
And happening to take
The squirming trunk within his hands,
Thus boldly up and spake:
“I see,” quoth he, “the Elephant
Is very like a snake!”

The Fourth reached out an eager hand,
And felt about the knee:
“What most this wondrous beast is like
Is mighty plain,” quoth he,
“‘Tis clear enough the Elephant
Is very like a tree!”

The Fifth, who chanced to touch the ear,
Said: “E’en the blindest man
Can tell what this resembles most;
Deny the fact who can,
This marvel of an Elephant
Is very like a fan!”
The Sixth no sooner had begun
About the beast to grope,
Than, seizing on the swinging tail
That fell within his scope,
“I see,” quoth he, “the Elephant
Is very like a rope!”
And so these men of Indostan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!
Epidemiology plays an important role in pharmaceutical and toxic tort litigation, but its interpretation is always subject to controversy. Although epidemiology applies a scientific method to the study and comparison of groups of people, using principles of medicine, we often find its conclusions disputed.
Some have observed that “a common feature of epidemiologic data is that they are almost certain to be biased, of doubtful quality, or incomplete, or all three.” Disputes over scientific studies are nothing new. Remember how, not so many years ago, prestigious authors disputed that there was a causal connection between cancer and tobacco.
The commonly accepted factors that can make epidemiological studies unreliable include the following: (1) failure to control for all relevant risk factors and all material variables; (2) failure to conduct a study of sufficient size or power from which reliable inferences can be drawn; (3) failure to produce consistent or repeatable results; and (4) failure to eliminate bias in the selection of the cases for the control group.
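Factor (2), adequate size or power, can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the event rates and the two-sample normal approximation are assumptions, not taken from any actual study. It estimates how many subjects per group a cohort study would need to detect a doubling of a 1% incidence rate with roughly 80% power at the 5% significance level.

```python
import math

def min_n_per_group(p1, p2, alpha_z=1.96, power_z=0.84):
    """Approximate subjects needed per group to detect the difference
    between event rates p1 and p2 with ~80% power at the 5% level.
    Two-sample normal approximation; hypothetical rates, for illustration."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((alpha_z + power_z) ** 2 * variance / (p1 - p2) ** 2)

# e.g., detecting a rise in disease incidence from 1% to 2%
n = min_n_per_group(0.01, 0.02)
print(n)  # several thousand subjects per group
```

A study of a few hundred subjects simply cannot detect an effect of this size, which is exactly the "insufficient power" defect listed above.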
Associations are often relied upon to prove a fact, but there is a limit to the conclusions that can be drawn from them. Consider an experiment in which some subjects are given large drinks of whiskey and water and other subjects are given large drinks of rum and water. A third group is given brandy and water. All showed signs of intoxication. A conclusion that the intoxication was due to the common factor of water would be incorrect.
The epidemiologists’ “gold standard” is a cohort study of two groups of subjects: an exposed or study group and an unexposed or control group. A less accurate substitute is the retrospective case-control study, where a group of individuals without a known abnormality is selected as a control group to be compared with a group of individuals who have the abnormality being studied.
Epidemiological studies report “confidence intervals”: a 95% confidence interval is a range constructed so that, across repeated studies, it can be expected to contain the true risk ratio 95% of the time. In simple terms, this means that nothing less than 95% certainty or confidence will permit a statement that there is adequate proof of a connection, and preferably the level of confidence should be 99%. Therefore, a study showing only with 90% confidence that a toxin adversely affects humans can be disregarded, and may even be cited in the literature as proof that the substance is safe.
We sometimes find papers or studies which exaggerate evidence and minimize criticism to support a conclusion. Statistics can state with numbers that a difference exists which was not likely to have been caused by chance alone. However, they cannot tell you why there was a difference or what caused that difference. Statistics are generalizations that are true for a particular population, but they say little about any individual within that population and less about any individual outside it. Census takers can tell us that the average American family is composed of 3.5 people; obviously, no such family exists. Statistics do not permit reference to the particular.
Another common error involves confusing correlation with causation. It is also important to be alert for qualifiers and hedging words, for example: “the epidemiologic evidence suggested that there may be a 50% increase in lung cancer risk.” When someone cites a “pattern” or a “trend in the data,” it is time to look more closely. In rigorous science, close doesn’t count. Discrepancies are also relevant: when two versions of a verifiable fact diverge sharply, we should be on guard. Consider these statements: “thousands of studies have shown that secondary smoke increases the risk of heart and lung disease,” versus “The Tobacco Institute insisted that fewer than 100 studies have been done on the effects of secondary smoke.”
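The correlation-versus-causation trap can be demonstrated with a small synthetic simulation: a hidden confounder drives two variables that have no causal link to each other, yet the two end up strongly correlated. All data here are randomly generated purely for illustration.

```python
import random

random.seed(0)

# A hidden confounder z drives both x and y; x never causes y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def pearson(u, v):
    """Pearson correlation coefficient, computed from scratch."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    var_u = sum((ui - mu) ** 2 for ui in u)
    var_v = sum((vi - mv) ** 2 for vi in v)
    return cov / (var_u * var_v) ** 0.5

r = pearson(x, y)
print(f"correlation(x, y) = {r:.2f}")  # strong, yet x does not cause y
```

An expert presented only with x and y would observe a striking association; the true explanation lies in a variable the data set never shows, which is precisely the uncontrolled-confounder defect discussed earlier.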
The lesson for us is not to accept the studies and research relied upon by experts without investigating their accuracy. We must be prepared to challenge faulty research and to cross-examine skillfully on these issues.