The web has provided unprecedented access to medical information. That's the good news. The bad news is that it can be quite difficult to discern good data from junk. This problem can crop up in any source: a blog, a newspaper article, a television special, a website, or a medical journal. In this post I suggest a quick, methodical approach that can give you a good idea of the validity of any medical report.
Unfortunately we must contend not only with our incomplete understanding of medical conditions, but also with the willful manipulation of information for profit. The pool of data found on the web is contaminated by a variety of influences, including pharmaceutical and nutraceutical companies, marketers, the USDA, the FDA, and many others. Here's just one recent example. We were told by the authorities (who cited research papers) to stop eating fat and replace it with "heart-healthy" carbs. This resulted in an obesity and diabetes epidemic. There is controversy over such basic things as how much water to drink, how much exercise is necessary, and what diet is healthiest.
So here's what you have to do: go to the original research paper. Do Not Trust Someone Else's Interpretation of the data. We all have a weakness for listening to "experts" and suspending our own judgment. This is especially true when we're sick. Be skeptical. Use the eight-point check system below and you'll be on solid ground.
MOST OF THIS INFORMATION CAN BE OBTAINED QUICKLY FROM THE ABSTRACT, A ONE-PARAGRAPH SUMMARY OF THE RESEARCH PAPER FOUND AT THE BEGINNING OF THE REPORT
1. Where was it published?
If it's not in a peer-reviewed journal, it has not been vetted by other researchers in the same field.
2. What was the study hypothesis (and was the conclusion consistent with it)?
If the conclusion is not an answer to what the study was designed to observe, no conclusions can be drawn.
3. What was the study design?
Randomized Controlled Trial (RCT)
This is the gold standard.
Compares an intervention group with a control group
Subjects randomly assigned to groups
Ideally should be double-blinded (e.g., neither the subjects nor the researchers know who is getting placebo until the study is complete)
Cross-sectional (One point in time) vs Longitudinal (over defined period of time)
Longitudinal studies are usually more telling.
Prospective (started study and observed groups going forward in time) vs Retrospective (looked back in time at groups)
Prospective studies are usually more informative.
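The random assignment described above can be sketched in a few lines. This is a toy illustration with made-up subject IDs, not how any real trial software works: shuffle the subject list, then split it, so that neither the subjects' characteristics nor the researchers' preferences decide who gets placebo.

```python
# Toy sketch of randomized group assignment (hypothetical subject IDs).
import random

random.seed(7)  # fixed seed so the example is reproducible

subjects = [f"subject-{i:02d}" for i in range(1, 21)]  # 20 made-up IDs
random.shuffle(subjects)                               # randomize the order

treatment_group = subjects[:10]   # first half gets the intervention
placebo_group = subjects[10:]     # second half gets placebo

print(len(treatment_group), "treated,", len(placebo_group), "on placebo")
```

In a double-blinded trial, the mapping of IDs to groups would be held by a third party until the study is complete.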
4. Sample Size (N) THIS IS KEY
Was the study population large enough to have sufficient "power" to detect a statistically significant result?
N below roughly 50-100 is generally too small to be meaningful
Dropouts: the real N is how many subjects completed the study, not how many started
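The link between sample size and "power" can be made concrete with a toy simulation (all numbers here are hypothetical). Suppose a treatment really does shift an outcome by 0.5 units on a scale where individual variation has standard deviation 1. The sketch below runs many simulated trials of size N and counts how often a simple two-sample z-test detects that real effect:

```python
# Toy power simulation (hypothetical effect size and outcome scale).
import math
import random
import statistics

random.seed(42)  # fixed seed for reproducibility

def trial_is_significant(n, true_effect=0.5):
    """One simulated trial: n controls vs n treated subjects."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / n +
                   statistics.variance(treated) / n)
    # z > 1.96 roughly corresponds to the conventional p < 0.05 cutoff
    return diff / se > 1.96

def power(n, trials=1000):
    """Fraction of simulated trials that detect the (real) effect."""
    return sum(trial_is_significant(n) for _ in range(trials)) / trials

for n in (10, 50, 200):
    print(f"N = {n:3d} per group -> power ~ {power(n):.2f}")
```

With only 10 subjects per group, most simulated trials miss the effect entirely; with 200 per group, nearly all of them find it. A real study that is too small can conclude "no effect" when the effect is real.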
5. Correlational vs. Experimental Research
Correlational: no manipulation of variables
CANNOT PROVE CAUSAL RELATIONSHIP
Experimental: manipulate variable and measure effect
Example: homocysteine and cardiovascular disease (CVD)
Correlational: observe presence of CVD in subjects with certain homocysteine levels
Experimental: manipulate homocysteine level and observe effect on CVD
Independent Variable: the manipulated variable (homocysteine)
Dependent Variable: the outcome measured for an effect (CVD)
6. Statistical Significance (p-value) THIS IS KEY
The probability that the observed relationship between variables occurred by pure chance
The higher the p-value, the weaker the evidence that the relationship is real
A p-value of 0.05 means there is a 5% probability that the relationship between the variables is a fluke
(i.e., if there were truly no effect, about 1 in 20 repetitions of the experiment would produce a relationship this strong by chance alone)
p < 0.05 is conventionally considered significant; p ≥ 0.05 is not
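One intuitive way to see what a p-value measures is a permutation test on made-up data: shuffle the group labels many times and ask how often pure chance produces a difference at least as large as the one observed. That fraction is the p-value. The data below are invented for illustration:

```python
# Permutation-test sketch of what a p-value means (hypothetical data).
import random
import statistics

random.seed(0)  # fixed seed for reproducibility

control = [5.1, 4.8, 5.3, 4.9, 5.0, 5.2, 4.7, 5.1]  # made-up measurements
treated = [5.6, 5.9, 5.4, 5.8, 5.7, 5.5, 6.0, 5.6]

observed = statistics.mean(treated) - statistics.mean(control)
pooled = control + treated

# Shuffle the labels many times; count how often chance alone produces
# a group difference at least as large as the one we actually observed.
extreme = 0
shuffles = 10_000
for _ in range(shuffles):
    random.shuffle(pooled)
    fake_diff = statistics.mean(pooled[8:]) - statistics.mean(pooled[:8])
    if fake_diff >= observed:
        extreme += 1

p_value = extreme / shuffles
print(f"observed difference: {observed:.2f}, p ~ {p_value:.4f}")
```

Here almost no shuffle reproduces the observed gap, so the p-value is tiny: chance is a very unlikely explanation. A p-value near 0.05 would instead mean roughly 1 shuffle in 20 does as well as the real grouping.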
7. Meta-analysis
Combines the results of several studies with the aim of more powerfully estimating "effect size" (how many cases are needed to see an effect, e.g., how many patients must be treated to save "x" number of lives)
Meta-analyses are problematic because they combine studies with different designs, inclusion criteria, etc.
8. Funding THIS IS KEY
If the milk industry funded a report on the benefits of milk throughout the lifespan, it is suspect until repeated by a neutral investigator.
Finding out about an author's neutrality is often easy: look for the list of affiliations and disclosures reported at the end of the paper. Most reputable journals now require such reporting.