Wednesday, August 24, 2011

High Fructose Corn Syrup - The Poisoning of America

   Why should you know the high fructose corn syrup (HFCS) story? Because it's an incredible parable that has it all: presidential politics, the unintended consequences of manipulating our foods, good guys and bad guys, the obesity epidemic, and the neuroscience of appetite control. If you think you're not an HFCS consumer, I'll bet you're wrong. It is added to practically all prepared food products, not just sodas and juices (breads, breakfast cereals, ketchup, cookies, ice cream, crackers, cough syrup, cottage cheeses, yogurts, applesauces, pickles, jams, fruit, salad dressings, sauces, soups, sports drinks... you get the picture).

     The tale begins with President Nixon in the early '70s. He feared that unstable food prices (especially sugar) could cost him the election, so he assigned his Secretary of Agriculture, Earl Butz, the task of exploring ways to produce cheap food. In 1966 a Japanese scientist had invented HFCS, a very inexpensive, very sweet, and very stable substitute for traditional sugar (sucrose). Bingo! HFCS was introduced to this country, stabilized the cost of sugar, and quickly found its way into almost everything.

      Generally speaking, the sweeter a food, the more people like it. If you want to increase sales, make the product sweeter. Once there was a very cheap way of doing that, we were off to the races. Soda and juice led the way. Soft drink consumption has increased by 41% in the last 20 years, and fruit drinks have posted a 35% increase in the same period. But something curious happened. Somehow this increased sugar consumption did not translate into our feeling full. In fact, we started eating more. Our innate appetite feedback system had been circumvented.

     It is essential to understand that human physiology evolved over millions of years in an environment that provided very little sugar. In other words, we're not made to handle much of the stuff. A quick illustration of how that has changed: in the late 15th century, when Columbus introduced sugar cane to the New World, most Europeans had never eaten sugar. By 1700 the average Englishman consumed 4 pounds of sugar per year; by 1800, 18 pounds; by 1900, 90 pounds. But the United States has surpassed all other nations in this arena. The average American now consumes more than 140 pounds of sugar per year (much of it in the form of HFCS). And it shows.

     But why aren't we sated? How is it possible to knock back a 20-ounce soda that provides 240 calories and still eat as much or more than we would have if we'd had 20 ounces of water? This is where HFCS distinguishes itself.

     Our bodies control energy balance (the eating and burning or storage of calories) through a complex feedback system of hormones and neural connections in which glucose is the primary indicator of global energy status. If there is an energy surplus, we store glucose as glycogen and make fat. If there is an energy shortage, we break down glycogen and make new glucose. When we eat, our blood glucose rises, initiating a sequence of reactions that reaches higher brain centers, where this "information" is processed and a behavioral response (stop eating) is triggered. Fructose does none of this. Not only does increased fructose consumption fail to produce the experience of satiety, it increases appetite!

     This is why the curves for HFCS and obesity track together. The food industry has found the perfect ingredient, sweeter than old-fashioned sugar and an appetite stimulant. We have outsmarted ourselves.

     In my next post I will look at the medical advice that fueled the obesity epidemic.


Tuesday, August 9, 2011

Drowning in Health Advice: How to Separate The Wheat From The Chaff

         The web has provided unprecedented access to medical information. That's the good news. The bad news is that it can be quite difficult to discern good data from junk. The problem can crop up in any source: a blog, a newspaper article, a television special, a website, or a medical journal. In this post I suggest a quick, methodical approach that can give you a good idea of the validity of any medical report.

     Unfortunately we must contend not only with our incomplete understanding of medical conditions, but also with the willful manipulation of information for profit. The pool of data found on the web is contaminated by a variety of influences, including pharmaceutical and nutraceutical companies, marketers, the USDA, the FDA, and many others. Here's just one recent example: we were told by the authorities (who cited research papers) to stop eating fat and replace it with "heart-healthy" carbs. This resulted in an obesity and diabetes epidemic. There is controversy over such basic things as how much water to drink, how much exercise is necessary, and what diet is healthiest.

     So here's what you have to do. Go to the original research paper. Do not trust someone else's interpretation of the data. We all have a weakness for listening to "experts" and suspending our own judgment. This is especially true when we're sick. Be skeptical. Use the eight-point checklist below and you'll be on solid ground.


MOST OF THIS INFORMATION CAN BE OBTAINED QUICKLY FROM THE ABSTRACT, A ONE-PARAGRAPH SUMMARY OF THE RESEARCH PAPER FOUND AT THE BEGINNING OF THE REPORT.

1.  Where was it published?
     If it's not in a peer-reviewed journal, it has not been vetted by other researchers in the same field.

2.  What was the study hypothesis (and was it consistent with the conclusion)?
     If the conclusion is not an answer to the question the study was designed to address, no conclusions can be drawn.

3.  What was the study design?

    Randomized Controlled Trial (RCT)
    This is the gold standard.

    Compares an intervention group with a control group
    Subjects are randomly assigned to the groups (a simple sketch of randomization follows this item)
    Ideally the trial is double-blinded (i.e., neither the subjects nor the researchers know who is getting the placebo until the study is complete)

    Cross-sectional (one point in time) vs. longitudinal (over a defined period of time)
    Longitudinal studies are usually more telling.

    Prospective (groups observed going forward in time) vs. retrospective (groups examined by looking back in time)
    Prospective studies are usually more informative.
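
    Here is a minimal sketch of the randomization step, assuming nothing more than a list of made-up subject IDs (Python; the names and group sizes are purely illustrative):

```python
import random

# Hypothetical illustration: randomly assign 10 made-up subjects
# to a treatment group or a control group.
subjects = [f"subject_{i}" for i in range(1, 11)]
random.shuffle(subjects)                      # the randomization step
treatment, control = subjects[:5], subjects[5:]

print("treatment group:", treatment)
print("control group:  ", control)
```

    Because chance alone decides who lands in which group, any difference seen at the end of the trial can be attributed to the intervention rather than to how the groups were selected.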

4.  Sample Size (N)  THIS IS KEY

    Was the population observed large enough to give the study sufficient "power" to detect a statistically significant result? (A rough power simulation follows this item.)
    As a rule of thumb, an N below 50-100 is generally too small to yield significant results.

    Dropouts: the real N is how many subjects completed the study, not how many started.
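
    To make the idea of "power" concrete, here is a small simulation sketch (Python, using numpy and scipy; the effect size, group sizes, and number of repetitions are assumptions chosen only for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical example: estimate the "power" of a two-group study by simulation.
# Assume the treatment truly shifts the outcome by 0.4 standard deviations.
rng = np.random.default_rng(0)
true_effect = 0.4
n_per_group = 25          # try 25 vs. 100 to see why small studies miss real effects
n_simulations = 2000

hits = 0
for _ in range(n_simulations):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        hits += 1

print(f"estimated power at N={n_per_group} per group: {hits / n_simulations:.2f}")
# With 25 per group the real effect is detected well under half the time;
# with 100 per group the power climbs above 0.8.
```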

5.  Correlational vs. Experimental Research

    Correlational: no manipulation of variables
                   CANNOT PROVE A CAUSAL RELATIONSHIP

    Experimental:  manipulate a variable and measure the effect

    Example:  homocysteine and cardiovascular disease (CVD)

              Correlational:  observe the presence of CVD in subjects with various homocysteine levels
              Experimental:  manipulate homocysteine levels and observe the effect on CVD

              Independent variable:  the manipulated variable (homocysteine)
              Dependent variable:  the outcome that is observed (CVD)

6.  Statistical Significance (p-value)  THIS IS KEY

    The probability that the observed relationship between the variables occurred by pure chance

    The higher the p-value, the weaker the evidence that the relationship is real

    A p-value of 0.05 means there is a 5% probability that the relationship between the variables is a fluke
    (i.e., roughly 1 in every 20 repetitions of the experiment would show a relationship this strong or stronger by chance alone)

    p >= .05:  generally considered not significant (a value right at .05 is borderline)

    p < .05:  statistically significant; p < .01 is often described as highly significant
    (A small simulation of the 1-in-20 "fluke" rate follows this item.)
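
    Here is a brief sketch of that fluke rate (Python, using numpy and scipy; the sample sizes and number of simulated experiments are arbitrary choices for illustration). When two groups are drawn from exactly the same population, about 5% of experiments still come out "significant" at p < 0.05:

```python
import numpy as np
from scipy import stats

# Hypothetical example: with NO real effect, about 5% of experiments
# will still produce p < 0.05 purely by chance -- the "fluke" rate.
rng = np.random.default_rng(1)
n_experiments = 5000
false_positives = 0

for _ in range(n_experiments):
    group_a = rng.normal(0.0, 1.0, 50)   # both groups drawn from the same population
    group_b = rng.normal(0.0, 1.0, 50)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

print(f"fraction of 'significant' results despite no real effect: "
      f"{false_positives / n_experiments:.3f}")   # close to 0.05
```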

7.  Meta-Analysis

    Combines the results of several studies with the aim of estimating the "effect size" more powerfully (the magnitude of an effect, often expressed clinically as how many patients must be treated to prevent one bad outcome; a worked example follows this item)
    Meta-analyses can be problematic because they combine studies with different designs, inclusion criteria, etc.
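
    The arithmetic behind that "patients treated per outcome prevented" figure (the number needed to treat) is simple; here is a tiny sketch using made-up event rates:

```python
# Hypothetical example: number needed to treat (NNT) from two made-up event rates.
# Suppose 10% of untreated patients have a heart attack versus 8% of treated patients.
untreated_event_rate = 0.10
treated_event_rate = 0.08

absolute_risk_reduction = untreated_event_rate - treated_event_rate   # 0.02
nnt = 1 / absolute_risk_reduction                                     # 50

print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"number needed to treat:  about {nnt:.0f} patients to prevent one event")
```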

8.  Funding THIS IS KEY  

    If the milk industry funded a report on the benefits of milk throughout the lifespan, it is suspect until the finding is repeated by a neutral investigator.

    Finding out about an author's neutrality is often made easy by the list of affiliations reported at the end of the paper. Most reputable journals now require such reporting.