Wednesday, December 14, 2011

Our Two Selves: Experiencing and Remembering

     The past decade has witnessed an explosion of new ways to look at how we humans make decisions. These insights have sprouted from the fields of psychology, computational neuroscience and behavioral economics. The traditional model of how we choose centered on psychic conflict, warring parts of the mind, instinct versus reason, id against ego, unconscious motivations avoiding conscious recognition. Both Freud and Plato used allegories of mental conflict that depicted a battle between a horse and its rider.

     The horse provides the locomotor energy, and the rider has the prerogative of determining the
     goal and of guiding the movements of his powerful mount towards it.  But all too often in the
     relations between the ego and the id we find a picture of the less ideal situation in which the
     rider is obliged to guide his horse in the direction in which it itself wants to go.
     Freud, New Introductory Lectures on Psychoanalysis

     The charioteer of the human soul drives a pair, and one of the horses is noble and
     of noble breed, but the other quite the opposite in breed and character. Therefore in our case the
     driving is necessarily difficult and troublesome.
     Plato, Phaedrus

     Both these thinkers paint a picture of human intellect or reason fighting forces within us that lead us astray. These unconscious agents distort our perception of "reality" and hide our true motivations. But there has always been an optimism about overcoming these influences through self-awareness and discipline.

     More recent work is less sanguine about even such basic things as our ability to know what makes us happy, or the capacity to store accurate memories of what we've experienced.

     Daniel Kahneman, who received the Nobel Prize in Economic Sciences in 2002 for his work on decision making, has elegantly demonstrated how our brain is designed in such a way that we often cannot trust our preferences to reflect our interests. His work vividly shows how this is a consequence of having two mental operating systems, an experiencing self and a remembering self.

     The experiencing self is the you in the moment who lives through the event. The remembering self is the you that writes the history. It is also the remembering self that is consulted when planning the future. Choices are made based on the remembering self's construction of what happened in the past. Now here's the problem. The experiencing self and the remembering self don't agree on what happened. In fact, Kahneman has shown that certain discrepancies are hard-wired. Let's look at some examples.

     Subjects had a hand immersed in ice water at a temperature that causes moderate pain. They were told that they would have three trials. While one hand was in the water, the other hand used a keyboard to continuously record their level of pain. The first trial lasted 60 seconds. The second trial lasted 90 seconds; during the last 30 seconds, however, the water was slowly warmed by 1 degree (better, but still painful). For the third trial, they were allowed to choose which of the first two trials was less disagreeable, and repeat that one.

     Here's what they found. Are you sitting down? 80% of the subjects who reported experiencing some decrease in their pain in the last 30 seconds of the second trial chose to repeat the 90 second experience! In other words, their remembering self selected the option that required an additional 30 seconds of suffering.

     What gives?

     Many similar experiments have revealed two rules that govern the remembering self's recording of an experience.
     1. Duration does not count.
     2. Only the peak (best or worst moment) and the end of the experience are registered.
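The two rules above amount to a simple formula, and it's easy to see how it misranks the ice-water trials. Here is a toy sketch in Python; the trial design is Kahneman's, but the pain traces and the 0-10 scale are invented for illustration:

```python
# Hypothetical pain traces on a 0-10 scale, one sample per second --
# the experiment is Kahneman's; these particular numbers are made up.
trial_1 = [7] * 60              # 60 s of constant moderate pain
trial_2 = [7] * 60 + [5] * 30   # the same 60 s, plus 30 s of milder pain

def experienced(trace):
    """The experiencing self: total suffering, integrated over the duration."""
    return sum(trace)

def remembered(trace):
    """The remembering self: only the peak and the end are registered."""
    return (max(trace) + trace[-1]) / 2

experienced(trial_1)  # 420 pain-seconds
experienced(trial_2)  # 570 pain-seconds -- 30 extra seconds of suffering
remembered(trial_1)   # (7 + 7) / 2 = 7.0
remembered(trial_2)   # (7 + 5) / 2 = 6.0 -- the longer trial is "remembered" as better
```

The experiencing self suffered 150 extra pain-seconds in the second trial, yet the remembering self scores it as the better one, because duration doesn't count and only the peak and the end survive.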

     This has profound implications. For instance, should a doctor attempt to minimize a patient's memory of pain or experience of it?  A procedure's duration and anesthesia level would be addressed differently depending on the priority.

     It is only by confusing experience and memory that we believe experiences can be ruined. Kahneman speaks of "the tyranny of the remembering self" in the way it makes decisions.

   We seem to be in the business of creating memories, not experiences.

     I'll leave you with a question that will tell you something about your relationship with experience versus memory. You have a choice of two vacations. One is your fantasy of the perfect getaway. It could not be improved upon. The second is a typical good vacation. The only caveat is that if you choose to go on the dream vacation, you will have no memory of it.

     Your call.







Friday, December 2, 2011

The Trouble With Knowing Thyself

     In the August 2010 entry "A Hen Is Just An Egg's Way of Making Another Egg" we showed how the driving force behind natural selection is survival and reproduction, not truth, and gave several examples of deception in nature. Our evolutionary history has hardwired false belief systems that turn out to be an essential part of our nature.

     For example, it has been repeatedly shown that men overperceive the sexual interest and intent of women. The Darwinian rationale for such a distortion is that the cost of this misbelief is much less detrimental to reproductive success than its opposite, the man's sense that the woman is uninterested. For women, not surprisingly, the cost asymmetry is reversed. For a woman to falsely believe in a man's commitment to familial investment is the costlier error, because it can end in abandonment and therefore a lower chance of the offspring's survival. The opposite mistake, skeptically underestimating a genuinely committed man, merely delays reproduction, a much less costly error. Let the mating dance begin.

     Evolution has also crafted certain misbeliefs about ourselves. One particularly striking example is the "better-than-average effect". Most people judge themselves to be more intelligent, honest, original, friendly, and reliable than the average person. Drivers who have been hospitalized as a consequence of their poor driving rate themselves as having better than average driving skills. My favorite example is that most people perceive themselves as less prone to such self-serving distortions than others.

     Traditional psychological theories have considered a close relationship with truth as an essential ingredient of mental health.  We're no longer so sure. In an amusing study, investigators assessed reality testing (how accurate one's observations are about oneself and the environment) in people on a spectrum of moods. The scale ran from clinically depressed, moderately depressed, normal mood, elevated mood, to manic. Surprisingly the moderately depressed won the contest, providing the most "accurate" responses. The normal mood group consistently demonstrated unrealistically positive evaluations of themselves and their loved ones, exaggerated perceptions of personal control or mastery, and unrealistic optimism about the future.

     These positive illusions are more accurately understood as design features of a normal mind rather than a brain function failure. In fact such positive misbeliefs are key to physical health as well. Unrealistically positive views about one's medical condition have been repeatedly linked to better outcomes than more accurate beliefs.

     One might wonder how we are so good at fooling ourselves.

     Because deceit is so fundamental in animal communication, there has been strong selection to spot deception. This in turn selected for self-deception: burying certain facts and motivations in the unconscious so that our own deceptions are least obvious as we enact them. This protective, fail-safe-like system filters what we will allow ourselves to know.

     We know our neighbor better than ourselves.

In the next entry we'll take a look at some of the ways our memory distorts things and how that makes it difficult to know how to pursue happiness.




Friday, November 4, 2011

Normal Medical Lab Results Not So Normal

     These days the typical annual check-up is inevitably preceded by "blood work". The technician assembles what appears to be enough tubes to drain your tank. (After seeing the little tubes used for babies and young children, I've wondered why adults can't do the same.) You watch the deep crimson liquid flow into the test tubes and hope it harbors no surprises, that everything will be analyzed and found "normal".

     In our medical system, health is defined in the negative, the absence of abnormal findings. Medical language reflects this in the curious nomenclature of test results. A "negative" test result means you're not sick, a positive thing. The medical field continues to define health as the absence of disease, an impoverished conceptualization that underlies the most fundamental problems in our health care (it should be called sick care) system.

     Over the past twenty years the history taking (a time when physicians sat with patients and got to know them, who they lived with, the work they did, the family story, the struggles, their habits, their vices and values, in other words got to know what was "normal" for them) and physical exam have been eclipsed by sophisticated blood tests and imaging techniques. This is a casualty of dramatic advances in science and technology as well as financial pressures that encourage abbreviated visits and greater patient volume.

     So you might have a quick chat before undressing for a rapid physical exam. When you rejoin your doctor at her desk, she takes a moment and pages through your chart. She scans the lab results checking the abnormal column. If it's clear, you're good to go. "See you next year. You're fine."

     But where do these normal ranges come from? A reasonable person might presume normal ranges are defined by the test results of people whose health has been carefully scrutinized and found to be optimal.


     Normal lab numbers are determined by huge reference laboratories. Let's take blood glucose as an example. All the glucose test results from the past few months are pooled. The middle 95% are defined as normal. If you fall in the 2.5% above or below that range, you have an abnormal blood glucose. In other words, the medical use of the term "normal", which is understood to mean healthy, is really a statistical concept.
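In code, the lab's procedure is nothing more than a percentile cut. A minimal sketch of the idea; the simulated glucose values below are invented stand-ins for whatever a reference lab actually pools:

```python
import random

# Simulated pool of glucose results (mg/dL) from a lab's recent draws --
# invented numbers standing in for the lab's real, unscreened population.
random.seed(0)
results = [random.gauss(95, 12) for _ in range(10_000)]

def reference_range(values, central=0.95):
    """'Normal' = the central 95% of pooled results -- a purely statistical
    cut, made with no knowledge of who was actually healthy."""
    tail = (1 - central) / 2
    vals = sorted(values)
    return vals[int(tail * len(vals))], vals[int((1 - tail) * len(vals)) - 1]

low, high = reference_range(results)
# Fall outside (low, high) and you're flagged "abnormal"; creep toward the
# edge for years and you're "fine" -- right up until you're not.
```

Note that nothing in the computation asks whether the people contributing results were healthy; the cutoffs simply track whoever happened to get tested.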

     There is an attempt at refining the normal ranges by using information about age, gender, and location. So a 70-year-old male in Flagstaff will be compared mostly against men of similar age. But no diagnostic history is available to the reference labs. Test results from venues thought to attract mostly "healthy" people, such as health fairs, are also used. But isn't someone more likely to get blood drawn at a health fair if they think there might be a problem?

     So let's take a look at how this plays out. Your annual blood glucose creeps up each year, but remains in the normal range for a decade. Year 11, your doctor says, "Bill, I'm afraid you've got a sugar problem." Bill thinks, "wow, and last year I was fine."

     Bill was not fine.

     So there are three fundamental problems. One, we are compared to a population, many of whom are not healthy. Two, most physicians do not follow trends in your lab values in order to catch a problem before it's progressed to a point where it compromises your health. And three, most physicians no longer "know" their patients.
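Problem two is the easiest one to fix yourself: a simple trend line over your own yearly results flags Bill's drift years before the cutoff is crossed. A sketch with hypothetical numbers for Bill:

```python
# Bill's fasting glucose (mg/dL) over ten annual check-ups -- hypothetical
# numbers, each one comfortably inside a "normal" cutoff of 100.
bill = [84, 86, 87, 89, 91, 92, 94, 96, 97, 99]

def trend_slope(values):
    """Ordinary least-squares slope over equally spaced yearly results."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

slope = trend_slope(bill)                    # roughly +1.65 mg/dL per year
years_to_cutoff = (100 - bill[-1]) / slope   # under a year away
```

Every single value is "normal", yet the slope has been shouting the diagnosis for a decade; that is exactly the information a one-column abnormal/normal checklist throws away.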

     The most common illnesses of our culture are chronic, such as diabetes, coronary artery disease, high blood pressure, and obesity. The kind of medicine I've described above is partly to blame. You don't wake up one day with a chronic disease that you didn't have the day before. Nobody became obese or diabetic or hypertensive on Tuesday. Until preventive medicine replaces the acute care model, it won't get better.

                                          WHAT YOU CAN DO

1.  You are the most important member of your health care team.
     Own a copy of your test results. The system is broken and you cannot assume essential data
     will be available to the doctors making decisions about your care.
     It is your job to make yourself "known" to your doctor. You must inform her of what "normal"
    (as defined above in history taking) life is for you. 

2. You provide your own normal lab reference range. Get a baseline set of blood work when you're well
     and track your results over time.
     Address trends early; do not wait until results have hit the abnormal range.
3. Take all medical statements about your health with a dose of skepticism. Doctors can provide
     information about large populations. That is not the same as knowing what is in store for you.
Do your homework. Read about your condition. Get second opinions.

Friday, October 28, 2011

The Skinny on Fat Loss

     J.B.S. Haldane, one of the most eccentric and brilliant biologists of the twentieth century, described four stages in the acceptance of a new theory.
1. This is worthless nonsense.
2. This is an interesting, but perverse, point of view.
3. This is true, but quite unimportant.
4. I always said so.

     The public remains in stage one or two with regard to some essential truths in health and fitness.

     Let's start with one of the most unfortunate misunderstandings in health and fitness history, "FAT BURNING ZONES". The most common reason people exercise is to lose fat. And in the vast majority of cases, they fail. But it's not their fault. The public was told that longer duration low-intensity exercise, like a jog on a treadmill or a steady session on a stationary bike provides the heart rate that is optimal for burning fat.


     For a definitive review of how aerobic exercise fails as a weight loss strategy, see Thorogood et al., American Journal of Medicine, 2011 Aug;124(8):747-55.

     It is true that in this "fat burning" heart rate zone proportionally more fat than carbohydrate is used. But much less total fat is burnt with these exercises than with more demanding routines, such as interval training. We should have known something didn't sound right.

     Here's a quick quiz.

     Who has a higher body fat percentage, the marathoner or the sprinter?

     If you said this has to be a trick question and I'll go with the counterintuitive response, you were correct! Why is the sprinter, whose training runs total less distance, less time, and fewer calories, the leaner one? Because he does resistance/weight training. That's right. It's exactly the opposite of what we were led to believe. Metabolic or interval resistance training, which the pros have been doing since the 1950s, is the most time-efficient way to burn fat. It gives the biggest bang for the buck, and you see results faster than with any other intervention.

     The reason these more intense forms of exercise burn more fat is that they induce a metabolic disturbance that requires lots of energy to recover from. This is a key point. It is not how many calories or how much fat you burn during the exercise; it is what happens after. All the pathways stimulated to address the "insult" of excessive demand during the exercise are sometimes called the "afterburn". This includes a host of reactions such as EPOC (Excess Post-Exercise Oxygen Consumption) and increases in resting metabolic rate. Let's face it, if you work out an hour a day (which is heroic), that leaves 23 non-exercise hours per day. If your routine only changed your physiology during that hour, it could not have much impact on anything.
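To make the "it is what happens after" point concrete, here is a back-of-the-envelope comparison. Every kcal figure below is invented purely to illustrate the logic, not a measured value:

```python
# Invented, illustrative kcal figures -- the point is the bookkeeping,
# not the specific numbers.
steady_cardio = {"during": 300, "afterburn": 30}       # long jog: bigger in-session burn, small EPOC
interval_training = {"during": 250, "afterburn": 150}  # intense intervals: smaller in-session burn, large EPOC

def day_total(workout):
    """What matters is the 24-hour total, not the treadmill display."""
    return workout["during"] + workout["afterburn"]

day_total(steady_cardio)      # 330 kcal
day_total(interval_training)  # 400 kcal -- the "losing" workout wins once recovery is counted
```

Judged only by the in-session number, the jog looks better; add the recovery cost and the ranking flips.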

      I know this sounds like an advertisement, but it gets even better.

     The "fat burning zone" concept is actually a symptom of a much larger misconception, the endurance/cardio training - strength/resistance training dichotomy.  Most of the world, from doctor to fitness professional to health conscious layman believe that these are mutually exclusive domains. This has also proven to be false. And that's good news.

     Traditionally, exercise has been classified as either strength or endurance. Strength training consists of short-duration, intense muscular work that results in hypertrophy; endurance training is characterized by prolonged, low- to moderate-intensity work that increases oxidative or aerobic capacity. The scientific community believed that these two forms of exercise triggered different pathways that could not be engaged simultaneously. However, recent research has found considerable overlap in these two pathways.

   For example, high intensity interval training (which is really just using resistance training with supersets or circuits to elevate heart rate while not allowing sufficient recovery between sets) produces metabolic and performance adaptations similar to endurance training. No one thought it was possible to improve aerobic performance this way. In fact, HIIT appears to be better than endurance-type training for muscle buffering capacity (the muscle's ability to neutralize the acid that accumulates during intense work).

     I'll leave you with a study that beautifully illustrates what we've been speaking about today: Bryner et al., "Effects of resistance vs. aerobic training combined with an 800 calorie liquid diet on lean body mass and resting metabolic rate," Journal of the American College of Nutrition, 1999 April;18(2):115-21.

 (I'm sure you noticed the 800 calorie diet! The authors were interested in the effect of different types of exercise on subjects on a very low-calorie diet (VLCD). One of the problems with VLCDs is their tendency to cause muscle loss and lower resting metabolic rate, two things that make it even more difficult to lose weight.)

   So, there were two groups, one did aerobic exercise, the other resistance training. The aerobic group exercised for four hours per week and the resistance group did 2-4 sets of 8-15 repetitions for 10 exercises, three times per week.

     Both groups lost weight. But the resistance training subjects did not lose muscle, lost much more fat, and experienced an increase in their resting metabolic rate compared to the aerobic group. (The aerobic group's metabolic rate decreased.) The most stunning result, however, was that VO2max increased equally in the two groups!

     So if you want to improve aerobic performance, get stronger, and lose fat, intensify and shorten your workout.

     For once, less is more.


Monday, October 24, 2011

Probiotics: Pro or Con?

     There are way more bacterial cells living in our gut than the total number of our own cells in our entire body. We are, so to speak, colonized. These gut microbes turn out to be incredibly important. Anyone who has been on antibiotics, which kill many of these bacteria, can attest to the stomach misery caused by upsetting the balance of these little lodgers. Growing evidence suggests that too many of the wrong bugs can cause obesity.

     We are born with a pristine intestine, literally sterile. However, it is immediately invaded by the bacteria in mother's milk and environmental bacteria introduced by bottle. The average adult harbors between 1,000 and 1,500 bacterial species, 160 of which constitute the core group or what's called the core microbiota.

     Researchers have noticed that altered gut microbiota is associated with diseases that became prevalent in the 21st century. For instance, a reduced diversity of these bugs is seen in inflammatory bowel disease, metabolic syndrome (prediabetes) and obesity.  Specifically, the number of Firmicutes was increased, and the number of Bacteroidetes was reduced in obese people compared with lean folks. Interestingly, weight loss by dieting eliminated those differences. These two types of bacteria represent over 90% of all bacterial cells found in the human intestine.

     So how do these critters make us fat?

     Diet, not surprisingly, has a profound effect on what grows in our gut. Switching from a lean diet to a high-fat Western diet dramatically alters the microbiota in a negative way. These changes are incredibly fast, starting in the first 24 hours of introducing the new foods.

     Once the "bad" bacteria overpopulate, it is easier to absorb calories from the gut. The bugs provide an increased capacity not only to breakdown nutrients, but also make the gut wall more permeable. This allows more nutrient absorption (mostly glucose i.e. sugar). They also exert their influence beyond the gut promoting fat storage throughout the body by a variety of mechanisms including the altering of hormone levels responsible for orchestrating appetite, satiety, and fat metabolism.

     What can we do?

     We are still learning how best to harness probiotics. Different strains of lactobacilli (gasseri is one) have been shown to decrease fat and the risk for type 2 diabetes by increasing insulin sensitivity. Inulin-type fructans (found in fruits and vegetables) reduced weight, appetite, and blood sugar levels, and increased insulin sensitivity.

     But the bottom line remains, if eating meats, make them lean. The fresher and less altered the food, the better, in part because it will have a positive effect on the gut microbiota. Lots of local vegetables, and fresh dairy with live cultures are best for the same reason.

     Probiotics are no con job. Just do your homework before collecting bottles of preparations in your medicine cabinet. We are just beginning to understand these microbes.

Thursday, October 13, 2011

If It Ain't Broken, Don't Fix It: A Bad Week For Vitamins

     There is an old Sufi story in which Mulla Nasrudin is in his yard, in front of his house throwing corn. A man passing by is puzzled and stops. "Mulla Nasrudin", he asks, "why are you throwing corn all over your yard?" "It keeps the tigers away," he replies. "But there are no tigers around here." "Well it works then, doesn't it?" the mulla replies.

     This parable is similar to our health behavior. We are advised by "experts" (often people selling us something) who cite epidemiological studies suggesting that if you take some product you'll prevent some malady. The problem with most of these studies is that they cannot demonstrate a cause-and-effect relation between an intervention (like a vitamin) and an outcome (let's say not getting cancer).

     Approximately one-half of adult Americans used dietary supplements in the year 2000, with sales of $20 billion that year. This represents a shift from using vitamins and minerals to prevent deficiency states, to using them in the absence of malnourishment, to promote wellness and prevent disease. Unfortunately, we have no good data that indicates this makes sense. In fact, the results of randomized clinical trials suggest vitamin and mineral use can be harmful.

     A large, well designed, and well conducted study published this week in the Archives of Internal Medicine reported the results of the Iowa Women's Health Study. The investigators assessed the relation between vitamin and mineral supplementation and mortality in 38,772 women with a mean age of 61.6 years. They found that the use of multivitamins, B6, folic acid, magnesium, zinc, iron and copper was associated with an increased risk of mortality. The association was strongest for supplemental iron. The association for iron was dose dependent, that is, the higher the dose, the more deaths observed.

     A second study published this week in the Journal of the American Medical Association demonstrated that vitamin E supplementation not only does not protect against prostate cancer, it may increase risk. Men receiving a common dose of vitamin E (400IU) had a 17% increased risk for prostate cancer compared to men who received placebo.

     In my recent blog, That Which Does Not Kill Us Makes Us Stronger: Why You Should Throw Out Your Vitamins, I discussed the negative effect of antioxidant vitamins before exercise. The JAMA paper cited above contributes to a growing body of evidence indicating that vitamin E, vitamin A, and beta-carotene can be harmful.

     These negative reports are particularly concerning given that those who take supplements tend to have healthier lifestyles (non-smokers, low-fat diets, exercisers) than non-users. So even in the context of good preventive health practices, vitamins can impair health.

     If one thinks about evolution, and how we've eaten throughout our existence, it is not shocking to read these reports. We have never taken in such quantities of these nutrients in our history. The vitamins and minerals were always packaged in foods that dictated how they were assimilated into our system. More is not better. It seems to be worse.

     We must wonder how much of our life is spent throwing corn, and what our own personal tigers are.



Saturday, October 8, 2011

Sweet Nothings? Sugar Substitutes and Weight Gain

     The only source of sweet for 99.9% of human existence has been glucose and fructose. Not surprisingly we developed a physiology where feeding behavior is largely controlled by the ebb and flow of blood levels of these sugars and their metabolites which reflect our energy status. In other words, a part of the brain watches our gas tank and sends messages accordingly, directing us toward or away from the kitchen. The obesity epidemic strongly suggests that we have lost this signal.

      The sources of sweet started to change after World War II. The combination of a sugar shortage and a changing esthetic that favored a thin figure encouraged women to try a sugar substitute. Saccharin (Sweet N' Low), the oldest nonnutritive sweetener, was discovered in 1879 at Johns Hopkins during experimentation with coal tar derivatives. Saccharin had been used to replace sugar in soda marketed to diabetics until after the war, when soda bottle labels were changed from "for use only in people who must limit sugar intake" to "for use in people who desire to limit sugar intake." Saccharin, which is 300 times sweeter than sucrose (table sugar, a disaccharide composed of 50% glucose and 50% fructose), was followed by cyclamate in 1937. Concern over cyclamate's capacity to cause cancer took it off the market in 1969. Similar concerns resulted in the FDA's plan to pull saccharin in 1977, but consumer protest reversed the decision. A warning label accompanied all saccharin products until 2000, when subsequent studies demonstrated that it is not carcinogenic. These investigations essentially silenced concerns over the safety of artificial sweeteners. Cyclamate continues to be available in 50 countries including Canada.

     The next generation of sugar substitutes gave us aspartame (NutraSweet, Equal, 200 times sweeter than sucrose), sucralose (Splenda, 600 times sweeter than sucrose), and Neotame, the sweetest, weighing in at 7,000 times the sweetness of sucrose. These sweeteners have been well received. Between 1999 and 2004, 6,000 food products containing these agents went to market. According to an ingredient search engine, there are now no fewer than 3,648 foods containing these chemicals in the U.S. A sizable majority of Americans consume artificial sweeteners, usually believing that they are making the healthy choice. In fact, diet soda drinkers' diets contain more whole grains and low-fat dairy, and less processed meat and refined sugar, than the general population. The idea that diet soda is a health food has accelerated with the recent "low-carb" diet fad.

     Unfortunately, what was supposed to provide the perfect solution to caloric overload and weight gain by eliminating the need for sugar failed miserably. In fact, many large epidemiological studies have demonstrated a positive correlation between artificial sweeteners and weight gain. How could this happen? Ironically, exactly what seemed to make nonnutritive sweeteners ideal, the capacity to provide unlimited sweetness with zero caloric load, opened the door to overeating on a scale our species has never witnessed.

     Human taste provides sensations of sweet, sour, salty, bitter, savory and possibly fat and metallic. While the identification and tracking of food relied upon the visual and olfactory systems, animals developed the capacity for taste in order to recognize potential nutrients and poisons. A keen sense of taste was enormously adaptive because it provided a guide to what was full of energy/calories (sweet), a source of electrolytes (salty), rich in protein (savory) and a potential toxin (bitter).

     Because life ceases without an energy source, our capacity to discern small differences in sweetness, and our preference for the sweeter, is innate, not learned. We come into the world fully loaded with a genius for choosing the sweeter option, the product of about two and a half million years of evolution. Newborns will invariably prefer a sweetened nipple. Numerous experiments have documented infants' pleasure response to sweetened water, including a slowed heart beat, relaxed face, hedonic brain pattern and endorphin release. Infants also learn to associate thicker fluids with greater sweetness because the viscosity and caloric density of human breast milk vary together.

     Experiments in a variety of animals, including humans, have repeatedly demonstrated that artificial sweeteners increase hunger and total energy intake, while sugar seems to trigger a mechanism that keeps energy consumption fairly constant. Functional MRI studies, in which the brain is imaged while a subject ingests something to see which areas are active, indicate that the food reward system responds differently to sugar versus artificial sweetener. This reward system is not only what drives appetite; when turned off, it also allows us to push away from the table before loosening our belts.

     When man tampered with nature and uncoupled the sensory signal (sweetness) from caloric load, a pairing we had adjusted to over some 100,000 generations, our capacity to know when we had enough was eradicated. Failure to activate the full food reward response fuels increased consumption.

     There is another unanticipated side-effect of these sugar impostors. In 2005 Americans ate 24 pounds of sugar substitutes per person, double the 1980 rates. Surprisingly, sugar consumption increased by 25% between 1980 and 2005. Our sweet receptors evolved in environments with so little sugar they seem to have no shut off point. By exposure to compounds that are hundreds to thousands of times sweeter than sugar, our taste for sweetness is being up-regulated. This has translated into consuming more sugar while using sugar substitutes.

     Once again what seemed like a no-brainer proved to be a disaster because of a disregard for our evolutionary history. It is not unreasonable to suggest that sugar substitutes have significantly contributed to the obesity and type 2 diabetes epidemics. Completely change something as basic as the fuel we've survived on since the beginning? What could possibly go wrong?


Saturday, October 1, 2011

That Which Does Not Kill Us Makes Us Stronger: Why You Should Throw Out Your Vitamins

     In this era of mass consumption of supplements and foods hawked as medicinal, we are bombarded by "healthspeak", a language that few understand. Antioxidant, free radical, and oxidative stress are prime examples of mystery expressions. Every field develops its own terminology in an attempt to create precise and agreed upon meanings to facilitate the communication of complicated ideas. Such technical jargon may be used by professionals in its field of origin or in the culture at large as a means to gloss over what is poorly understood, language masquerading as comprehension. It can provide a false sense of mastery, an attempt at reassuring ourselves that we know what's going on. But the most basic questions expose our ignorance. Ask anyone what an antioxidant is, if you want to have some fun.

     In order to tell you why you should throw out your vitamins, we need to go over some of the language and science in that arena. I promise to make it as painless as possible.

     As you may have guessed oxidation has to do with oxygen. It so happens that oxygen is both necessary for life and extremely toxic. The tragic cases of blindness in premature infants in the 1940s caused by high oxygen levels in the newly invented incubators gave us a taste of oxygen's destructive potential. The discovery of superoxide dismutase (SOD) in 1969, an agent that protects against oxygen damage and is found in almost all aerobic cells, marked the beginning of a vibrant field dedicated to the study of oxygen's effect on cell signaling, disease and ageing.

     You might wonder how we came to experience oxygen as both vital and deadly. The answer is simple: there was no oxygen in the earth's atmosphere when the earliest life forms developed. Some 2.45 billion years ago, blue-green algae evolved from the primordial ooze with the capacity to use sunlight, water and carbon dioxide to produce carbohydrates and oxygen, a process known as photosynthesis.
It then took 1 billion years (the "boring billion") for the oxygen levels to get high enough to enable the evolution of animals. There was a significant advantage in utilizing oxygen metabolically to generate energy, but it came with a price.

     Oxygen's structure is unstable (its outer shell lacks a full set of electrons), which makes it eager to react with almost anything in its vicinity. In doing so it destabilizes its neighbor. This can damage proteins and nucleic acids (DNA and RNA) and is considered a major cause of ageing and disease. Any compound, including oxygen, that can accept electrons is an oxidant, or oxidizing agent. These oxidants are often referred to as "radicals". Conversely, any agent that can donate electrons is an antioxidant, or reducing agent.

     So with the advent of an oxygen-rich atmosphere, organisms had to develop defenses against these new noxious oxidizing agents. Two of the most important antioxidants that our bodies manufacture are superoxide dismutase (SOD) and glutathione peroxidase, names you'll see bandied about in many health foods/products. However, despite nature's defense systems, some oxidative damage is always occurring.

     Enter the antioxidant vitamins, center stage. Theoretically, it makes perfect sense: if oxidative damage is a common cause of disease and ageing, antioxidant vitamins, like C and E, should help. Unfortunately, these vitamins have been a disappointment. They work beautifully in the laboratory, in test tube experiments, but not in animal studies. This has had no impact on sales of antioxidant vitamins, the most popular class of nutraceuticals.

     But it gets worse. In 2009 Ristow et al. published a stunning report in the Proceedings of the National Academy of Sciences entitled, "Antioxidants prevent health-promoting effects of physical exercise in humans" that turned everything on its head. Exercise, the most effective defense against obesity and type 2 diabetes (the acquired type associated with excess weight), exerts its therapeutic effects by increasing insulin sensitivity. In fact, exercise is more effective than medication in preventing type 2 diabetes in high risk individuals.

     For years it had been believed that exercise (contracting muscle fibers) caused oxidative damage. Ristow's lab demonstrated that exercise-induced oxidation actually plays an essential role in promoting insulin sensitivity. These changes are eliminated by daily consumption of the antioxidant vitamins C (500mg twice/day) and E (400IU/day). That is to say, C and E appear to block one of the most important beneficial effects of exercise on metabolism.

     This suggests that oxidative stress, something we thought was simply bad, is necessary to trigger the production of our innate defense mechanisms. Interestingly, antioxidant use is associated with increased rates of hypertension in type 2 diabetes and with increased overall mortality in the general population. How do we make sense of this?

     The repeated exposure to sub-lethal doses of stress results in greater stress resistance. This adaptive phenomenon is called hormesis. Such exposure has been shown to improve immune responses, decrease tumor formation and significantly slow ageing.

     It is not outlandish to wonder whether antioxidant vitamins are actually contributing to the diabetes epidemic. A diet rich in vegetables (a source of many antioxidants) may well decrease the risk of type 2 diabetes despite, not because of, the vegetables' antioxidant content.

     If all the vitamins were thrown into the sea, it would be all the better for mankind and all the worse for the fishes (after Oliver Wendell Holmes).



Friday, September 23, 2011

Sleeping with Big Pharma

     In the last blog, The Insomnia Epidemic, I spoke about how our natural sleep patterns differ from the "cultural norm" of one 6-8 hour block at night and suggested that this mismatch causes many people's sleep problems.
Sleep experts believe that our 24/7 culture has created such pervasive sleep deprivation that abnormal sleepiness is the norm. Unfortunately, there is no adapting to getting less sleep than we need. What happens instead is that we adjust to a sleep-deprived state in which our judgement, memory, reaction time, and many other functions are impaired. Studies also document how far off our subjective assessment of performance under sleep deprivation is: we consistently think we're doing fine, until we really can't function. The experts say that if you feel drowsy during the day, even when bored, you haven't had enough sleep. Similarly, falling asleep within 5 minutes of lying down in bed suggests severe sleep deprivation. So there's no question, the public is hungry for more (or better) sleep. And a smorgasbord of medications is on offer everywhere you look: in magazines, on television, and on the internet.

     According to Medco Health Solutions, a prescription drug benefit program manager, the number of adults ages 20 - 44 using sleeping pills doubled from 2000 to 2004.  Children apparently have not been spared. Usage increased 85% in 10-19 year olds during the same period. And the trend has continued. In 2008 the sale of prescription sleep aids totaled $3 billion. The number of prescriptions written for sleep meds exceeded 59 million in 2009, an increase of approximately 4 million scrips from the previous year.  In 2010 the pharmaceutical industry took in $5 billion from the sale of sleep medications.

     How do we explain this astronomical rise in sleep medication usage? There are only a few possibilities. It's hard to imagine that this population's sleep deteriorated so dramatically in those four years that it alone accounts for the trend. Some might suggest that the sleep impaired were always out there but hadn't been diagnosed and treated until the recent focus on insomnia. While this may have some truth, it hardly explains the rate of increase. After all, insomnia is not a sexually transmitted disease; there is no stigma attached to sleep disorders that would make it difficult for patients to report their concerns. Has the cultural environment changed significantly? Has this period witnessed big changes in our use of mobile devices, laptops, and the like? Have we become even more 24/7 since the turn of the century? Yes, to some extent we have.

    But I believe it's none of the above. So what happened?

     The combined effect of changes in three areas (medicine, advertising, and our psyche) created the perfect climate for these medications. Let's take them one at a time.

Medicine:
     Over the past two decades the practice of medicine has been transformed (some would say ruined). The average patient visit is 10 minutes. There is no time to discuss the patient's home life, work situation, or social or financial stressors. There is no continuity of care. Patients bounce around to specialists without anyone overseeing the whole person. So if people complain of difficulty sleeping, there will be no exploration of what's going on in their lives; that takes far too much time. Physicians still want to help. They want to respond to the patient's complaint. In this context, the prescription is the best they've got.
     In addition, physicians' diminishing control of the field of medicine (now run by government and insurers) and decreased remuneration have made them more susceptible to dubious practices, such as acceding to patient requests for specific medications.
     These forces have catalyzed the "medicalization" of sleep, a process whereby a formerly normal behavior is reframed as a medical problem. In fact, analysis of data over a 15-year period shows that prescriptions for the new generation of nonbenzodiazepine sleep meds grew 21 times more rapidly than sleeplessness complaints and 5 times more rapidly than insomnia diagnoses. The inappropriate use of medical solutions to treat problems of living fits neatly into the changes in the American psyche that I discuss below.

Advertising:
     In the early 1980s the pharmaceutical industry began marketing prescription drugs directly to the public. The FDA questioned this practice and imposed a moratorium in 1983, but lifted it in 1985. Not surprisingly, there is a striking correlation between the amount of money spent advertising a drug and that drug's sales. The 4 million increase in sleep scrips from 2008 to 2009 coincided with a 2008 direct-to-consumer ad budget of $500 million for Ambien CR and Lunesta, the most prescribed sleep meds that year.

Our Psyche:
     The funny thing about sleep medications is that they don't change your sleep very much. Efficacy studies show that people are not sleeping much better on these medications that are selling like hot cakes: on average, subjects fall asleep about 12 minutes faster and sleep about 15 minutes longer than on placebo. And yet these very same subjects report that they slept well. So what gives?
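To put those averages in perspective, here is a back-of-envelope calculation. The 12- and 15-minute figures come from the studies cited above; the 7-hour baseline night is my own illustrative assumption.

```python
# Back-of-envelope check on the efficacy figures quoted above.
# The 12- and 15-minute averages are from the post; the 7-hour
# baseline night is an illustrative assumption.

baseline_sleep_min = 7 * 60   # assumed typical night, in minutes
extra_sleep_min = 15          # average gain over placebo
faster_onset_min = 12         # average reduction in time to fall asleep

pct_more_sleep = 100 * extra_sleep_min / baseline_sleep_min
print(f"Extra sleep vs. placebo: {pct_more_sleep:.1f}%")  # about 3.6%
```

In other words, the measurable benefit is a few percent of a night's sleep, which makes the glowing subjective reports all the more striking.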

     Well, it so happens that a side effect of these medications is something called anterograde amnesia, a state in which you cannot form new memories. In other words, you don't remember how you slept. (In fact you may not remember all sorts of things that went on during the night, like driving around, sending emails, or eating more than you thought humanly possible. But that's another story.) These agents also have an anti-anxiety effect. Not only might this help you fall asleep, but it also minimizes the impact of not doing so.    

     One Lunesta advert has a mellifluous woman's voice cooing sympathetically, "Does your restless mind keep you from sleeping?" That is a bullseye. The American mind has reason to be restless. A decade ago, 9/11 delivered one of the greatest shocks in this country's history. What initially seemed to have the potential to unite the population and create common purpose quickly deteriorated. The country became polarized into red and blue, antagonistic camps speaking different tongues, each accusing the other of being unpatriotic, un-American. For the past 10 years we have engaged in the "War on Terror", an amorphous conflict without boundaries, without a means of assessing how we're doing, and without consensus. We do not feel safer.
Iraq and Afghanistan have not been transformed, but we have. Our belief in American ability and fair play has been compromised. Historically, national traumas have provided a fulcrum from which we have forced positive changes. None of the past decade's national nightmares, Hurricane Katrina, mortgage defaults, stock bubbles, a collapsing economy, or unemployment have been able to provide the stimulus for a gathering of transformative momentum.

     Why are sleep medications so popular? Because in the quiet darkness of our bedrooms with nothing to distract us, the mind struggles to empty itself of haunting anxieties. To swallow these pills is to change our state of consciousness. We forget. Then we sleep.



Sunday, September 18, 2011

The Insomnia Epidemic: Let There Be Light, But Not 24/7

     More than 30% of adult Americans, about 40 million people, complain of difficulty sleeping. For most of these individuals, treatment begins with medication. This tells us two things: sleep is a big problem and a big business. These two aspects of the ecology of sleep create a complicated calculus in an already enormously complex field. But I think it is possible to keep the two issues separate and tell a few good stories in parallel. It must be said that the cultural influences on the medical community in deciding what constitutes normal and disordered sleep are profound. While all medical conditions are culturally bound, the fact that sleep is a universal behavior (one that takes many forms and can be willfully modified) in addition to a biological one makes its conceptualization particularly susceptible to the vagaries of a given era's customs and beliefs.

     So how does one of the most basic biological functions become so disordered?  After all, what could be more natural than sleep?

     The first thing you notice when digging into what we know about sleep is how little we understand. The function of sleep, a state that occupies one third of our lives, remains unclear. Why is sleep necessary for our survival? Why do we dream? Sure, we have made some connections by observing what happens to people who are sleep-deprived or perform shift work. Clearly, physical and cognitive function take a hit. Medical interns working the night shift are twice as likely as others to misinterpret hospital test records, errors that could endanger their patients. The Exxon Valdez oil spill and the Three Mile Island and Chernobyl nuclear power plant accidents were attributed in large part to the compromised performance of night shift workers. We know memory and learning are impaired. Protein synthesis that produces the building blocks needed for cell growth and repair is markedly diminished. But these are crude observations, not understanding.

     The second thing you realize, and this boggles the mind, is that almost everything we do know about human sleep has been learned in the last fifty years. Unfortunately, like the first beliefs in any discipline, many of the early theories about our sleep were wrong. Until recently, humans were thought to be different from all other animals in having sleep that is consolidated into one continuous nocturnal episode. This notion of uniquely human sleep held sway until the early 1990s, when Thomas Wehr, a sleep researcher at NIMH, inadvertently stumbled onto something that changed everything, or should have.

     Wehr selected healthy untroubled sleepers who were accustomed to 16-17 hour days and 7-8 hours of sleep, a routine that many of us live by or envy because we get less sleep. He exposed them to ten hours of light and fourteen hours of dark per day and watched what happened to their sleep. This ratio of light to dark (10:14) mimics the natural light of a typical winter day in a temperate climate.  Initially they slept for 11 hours per night, suggesting a chronic sleep deficit, and then settled into an average of 8.9 hours each night. By the fourth week Wehr saw something that wasn’t supposed to happen in humans. They all developed a sleep pattern characterized by two sleep sessions. Subjects tended to lie awake for one to two hours and then fall quickly asleep.   After about 4 hours of solid sleep, they would awaken and spend one to two hours in a state of quiet wakefulness before a second four hour sleep period. 

     This bimodal sleep has been observed in many other animals.  One such creature turns out to be pre-industrial man. Only recently have anthropologists and historians scrutinized the sleep of other cultures, earlier centuries and prehistoric humans.  In the remarkably informative “At Day’s Close, Night in Times Past”, Roger Ekirch unveils nocturnal life in the pre-industrial west.   Drawing from a broad range of sources he found a trove of evidence documenting our history of bimodal sleep.  Until the late 1700s, and the widespread use of artificial light, people retired to bed soon after sun down and entered what was called “first sleep.”  They would awaken three or four hours later and enjoy a couple hours of quiet. During this time they often prayed, chatted about dreams and had sex. A French physician described this time between  sleeps as a particularly good opportunity for sexual intimacy when couples “do it better” and have “more enjoyment”.  The middle night interactions seem to have been essential for social cohesion.  This was followed by “second sleep” that again lasted 3-4 hours and ended with sunrise.

     In fact, a study of contemporary cultures across the globe reveals a wide spectrum of sleep habits. Some anthropologists now speak of three sleep cultures: monophasic cultures (the West, where one consolidated sleep period dominates), siesta cultures (where a single afternoon nap is added; the word siesta derives from the Latin sexta hora, the sixth hour) and polyphasic cultures (China, Japan, India, where multiple naps of varying lengths throughout the day are the norm).

     Researchers have replicated and expanded on Wehr's work. Several studies have taken subjects to deep underground bunkers free of any artificial light in order to observe our internal clock's rhythm. Again, they observe this biphasic pattern. Subjects sleep in two solid four-hour blocks separated by a couple of hours of meditative quiet, during which there is a remarkable surge of prolactin unseen in modern humans. The participants report feeling so awake during the day that it is as if they are experiencing true wakefulness for the first time.

     So we find ourselves in a somewhat perverse situation. We have not evolved to drift rapidly into one continuous nocturnal snooze. But according to the medical community and the pharmaceutical industry, if we don't do this, we suffer from a sleep disorder that merits medicating. Yet ask any sleep expert how some people manage to fall asleep quickly and sleep continuously for seven or eight hours, and they'll tell you that such a sleep pattern is characteristic of chronic sleep deprivation.

     We evolved in an environment of alternating light and darkness and developed internal clocks to manage in such conditions. Every known organism with two or more cells has an internal clock.  In this regard we are not unique.  It is our use of artificial light to extend our day and defy our natural rhythms that distinguishes humans. We have just begun to understand the consequences of this Promethean sin. Sleep deprivation has been linked to obesity, hypertension, insulin resistance, cardiac disease, compromised immune function and depression. In the same way that food products/supplements are replacing normal eating with dire health effects, sleep continues to be condensed by the 24/7 culture.  The recent rapid growth of a new category of medications that promote wakefulness makes one wonder if sleep will soon be optional or ultimately obsolete.    


So what are you supposed to do?

     The constraints of work schedules and family responsibilities make radical changes in sleep-wake timing difficult. Here are some guidelines:

  1. Abandon the idea of going to bed for 6-8 hours of sleep at night (unless this works for you).
  2. Get a feel for what your sleep cycle looks like. If you wake up before you need to, get up. This is probably the natural end of a cycle. You will make up for lost nighttime sleep with a nap (or naps).
  3. Napping Guidelines:
·      Timing: afternoon (3-5 PM). Naps in this window show greater sleep efficiency, more slow-wave sleep, and a shorter time to fall asleep.
·      Duration: optimally 10-20 minutes. Naps of 30 minutes or more produce post-nap grogginess (sleep inertia) that can impair cognition more than the sleep deprivation the nap was meant to relieve.
·      The full benefits of naps come with habitual napping. Stick with it!

       4.  If possible, when you feel like reaching for that afternoon caffeine fix, take a nap. 

     In the next blog I will take a look at the impact of the pharmaceutical industry on our sleep culture.



Monday, September 5, 2011

Lipitor's Legacy

     On the first day of medical school we were told that half of what we would be taught would be proven false. We just didn't know which half.  If people knew that they might take medical gospel with a grain of salt. But we all have a need to believe in experts, especially when we're sick. Trying to swim through the medical literature to find out what's what is difficult enough for the highly trained medical researcher. And sometimes things are only half wrong.

     Statins (cholesterol lowering drugs) are the most widely prescribed medications in the US, taken by over 40 million people.  The story of how this group of medications climbed to such prominence is a perfect example of a therapeutic intervention based on false assumptions that has life-saving effects.

     Coronary heart disease (clogging of the arteries that feed the heart causing heart attacks) is the principal cause of death in the developed world.  Atherosclerosis (the accumulation of plaque on the walls of the coronaries) is the primary disorder in coronary heart disease. Researchers found an association between elevated blood LDL cholesterol and increased atherosclerotic disease. Statins were observed to lower this bad cholesterol and reduce coronary heart disease. Eureka! Case closed.

     But there is no apparent association between coronary events and the level of LDL reduction. In fact many patients who achieve recommended LDL cholesterol goals still  develop the complications of atherosclerosis. So what gives?

     It so happens that statins do many things in addition to lowering cholesterol.  They reduce inflammatory responses and our tendency to form clots.  The atherosclerotic plaques that already exist in our coronaries are stabilized by statins so they don't shed particles that plug up the heart's blood vessels. And the pathological process that occurs in the tissue that lines our coronaries (endothelium) and leads to plaque formation is markedly improved by statins. Inflammation, clotting, unstable plaque, and disease of  the endothelium are the primary causes of coronary heart disease, not cholesterol.

     And that's not all. A recent study looking at statin users after 11 years suggests that they had a lower death rate from all causes.  This increased survival amongst statin users was mainly attributed to a reduction in deaths from infection and respiratory illness, not cardiovascular deaths.

     Lipitor's legacy?  We still don't know the half of it.

Saturday, September 3, 2011

Medicine's War on Dietary Fat: How America Got Fat

     The story of how dietary fat was pathologized and became the focus of US public health policy is a story about what happens when public demand for simple advice collides with the confusing ambiguity of real science. Ancel Keys, an enormously influential American scientist who studied the effect of dietary fats on health (the same fellow who developed mobile meals for the armed forces in the 1940s, the "K rations"), almost single-handedly convinced the American medical establishment of the need to decrease our fat intake. Where did this idea come from, and why did it get such traction?

     In the 1950s the US appeared to experience the start of a dramatic increase in coronary heart disease. Keys published his seminal "Seven Countries Study" in this context, demonstrating that serum cholesterol was strongly correlated with cardiovascular illness. He suggested that dietary fat was the culprit, causing an increase in cholesterol which in turn triggered heart disease. This causal chain was not accepted by many in the medical community, who cited flaws in the study's methodology. But the skeptical camp was eclipsed by the politically adept Keys. By 1982 the Keys ideology held sway, and the US Department of Agriculture, the American Medical Association, and the American Heart Association told an increasingly health-conscious public that fat consumption had to be reduced. Americans were flooded with food products pumped up with carbohydrates to replace the fat and marketed as heart-healthy foods. Unfortunately this well-intentioned intervention not only stood on false assumptions, it initiated the obesity epidemic.

     Einstein once said "Not everything that can be counted counts, and not everything that counts can be counted."  It turns out, this applies to cholesterol. Not all LDL cholesterol is created equal. In fact, LDL can be divided into two (or four) camps according to size and buoyancy. Large buoyant LDL do not cause disease and usually coexist with low triglycerides and high HDL levels. This profile is known as Pattern A. On the other hand, small dense LDL are usually found with high triglycerides and low HDL. This B Pattern does form plaques in coronary arteries and causes heart disease.

     Now for the kicker. Dietary fat generally increases large buoyant (benign) LDL. Dietary carbohydrate increases small dense LDL! In other words, the mantra dietary fat is causing heart disease and obesity is a myth.  And when your doctor gives you an LDL cholesterol reading, you don't know what it means. You need to know how much of what type of LDL (the test is available at Quest).  This is not to suggest you go on a saturated fat binge, but let's take a look at what the data tells us.

  • Reducing fat and increasing carbs (especially refined carbs) leads to weight gain and a metabolic state that favors a worsening of atherosclerotic heart disease
  • Reducing carbohydrate, but not saturated fat, improves the disordered blood fat condition
  • Despite conventional wisdom that reducing all saturated fat benefits cardiovascular health, the evidence is lacking
  • In the Seven Countries Study, the regions with the highest coronary heart disease (Finland) and the lowest (Crete) had the same total fat intake, about 40% of calories, the highest among the 16 populations studied (yes, 16; Keys only included 7!)
  • While there is no question that trans fats are associated with increased heart disease, total fat is not
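The Pattern A / Pattern B distinction described above can be caricatured in a few lines of code. The cutoffs below (150 mg/dL for triglycerides, 40 mg/dL for HDL) are illustrative assumptions of mine, not clinical criteria; real pattern typing measures LDL particle size directly.

```python
# Toy illustration of the Pattern A / Pattern B distinction described
# in the text. The numeric cutoffs are invented for illustration only;
# actual typing is done by measuring LDL particle size and density.

def lipid_pattern(triglycerides_mg_dl: float, hdl_mg_dl: float) -> str:
    """Guess the LDL pattern from the lipid profile it tends to travel with."""
    if triglycerides_mg_dl < 150 and hdl_mg_dl > 40:
        return "Pattern A (large, buoyant LDL; benign profile)"
    return "Pattern B (small, dense LDL; atherogenic profile)"

print(lipid_pattern(100, 60))  # low triglycerides, high HDL -> Pattern A
print(lipid_pattern(250, 30))  # high triglycerides, low HDL -> Pattern B
```

The point of the sketch is simply that a single "LDL number" hides which profile you have; the company the LDL keeps (triglycerides, HDL) is what hints at the pattern.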

     So over the past 3 decades, while fat consumption has steadily decreased, the prevalence of obesity and type 2 diabetes has skyrocketed and cholesterol-lowering medications have become a multibillion-dollar-per-year enterprise.

     In the next blog I will discuss statins, those cholesterol lowering medications, and why they may be really good for you. It has nothing to do with cholesterol!


Wednesday, August 24, 2011

High Fructose Corn Syrup - The Poisoning of America

   Why should you know the high fructose corn syrup (HFCS) story? Because it's an incredible parable that has it all: presidential politics, the unintended consequences of manipulating our foods, good guys and bad guys, the obesity epidemic, and the neuroscience of appetite control. If you think you're not a HFCS consumer, I'll bet you're wrong. It is added to practically all prepared food products, not just sodas and juices (breads, breakfast cereals, ketchup, cookies, ice cream, crackers, cough syrup, cottage cheeses, yogurts, applesauces, pickles, jams, fruit, salad dressings, sauces, soups, sports drinks... you get the picture).

     The tale begins with President Nixon in the early 1970s. He feared that unstable food prices (especially sugar) could cost him re-election, so he assigned his Secretary of Agriculture, Earl Butz, the task of exploring ways to produce cheap food. In 1966 a Japanese scientist had invented HFCS, a very inexpensive, very sweet, and very stable substitute for traditional sugar (sucrose). Bingo! HFCS was introduced to this country, stabilized the cost of sugar and quickly found its way into almost everything.

      Generally speaking, the sweeter a food, the more people like it. If you want to increase sales, make the product sweeter. Once there was a very cheap way of doing that, we were off to the races. Soda and juice led the way. Soft drink consumption has increased by 41% in the last 20 years. Fruit drinks have posted a 35% increase in the same period. But something curious happened. Somehow this increased sugar consumption did not translate into our feeling full. In fact, we started eating more. Our innate appetite feedback system had been circumvented.

     It is essential to understand that human physiology evolved over millions of years in an environment that provided very little sugar. In other words, we're not made to handle much of the stuff. A quick illustration of how that has changed: in the late 15th century, when Columbus introduced sugar cane to the New World, most Europeans had never eaten sugar. By 1700 the average Englishman consumed 4 pounds of sugar per year; by 1800, 18 pounds; by 1900, 90 pounds. But the United States has surpassed all other nations in this arena. The average American now consumes more than 140 pounds of sugar per year (much of it in the form of HFCS). And it shows.

     But why aren't we sated? How is it possible to knock back a 20 ounce soda that provides 240 calories and eat as much or more than we would have if we'd had 20 ounces of water?  This is where HFCS distinguishes itself.

     Our bodies control energy balance (the eating and the burning or storage of calories) through a complex feedback system of hormones and neural connections in which glucose is the primary indicator of global energy status. If there is an energy surplus, we store glucose as glycogen and make fat. If there is an energy shortage, we break down glycogen and make new glucose. When we eat, our blood glucose rises, initiating a sequence of reactions that reach higher brain centers where this "information" is processed and a behavioral response (stop eating) is triggered. Fructose does none of this. Not only does increased fructose consumption fail to produce the experience of satiety, it increases appetite!
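The glucose-versus-fructose difference in this feedback loop can be caricatured in a few lines of code. This is a deliberately crude toy model with made-up numbers, not physiology:

```python
# Toy model of the satiety feedback described above: glucose calories
# raise a "stop eating" signal; fructose calories (in this caricature)
# never register at all. All numbers are invented for illustration.

def calories_eaten(sugar: str, satiety_threshold: float = 100.0) -> int:
    signal = 0.0   # stands in for the brain's satiety signal
    eaten = 0
    for _ in range(50):              # up to fifty 20-calorie bites
        if signal >= satiety_threshold:
            break                    # higher brain centers say "stop eating"
        eaten += 20
        if sugar == "glucose":
            signal += 10.0           # each bite nudges the signal upward
        # fructose: no change in signal, so no brake on intake
    return eaten

print(calories_eaten("glucose"))   # 200 -- stops once sated
print(calories_eaten("fructose"))  # 1000 -- eats everything offered
```

The mechanism, not the numbers, is the point: with no rising signal to check against the threshold, the loop never hits its brake.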

     This is why the curves for HFCS and obesity track together. The food industry has found the perfect ingredient, sweeter than old-fashioned sugar and an appetite stimulant. We have outsmarted ourselves.

     In my next post I will look at the medical advice that fueled the obesity epidemic.



Tuesday, August 9, 2011

Drowning in Health Advice: How to Separate The Wheat From The Chaff

         The web has provided unprecedented access to medical information. That's the good news. The bad news is that it can be quite difficult to discern good data from junk. This problem can be encountered in any source: a blog, a newspaper article, a television special, a web site or a medical journal. In this post I suggest a quick, methodical approach that can give you a good idea about the validity of any medical report.

     Unfortunately we must contend not only with our incomplete understanding of medical conditions, but also with the willful manipulation of information for profit. The pool of data found on the web is contaminated by a variety of influences including pharmaceutical and nutraceutical companies, marketers, the USDA, the FDA and many others. Here's just one recent example. We were told by the authorities (who cited research papers) to stop eating fat and replace it with "heart-healthy" carbs. This resulted in an obesity and diabetes epidemic. There is controversy over such basic things as how much water to drink, how much exercise is necessary, and what diet is healthiest.

     So here's what you have to do. Go to the original research paper. Do Not Trust Someone Else's Interpretation of the data. We all have a weakness for listening to "experts" and suspending our own judgement. This is especially true when we're sick. Be skeptical. Use the 8-point check system below and you'll be on solid ground.

FOUND AT THE BEGINNING OF THE REPORT                                                               
1.  Where was it published? 
     If it's not in a peer-reviewed journal it has not been vetted by other researchers in the same field.

2.  What was the study hypothesis (was it consistent with the conclusion)?
     If the conclusion is not an answer to the question the study was designed to address, no conclusions can be drawn.

3.  What was the study design?

    Randomized Controlled Trial (RCT) 
    This is the gold standard.

    Compares an intervention in a study group with a control group
    Subjects are randomly assigned to groups
    Ideally should be double-blinded (i.e., neither the subjects nor the researchers know who is getting the placebo until completion of the study)

    Cross-sectional (one point in time) vs. longitudinal (over a defined period of time)
    Longitudinal studies are usually more telling.

    Prospective (groups defined and then observed going forward in time) vs. retrospective (looked back in time at existing groups)
    Prospective studies are usually more informative.
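
    The random-assignment step above is simple enough to sketch in code. Here is a minimal illustration in Python; the subject IDs are hypothetical, and real trials use dedicated randomization software with allocation concealment:

    ```python
    import random

    def randomize(subjects, seed=None):
        """Randomly assign subjects to treatment or placebo groups (1:1)."""
        rng = random.Random(seed)
        pool = list(subjects)
        rng.shuffle(pool)  # chance, not the researcher, decides assignment
        half = len(pool) // 2
        return {"treatment": pool[:half], "placebo": pool[half:]}

    groups = randomize(range(1, 21), seed=42)
    # In a double-blinded trial, neither subjects nor researchers see this
    # mapping until the study is complete.
    ```

    The point of shuffling is that any difference between the groups at the end of the study can be attributed to the intervention rather than to how the groups were chosen.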

4.  Sample Size (N)  THIS IS KEY

    Was a large enough population observed to give the study sufficient "power" to detect a statistically significant result?
    Studies with N < 50-100 are generally underpowered.

    Dropouts: the real N is how many subjects completed the study, not how many started.
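
    Why sample size matters can be shown with a quick simulation. This sketch uses made-up numbers (a true effect of half a standard deviation, a simple two-sample z-test) and estimates "power" as the fraction of simulated trials in which the effect is detected:

    ```python
    import math
    import random
    import statistics

    def simulated_power(n_per_group, true_effect, sd=1.0, trials=2000, seed=1):
        """Estimate power: the fraction of simulated trials whose two-sample
        z-statistic reaches two-sided significance at alpha = 0.05."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            a = [rng.gauss(0.0, sd) for _ in range(n_per_group)]
            b = [rng.gauss(true_effect, sd) for _ in range(n_per_group)]
            se = math.sqrt(statistics.pvariance(a) / n_per_group
                           + statistics.pvariance(b) / n_per_group)
            z = (statistics.mean(b) - statistics.mean(a)) / se
            if abs(z) >= 1.96:  # two-sided p < 0.05
                hits += 1
        return hits / trials

    print(simulated_power(n_per_group=20, true_effect=0.5))   # roughly a third
    print(simulated_power(n_per_group=100, true_effect=0.5))  # over 0.9
    ```

    With 20 subjects per group, the study misses a genuinely real effect most of the time; with 100 per group, it almost always finds it. An underpowered study that reports "no effect" has shown very little.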

5.  Correlational vs. Experimental Research

    Correlational: no manipulation of variables

    Experimental:  manipulate variable and measure effect

    Example:  homocysteine and CVD  

              Correlational:  observe the presence of CVD in subjects with given homocysteine levels
              Experimental:  manipulate homocysteine levels and observe the effect on CVD

              Independent Variable: the variable manipulated (homocysteine)
              Dependent Variable:  the variable observed for an effect (CVD)

6.  Statistical Significance (p-value)  THIS IS KEY 

    The probability that a relationship this strong between the variables would appear by pure chance if no real effect existed

    The higher the p-value, the weaker the evidence that the observation is real

    A p-value of 0.05 means that if there were truly no relationship, a result this strong or stronger would still turn up by chance in about 1 of every 20 repetitions of the experiment

    p >= 0.05 is conventionally considered not significant; values only slightly below 0.05 are borderline
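
    The "1 in 20" interpretation can be demonstrated by simulation. In this sketch, thousands of hypothetical "studies" each flip a fair coin 100 times; any study counting 10 or more excess heads or tails reaches roughly two-sided p = 0.05 even though no real effect exists:

    ```python
    import random

    def fluke_rate(studies=5000, flips=100, seed=3):
        """Simulate many 'studies' of a fair coin. Each study that sees 10 or
        more excess heads or tails (roughly two-sided p < 0.05) is a fluke: a
        'significant' result where no real effect exists."""
        rng = random.Random(seed)
        flukes = sum(
            1 for _ in range(studies)
            if abs(sum(rng.random() < 0.5 for _ in range(flips)) - 50) >= 10
        )
        return flukes / studies

    print(fluke_rate())  # about 0.05-0.06: roughly 1 in 20 null studies
                         # looks "significant" by chance alone
    ```

    This is also why a single "significant" result is never conclusive: run enough experiments on a non-existent effect and some of them will cross the 0.05 threshold anyway.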


7.  Meta-Analysis

    Combines the results of several studies to estimate the "effect size" (the magnitude of the effect, often expressed clinically as the number needed to treat, e.g. how many patients must be treated to save one life) more powerfully than any single study can
    Meta-analyses can be problematic because they combine studies with different designs, inclusion criteria, etc.
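
    The core arithmetic of a simple fixed-effect meta-analysis is inverse-variance weighting: each study's result is weighted by the inverse of its squared standard error, so larger, more precise studies count more. A minimal sketch, with made-up study numbers for illustration:

    ```python
    import math

    def fixed_effect_pool(effects, std_errors):
        """Inverse-variance (fixed-effect) pooling: weight each study's
        effect by 1/SE^2, so more precise studies dominate the average."""
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # Three hypothetical studies of the same treatment (effects and SEs):
    effect, se = fixed_effect_pool([0.10, 0.04, 0.07], [0.05, 0.02, 0.03])
    print(f"pooled effect {effect:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
    ```

    Note that this weighting assumes the studies are all measuring the same underlying effect, which is exactly the assumption that differing designs and inclusion criteria can undermine.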

8.  Funding THIS IS KEY  

    If the milk industry funded a report on the benefits of milk throughout the lifespan, it is suspect until repeated by a neutral investigator.

    Finding out about an author's neutrality is often made easy by the list of affiliations and disclosures reported at the end of the paper. Most reputable journals now require such reporting.