July 2008

A study recently published in Human Reproduction found that intake of soy foods is associated with significantly reduced sperm counts in men.

The study is especially significant because it is the largest study in humans to examine the relationship between semen quality and phytoestrogens (plant compounds that can mimic the physiological effects of the endogenous hormone estrogen).

Dr. Jorge Chavarro of the Harvard School of Public Health and his colleagues found that men who ate the most soy food had 41 million fewer sperm per milliliter than men who did not consume soy products. Normal sperm concentration for men ranges between 80 and 120 million/ml.
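To put that figure in perspective, here's a rough back-of-the-envelope calculation in Python, using only the numbers quoted above (the two baselines are simply the endpoints of the normal range, not data from the study itself):

```python
# Rough perspective on the reported 41 million/ml reduction, using the
# normal range quoted above (80-120 million sperm per ml).
reduction = 41  # million/ml difference, highest vs. lowest soy intake

for baseline in (80, 120):
    pct = reduction / baseline * 100
    print(f"From a baseline of {baseline} million/ml: "
          f"a {pct:.0f}% drop, leaving {baseline - reduction} million/ml")
```

If the reported reduction is taken at face value, a man starting at the low end of the normal range would be left well below it.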

The association between soy food intake and sperm concentrations was even stronger in men who were overweight or obese, and 72% of study participants were. They also found the relationship between soy foods and sperm concentration was strongest in men with “normal or high” sperm counts.

Animal studies have linked the high consumption of isoflavones with infertility, but until now there has been little evidence of this effect in humans. Isoflavones are plant compounds with estrogen-like effects and are found mainly in soybeans and soy-derived products.

What is particularly revealing is that the men in the highest intake group (who had the largest sperm count reduction) had a mean soy food intake of only half a serving per day. This is equivalent to having one cup of soy milk or one serving of tofu, tempeh or soy burgers every other day!

I don’t know about you, but I happen to know quite a few people who consume a lot more soy than that on a regular basis. Sadly, many of them are children whose parents innocently believe that soy products are “healthy”. This is not their fault, of course; this erroneous and dangerous message has been aggressively promoted in the mainstream media for decades.

If the effect of such moderate servings of soy on adult males is so significant, what effect might soy foods have on developing boys who have not yet reached sexual maturity?

“Early puberty (caused by consuming soy products) may increase a boy’s chances of developing testicular cancer later in life, because it means longer exposure to sex hormones,” said University of North Carolina researcher Marcia Herman-Giddens. Congenital abnormalities of the male genital tract are also increasing, and recent studies have found a higher incidence of birth defects in the male offspring of vegetarian, soy-consuming mothers.

What about babies? Preliminary studies indicate that children given soy formula go through puberty much earlier than children who were not fed soy products. A 1994 study done in New Zealand revealed that, depending on age, potency of the product, and feeding methods, infants on soy formula might be consuming the equivalent of up to 10 contraceptive pills a day. By exposing your baby to such large amounts of hormone-like substances, you risk permanent damage to the endocrine system (pituitary gland, pineal gland, hypothalamus, thyroid, thymus, pancreas, ovaries, testes, adrenal glands).

Dr. Chavarro speculates that the increased estrogenic activity caused by consumption of soy foods may have an adverse effect on the production of sperm by interfering with other hormonal signals. This effect could be strengthened further in overweight and obese men, because men with high levels of body fat produce more estrogen than slimmer men, leading to high overall levels of estrogen in the body and reproductive organs.

If you’re wondering how soy continues to be so widely accepted and aggressively promoted as a “health food” in spite of the overwhelming evidence to the contrary, I recommend reading The Whole Soy Story by Kaayla Daniel, PhD, CCN. You can read the introduction to this eye-opening book here.

The history of soy products and their designation as a “health food” is particularly revealing, as Daniel points out:

Early soy food promotion in America aimed at two specific markets—vegetarians and the poor—soy milk and soy cereals for Seventh Day Adventists, Bac-O-Bits and meat extenders for the budget conscious. But there was a lot of soy to sell and these markets were limited. There was so much to sell because the market for processed foods had experienced explosive growth since the 1950s—and most processed foods contain soy oil. The industry found itself saddled with a waste problem, the leftover sludge from soy-oil manufacture which it could either dump or promote. The exigencies of corporate life naturally chose profit-seeking over disposal and that meant expanding the market, finding more ways to use soy ingredients in processing and convincing more people to pay money for soy-based imitation foods.

“The quickest way to gain product acceptability in the less affluent society,” said a soy-industry spokesperson back in 1975, “. . . is to have the product consumed on its own merit in a more affluent society.” Thus began the campaign to sell soy products to the upscale consumer, not as a cheap poverty food, but as a miracle substance that would prevent heart disease and cancer, whisk away hot flashes, build strong bones and keep us forever young. Soy funds for research enlisted the voices of university professors who haplessly demonized the competition—meat, milk, cheese, butter and eggs.

Soy is one of the “Big Four” cash crops in the U.S. and the funds for its marketing are enormous:

Farmers pay a fee for every bushel of soybeans they sell, and a portion of every dollar spent on Twinkies, TV dinners and the thousands of other processed foods that contain soy in one form or another ultimately goes towards the promotion of the most highly processed foods of all—imitation meat, milk, cream, cheese, yogurt, ice cream, candy bars and smoothies made from soy.

All soybean producers pay a mandatory assessment of one-half to one percent of the net market price of soybeans. The total—something like eighty million dollars annually—supports United Soybean’s program to “strengthen the position of soybeans in the market place and maintain and expand domestic and foreign markets for uses for soybeans and soybean products.”

And of course, these advertising dollars are largely responsible for creating the erroneous notion that highly processed soy foods are “healthy”:

“A survey of March 2004 health magazines reveals five-and-one-half pages of ads for products containing soy in Alternative Medicine (two of which promote soy as a solution to the problems of menopause); five-and-one-half pages in Vegetarian Times; and five pages in Yoga Journal. The ads that keep today’s health-oriented publications afloat aim at mainstream, not alternative, culture: soy milk ads feature faces of smiling children; high-protein bars create expressions of ecstasy on upside-down models; and a hostess who serves chocolate-covered soy nuts is the toast of her party.”

However, in spite of advertising and popular belief, processed soy products are not health foods. Because of their estrogenic effects, they act more like drugs in the body than foods. And as we all know, drugs can be extremely dangerous when taken irresponsibly and without indication. Millions of men, women and children around the world are “drugging” themselves daily with soy products, and the tragic irony is that this is done in the name of “health”.

Keep in mind that tofu, tempeh and soy milk are not the only sources of soy. In fact, almost all processed food has soy in it, in the form of soy oil, soy lecithin, soy flour or soy protein. Everything from your favorite corn chips to hamburger buns to mayonnaise is likely to contain a substantial amount of soy.

The most sensible approach, then, is to eliminate processed soy products from your diet and dramatically reduce or eliminate your consumption of processed food (of course there are many other reasons to do this – soy is just one).

A small amount of miso or natto or other fermented soy product as a condiment every now and then is probably not harmful. But those are not the soy products Americans tend to eat.

For more information about the dangers of soy products, please see my recent article called The Soy Ploy.

Today’s article is the sixth in an ongoing series on antidepressants and depression. It’s long, so you might want to print it out or go grab a cup of tea. If you are visiting the blog for the first time, or you haven’t had a chance to read the previous articles, you might find it helpful to do so before diving into this one.

The treatment of depression with drugs is based on the enormous collective delusion that psychiatric drugs act by correcting a chemical imbalance in the brain. As a result, a large percentage of the population has been convinced to take drugs in order to deal with the problems of daily life. Everything from break-ups to job difficulties to worries about the future have been transformed into “chemical problems”.

The myth that depression is caused by a chemical imbalance has permeated public consciousness, changing the way we view our lives and ourselves. We have become, in the words of sociologist Nikolas Rose, a society of “neurochemical selves”, recoding our moods and ills in terms of the supposed functioning of our brain chemicals and acting on ourselves in light of this belief.

This is reflected in the growing market for non-prescription products claiming to “enhance serotonin levels” in health food shops and on the Internet, and the cascade of claims that everything from chocolate to exercise makes you feel good because it “balances brain chemicals”. It also largely explains the 1300% growth between 1990 and 2000 in prescriptions of selective serotonin reuptake inhibitors (SSRIs), the most popular class of antidepressant drugs.

Yet, as I have explained in a previous article, there is no evidence to support the notion that depression is associated with an abnormality or imbalance of serotonin (or any other brain chemical), or that antidepressants work by reversing such a problem. Moreover, recent meta-analyses (Kirsch et al. 2008; Kirsch et al. 2004) suggest that antidepressants have only a small advantage over placebo, and that this advantage is most likely clinically meaningless. It has never been demonstrated that antidepressants act in a specific, disease-centered manner, nor have antidepressants been shown to be superior to other drugs with psychoactive properties (Moncrieff & Cohen, 2006).

In spite of the complete lack of evidence supporting their use, one still often hears the familiar refrain “yes, but drugs are necessary in some cases!” This statement may in fact be true, but not because drugs have been demonstrated to be effective for certain types of depression or with certain patients. Instead, drugs may be necessary in a society where traditional social support structures which play a therapeutic role have completely broken down.

Studies have shown that most individuals with a healthy social support network are able to easily handle major stressors in life. When that network is underdeveloped or non-existent, it is far more likely that depression will occur (Wade & Kendler, 2000).

It has been observed, for example, that schizophrenia and other mental disorders occur less frequently and have a much more favorable prognosis in so-called “Third World” countries than in the West (Sartorius et al. 1986). The influence of culture has been mentioned as an important determinant of differences in both the course and outcome of mental illness.

In developing countries strong connections between family members, kin groups and the local community are more likely to be intact. In addition, cultural, religious and spiritual beliefs in these societies provide a context in which symptoms of depression and other mental illness can be understood outside of the label of medical disease or pathology. Possession and rites of passage are two examples of such contexts.

In the West, however, these traditional support structures have been replaced by new cultural norms that do not offer support or therapeutic value to people experiencing mental distress. Among the socio-cultural factors identified by researchers as having a negative influence in Western societies are: extreme nuclearization of the family and therefore lack of support for mentally ill members of the kin group; covert rejection and social isolation of the mentally ill in spite of public assertions to the contrary; immediate sick role typing and general expectation of a chronic mental illness if a person shows an acute psychotic reaction; and the assumption that a person is insane if beliefs or behavior appear somewhat strange or “irrational”.

Therefore, in the West depression is far more likely to occur because of the breakdown of strong family and community support structures, the stigmatization of mental illness, the belief (perpetuated by drug companies) that all mental illness is “chronic”, and the lack of any cultural, religious or spiritual support for people who do not share the consensus view of reality. Statistics measuring the prevalence of depression around the world bear this out. According to the World Health Organization, if current trends continue, by the year 2020 depression will be the leading cause of disability in the West.

In contrast, in developing countries that have not yet fully adopted Western culture transient (i.e. temporary) psychotic reactions and brief depressive episodes are more common than chronic mental illness. When an individual begins to experience distress, the surrounding family and community respond with sympathy, support and traditional therapeutic resources. Surrounded by a rich support structure, the individual is able to return relatively quickly to healthy mental functioning – without drugs.

The cultural differences in the incidence of and response to mental illness suggest something that may be entirely obvious to you but has been largely forgotten in contemporary discussions about depression: that it cannot be properly defined or understood without considering the social context in which it occurs.

In other words, depression is both an individual and a social disease.

Unsurprisingly, epidemiological evidence has tied depression to poor housing, poverty, unemployment and precarious or stressful working conditions. Imagine, for example, a single parent working two low-paying jobs trying to support her child with no family or close friends nearby to help and little time to spend with them even if they were present. Or consider a child that spends most of his days in a school that doesn’t value his style of learning, eats a steady diet of sugar and processed food and lives with an alcoholic parent who is verbally and perhaps physically abusive. It makes perfect sense that both of these individuals could frequently feel sad, hopeless and even desperate. But are these individuals “depressed”?

Even if we agree that the intense feelings they are experiencing could be labeled as “depression”, perhaps a more relevant question might be this: is depression always a pathology? Or is it possible that much of what we call depression is simply a natural and entirely human response to certain circumstances in life?

This is exactly what Allan Horwitz and Jerome Wakefield argue in their book “The Loss of Sadness: How Psychiatry Transformed Normal Sorrow into Depressive Disorder”. The authors point out that the current epidemic of depression has been made possible by a change in the psychiatric definition of depression that allows the classification of normal sadness as a disease, even when it is not.

Horwitz and Wakefield define normal sadness as having three components: it is context-specific; it is of roughly proportionate intensity to the provoking loss/stimulus; and it tends to end roughly when the loss or situation ends, or else it gradually ceases as coping mechanisms adjust individuals to new circumstances.

The hypothetical examples I gave above of the single parent and the child living in an abusive home environment undoubtedly meet Horwitz & Wakefield’s criteria for “normal sadness”. The feelings occur in a specific context and are roughly proportionate to the circumstances. And though we can’t know this for sure since our example is hypothetical, one might assume that if the conditions of their lives were more favorable they may not feel so sad, hopeless and desperate. Nevertheless, in the West today both of these individuals would almost certainly be labeled as depressed and treated with psychoactive drugs.

While I appreciate the importance of Horwitz and Wakefield’s distinction between normal sadness and depression, I believe it is incomplete. In their framework, there must be some stimulus such as the death of a loved one, the loss of a job or the end of a relationship in order for someone to “escape” the depression label. Yet such events are not the only causes of discontent.

Regardless of economic status, people in the West live in increasing isolation and alienation from each other, their communities and the natural world. Phone and email have replaced face-to-face interaction. The impersonality of big-box chain stores and strip mall outlets has replaced the intimacy and familiarity of the local corner store. The pace of life has become so fast that most people feel they are struggling just to get by. And even though we are far richer as a nation now, studies show that people today are not as happy as they were in the 1950s.

Sociologist Alain Ehrenberg has recently suggested that depression is a direct result of the new conceptions of individuality that have emerged in modern societies (Ehrenberg 2000). In societies that celebrate individual responsibility and personal initiative, the reciprocal of that norm of active self-fulfillment is depression – now largely defined as a pathology involving a lack of energy or an inability to perform the tasks required for work or relations with others. The continual incitements to action, to choice, to self-realization and self-improvement act as a norm in relation to which individuals govern themselves, and against which differences are judged as pathologies.

Another way to speak of this change is as an increase in psychological stress. It is difficult to accurately compare stress levels today to those of the past, but sociologists like Juliet B. Schor at Harvard University have observed that Americans (and likely people in all Western societies) are working longer hours, often with less pay, and have far less time for leisure. Since recent studies have identified a causal link between work stress and depression, one can reasonably assume that the increase in work hours together with the decrease in leisure time could very well be contributing to the epidemic of depression.

Consider a middle-class individual living in an “exurban” housing tract 100 miles from their workplace. Each day they commute for two hours in each direction, fighting traffic all the way. Their job lacks any relevance or meaning to them and is simply done to make money and survive, without any joy or satisfaction. They have little control or agency at work and spend their days performing trivial tasks that do not challenge or engage them. They do not know their neighbors, they are disconnected from nature, and perhaps they have recently gone through a painful divorce.

If this person is experiencing apathy, sadness and a lack of enthusiasm for life, does that mean they are depressed? And even if we do label their condition as “depression”, can we truly understand or treat them successfully without addressing the circumstances (or root causes) of this person’s so-called depression?

There is little doubt that the people who seek treatment for depression are suffering. But should psychological and emotional suffering always be viewed as “something to get rid of”? Despite claims made by the companies who market antidepressant drugs, suffering cannot be pulled out of the brain like a splinter from the foot. Great religious and spiritual traditions from around the world view suffering as an avenue to greater understanding of oneself, life and God. Suffering can be viewed as a signal drawing our attention to issues in our life that need to be addressed.

If we simply use chemicals to diminish these signals and numb ourselves from their effects, we lose the opportunity to grow, evolve and heal. According to world-renowned psychiatrist David Healy, when strong feelings are suppressed by rejecting them or with drugs, people become “blinded” to their own psychological or spiritual state. Psychiatric drugs blunt and confuse essential emotional signals and make it very difficult for people to know what they are really feeling. And because the pharmacological effects of drugs impair mental functioning, they can reinforce the patient’s sense of helplessness and dependence upon chemicals – even when those chemicals are preventing them from full recovery.

People who are depressed have lost touch with their hopes and dreams. Yet they wouldn’t be depressed if they did not still have a vision for a better life. If drugs are used to obliterate the feelings of discontent or suffering, the connection to that vision for a better life may be lost.

One might legitimately wonder, then, whether it is wise to attempt to treat such complex human and social problems with chemicals. Such a treatment strategy can only be useful if the goal is to perpetuate the status quo, to continue with “business as usual” at all costs, rather than addressing the psychosocial problems that are at the root of the discontent.

The message that drugs can cure our problems has profound consequences. Individual human beings with their unique life histories and personal characteristics have been reduced to biochemical entities, and in this way the reality of human experience and suffering is denied (Moncrieff 2008). People have come to view themselves as “victims of their own biology”, rather than as autonomous individuals with the power to make positive changes in their lives.

At another level such an exclusive focus on drug treatment allows governments and institutions to ignore the social and political reasons why so many people feel discontented with their lives. This is not surprising, of course. Both governments and corporations stand to benefit from maintaining the status quo and are often threatened by social change.

The “disease-centered” model of depression is presented as objective, unassailable fact, but it is instead an ideology (Moncrieff 2008). All forms of ideology convey a partial view of human experience and activities that is motivated by a particular interest; in this case, the interest of multinational pharmaceutical companies. The best-selling drugs today are those that are taken indefinitely. This has fueled the drug companies’ efforts to label depression as a chronic, lifelong disease in spite of epidemiological studies which indicate that, even when untreated, depressive episodes tend to last no longer than nine months.

In her article called “Disease Mongering in Drug Promotion”, Barbara Mintzes describes the effort of pharmaceutical companies to “widen the boundaries of treatable illness in order to expand markets for those who sell and deliver treatments”. This phenomenon is known as “disease mongering”, and involves several tactics including the introduction of new, questionable diagnoses; the promotion of drugs as the first line of defense for problems not previously considered medical; the expansion of current definitions of mental illness; and the inflation of disease prevalence rates.

In a blatant example of the last strategy, pharmaceutical companies have estimated in their promotional literature that up to one-third of people worldwide have a mental illness. This ridiculous (and in my opinion, transparent) claim is not supported anywhere in the scientific literature. Peer-reviewed studies put the figure at significantly less than 5%.

It should be obvious that drug companies would be the first to benefit from such grossly overstated estimates of the prevalence of depression. In fact, executives in the pharmaceutical industry have even admitted as much. Thirty years ago Henry Gadsden, the CEO of Merck, made some very candid comments as he approached his retirement. Suggesting he’d rather Merck be more like chewing gum maker Wrigley’s, Gadsden said that it had “long been his dream to make drugs for healthy people.”

Sadly, Gadsden’s dream has been realized with the advent of not only antidepressants, but also statins, antacids and other drugs sold to essentially healthy people. These medications are now the top-selling drugs around the world. (Gadsden’s sense of morality may have been skewed, but he certainly was a visionary businessman.)

The field of psychiatry has largely collaborated with the pharmaceutical industry in defining intense and painful emotions as “disorders”. Diagnoses like “panic disorder” and “clinical depression” give a medical aura to powerful emotions and make them seem dangerous, pathological, unnatural or out of control. In an astute observation of this state of affairs, psychiatrist Steven Sharfstein remarked in the March 2006 issue of Psychiatric News that the biopsychosocial model of depression has been replaced by the “bio-bio-bio” model.

It has now become common practice for psychiatrists to prescribe drugs on their very first visit with a patient, and to tell that patient that they will likely need to take drugs for the rest of their lives. Such a prognosis is offered in spite of the fact that no attempt has been made whatsoever to try proven, non-drug treatment alternatives such as psychotherapy and exercise!

The increasing rates of depression and poor long-term treatment outcomes clearly indicate that the current drug-centered strategy is not effective. For real progress to be made, the psychological, social, economic and political roots of depression must be addressed. This will require a coordinated effort on the part of patients, physicians, communities and politicians. It will not be easy, because we are fighting deeply entrenched beliefs about the “biochemical” nature of depression as well as a $500 billion pharmaceutical industry that is not likely to willingly give up the $20 billion in sales represented by antidepressants.

There is no doubt that the systemic changes I am describing are far more difficult to implement than administering a drug. Nevertheless, we must begin if we hope to heal ourselves, our culture and our world.

In the final article of the series, I will present proven non-drug alternatives for treating depression. Stay tuned!

Please remember to always seek the guidance of a qualified psychiatrist when attempting to withdraw from psychoactive drugs. It is very dangerous to stop taking the drugs abruptly or to begin the withdrawal process without supervision. Psychiatrist Peter Breggin is considered to be one of the foremost experts in psychiatric drug withdrawal, and he has written a book (linked to below) for helping patients wean off of drugs. If you are considering stopping your medication, I recommend you read this book and discuss it with your doctor.

Recommended books

  • Your Drug May Be Your Problem: How and Why to Stop Taking Psychiatric Medications, by Peter Breggin

One of my favorite researchers, Chris Masterjohn, has just launched a new blog called “The Daily Lipid” where he writes about fats, cholesterol and health. Chris is pursuing a Ph.D. in Molecular and Cell Biology and is one of the most knowledgeable contemporary writers on cardiovascular health that I’m aware of. With his permission, I am cross-posting the first two articles from his blog – which you should definitely consider adding to your blogroll!

Statins for pregnant women?

Statin manufacturers, the sycophantic researchers they pay, and the shameless hucksters who sell them are always up to no good, but their recent attempts to market them to pregnant women are simply horrifying.

According to a recent news article published in Mail Online, researchers from Liverpool believe that taking statins during pregnancy might help women avoid caesarean sections by promoting more robust uterine contraction. They hope to begin human trials in three to five years.

Somehow, the author of this article failed to react with the shock and horror appropriate to the situation — which should be the same shock and horror with which we would react to the suggestion that pregnant women should take thalidomide to avoid morning sickness.

Back in 2004, a report in the New England Journal of Medicine showed that the use of statins in the first trimester of pregnancy was associated with birth defects, especially severe central nervous system defects and limb deformities. In fact, 20 out of 52 women exposed to statins gave birth to offspring with such defects, which represents a birth defect rate of 38 percent, nearly 20 times the background rate of birth defects!
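Those numbers are easy to sanity-check. Here's a quick sketch in Python; note that the roughly 2% background rate is back-calculated from the “nearly 20 times” comparison, not a figure reported in the study itself:

```python
# Check the reported birth defect rate among statin-exposed pregnancies.
affected, exposed = 20, 52
rate = affected / exposed
print(f"Observed rate: {rate:.1%}")  # -> 38.5%

# Back-calculate the comparison implied by "nearly 20 times":
background = 0.02  # assumed ~2% background rate, implied rather than reported
print(f"Relative to a {background:.0%} background rate: {rate / background:.1f}x")  # -> ~19x
```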

Even before this report was published, researchers already knew that statins caused birth defects in animal experiments, and the FDA already required the drugs to carry a label warning pregnant women to stay away from them. The article linked to above stated the following:

“FDA took this action because it was recognized that fetal cholesterol synthesis was essential for development, and because animals given statins during pregnancy had offspring with a variety of birth defects,” [one of the study's authors] said.

Less than a year later, Merck and Johnson & Johnson jointly asked the FDA for permission to market an over-the-counter statin. One of the concerns about the proposal was the risk to pregnant women. USA Today reported:

The FDA classifies Mevacor and other statins as pregnancy category X, which means they are not supposed to be taken by pregnant women. Not only have category X drugs been linked to fetal abnormalities in animal or human studies, but the FDA also has declared that the benefits of taking them do not outweigh potential risks.

According to the same article, Merck made a disturbing admission:

“Of course, there will be women who take it off-label,” acknowledges Merck executive Edwin Hemwall, referring to the use of non-prescription Mevacor by women under 55.

And what could prompt women to use statins during pregnancy against recommendations? Certainly a news article declaring that statins might prevent the need for caesarean sections and their associated complications could prompt some women to do so.

So what ground-breaking research made these Liverpool researchers so confident that taking statins during pregnancy (drugs associated with nearly twenty times the normal rate of major birth defects) might be a good idea, so confident that they put out a press release declaring it to the public before any trials were even under way?

Well, according to the article:

Tests have already shown that raising levels of cholesterol interferes with womb tissue’s ability to contract.

Really. Raising levels of cholesterol. You might wonder how they accomplished that. Did they use cholesterol-raising drugs? I don’t know of any drugs that do that. Did they use egg yolks, or the dreaded dietary villain — gasp — saturated fats?

No, the story is quite different.

The apparent basis for this ridiculous statin cheerleading is a 2004 study published by researchers from the University of Liverpool in the American Journal of Physiology — Cell Physiology entitled “Increased cholesterol decreases uterine activity: functional effects of cholesterol alteration in pregnant rat myometrium.”

Rather than feeding anything to pregnant women or pregnant rats, the researchers took pregnant rats and killed them. So the first thing we can say is that statins might help you deliver a baby if your doctor kills you first.

Then they extracted the uterine tissue and either extracted cholesterol from it with a chemical solvent called methyl beta-cyclodextrin, or enriched it either with cholesterol mixed with this solvent or with LDL (which they didn’t measure for oxidation prior to use). Then they added drugs to induce contraction under either cholesterol-depleted or cholesterol-enriched conditions, and found that contraction was greater under cholesterol-depleted conditions.

So now we know that — wait, what is it we know?

Well, quite clearly, we don’t know anything that we can have any confidence has any physiological relevance at all. That is, except the fact that statins cause birth defects in animals, and they increase the rate of birth defects in humans by nearly twenty times, primarily by causing severe defects of the central nervous system and limb deformities.

To add to that, we also know that the vast majority of humans conceived with Smith-Lemli-Opitz Syndrome (SLOS), a genetic inability to synthesize enough cholesterol, die of spontaneous abortion in the first 16 weeks of gestation. Those who live long enough to be born suffer from mental retardation, autism, facial and skeletal malformations, visual dysfunctions and failure to thrive.

Statins for pregnant women? I don’t think so.

Article written by Chris Masterjohn

Statins for 8-year-old children?

The American Academy of Pediatrics recently announced new recommendations for giving cholesterol-lowering drugs to children as young as eight years old. They also recommend giving low-fat milk to infants as young as one year old.

The New York Times published several articles on this, first announcing the recommendation the day the academy made it, then describing the backlash of saner doctors and other members of the public against it, and finally editorializing that while they were first “appalled” at the recommendation, after reading the report they were more dismayed at the state of our children’s health.

Concerning this frightful state of children’s health, the Times reported the following:

“We are in an epidemic,” said Dr. Jatinder Bhatia, a member of the academy’s nutrition committee who is a professor and chief of neonatology at the Medical College of Georgia in Augusta. “The risk of giving statins at a lower age is less than the benefit you’re going to get out of it.”

Dr. Bhatia said that although there was not “a whole lot” of data on pediatric use of cholesterol-lowering drugs, recent research showed that the drugs were generally safe for children.

An epidemic of what? High cholesterol? Not according to the academy’s report, which states that cholesterol levels in children declined between 1966 and 1994 and stayed the same between 1994 and 2000.

No, we are in an epidemic of obesity. As the Times reported:

But proponents say there is growing evidence that the first signs of heart disease show up in childhood, and with 30 percent of the nation’s children overweight or obese, many doctors fear that a rash of early heart attacks and diabetes is on the horizon as these children grow up.

Is there any evidence that statins lead to weight loss? If there is, I am not aware of it.

The point is immaterial, because the academy doesn’t claim to have any evidence for its position in the first place. For example, its report states the following:

Also, data supporting a particular level of childhood cholesterol that predicts risk of adult CVD do not exist, which makes the prospect of a firm evidence-based recommendation for cholesterol screening for children elusive.

And further down:

It is difficult to develop an evidence-based approach for the specific age at which pharmacologic treatment should be implemented. . . . It is not known whether there is an age at which development of the atherosclerotic process is accelerated.

In other words, they don’t know what level of cholesterol is risky or at what age it starts posing a risk, but they will nevertheless assume that some level does start to pose a risk at some age, and they will thus have to guess just what that level and that age are.

The report discusses evidence that the “metabolic syndrome” and the “recent epidemic of childhood obesity” are tied to the risk of diabetes and heart disease, and evidence that even modest weight loss of five to seven percent is sufficient to prevent diabetes. Yet somehow, instead of making a recommendation about how to lose weight more effectively, the authors derive from this data a much less logical but much more profitable conclusion: that 8-year-olds should be put on statins.

As to the recommendation to feed infants low-fat milk, the Times reported the following:

The academy also now recommends giving children low-fat milk after 12 months if a doctor is concerned about future weight problems. Although children need fat for brain development, the group says that because children often consume so much fat, low-fat milk is now appropriate.

This is rather remarkable, because the academy attributed the drop in childhood cholesterol levels to the successes of the anti-fat, anti-cholesterol campaign that began in the 1950s. But now children no longer need milkfat because they are getting plenty of fat. Well, which is it? Are they getting more fat now or less fat?

Of course, milkfat, along with liver and egg yolks, is also a source of choline, which is essential to brain development.

But even this misses the point. Cholesterol is essential to brain development!

One of the first articles I added to my section on the functions of cholesterol was an article entitled “Learning, Your Memory, and Cholesterol.” It discusses the evidence uncovered eight years ago that cholesterol is the limiting factor for the formation of synapses, which are the connections between neurons that allow learning and memory to take place.

Lowering brain levels of cholesterol can be detrimental at any age because of this, but the consequences for children — whose brains are still developing at a much more rapid rate — could be much more dire.

No doubt, most researchers and medical doctors mean well and are honestly trying to help our children. But surely someone in these drug companies must know that cholesterol is necessary for brain development, and that cholesterol-lowering drugs reduce mental performance in adults. Surely they must know that if we raise our next generation of children on statins during the critical periods of brain development, we may raise a whole generation with compromised intelligence.

And if that’s the case, are they trying to dumb us down? Sometimes it seems like that’s the case.

Article written by Chris Masterjohn

A study was just published in the New England Journal of Medicine on July 17th comparing the effectiveness and safety of three different weight loss diets. 322 moderately obese subjects were assigned to one of three diets: low-fat, restricted-calorie; Mediterranean, restricted-calorie; or low-carbohydrate, non-restricted calorie.

The rate of adherence to the study diet was 95% at year one and 85% at year two. Among the 272 participants who completed the intervention, the mean weight losses were 3.3 kg for the low-fat group, 4.6 kg for the Mediterranean-diet group, and 5.5 kg for the low-carbohydrate group.
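As a quick consistency check on those figures, here's a short Python sketch using only the numbers reported above:

```python
# The completion figure lines up with the quoted year-two adherence rate.
enrolled, completed = 322, 272
print(f"Completion rate: {completed / enrolled:.1%}")  # ~84.5%, close to the 85% year-two adherence

# Mean two-year weight losses (kg) by diet group, and the gap vs. low-fat.
losses = {"low-fat": 3.3, "Mediterranean": 4.6, "low-carbohydrate": 5.5}
for diet, kg in losses.items():
    print(f"{diet}: {kg} kg ({kg - losses['low-fat']:+.1f} kg vs. low-fat)")
```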

Perhaps more significantly, the relative reduction in the ratio of total cholesterol to HDL was 20% in the low-carbohydrate group, versus only 12% in the low-fat group. Among the 35 subjects with diabetes, changes in fasting plasma glucose and insulin levels were more favorable among those assigned to the Mediterranean diet than among those assigned to the low-fat diet.

Unfortunately, the bias against saturated fat and animal products that is still so prevalent in the mainstream (in spite of the lack of evidence to support it) prevailed in this study. The research team advised those following the low-carb diet to “choose vegetarian sources of fat and protein” and moderate their consumption of saturated fats and meat.

This suggests that the low-carb dieters may have consumed a substantial portion of their calories as fat in the form of omega-6 polyunsaturated fatty acids. Excess intake of omega-6 fatty acids contributes to a host of problems including heart disease, diabetes, and cancer; but even more relevant to this study and its results is the fact that omega-6 fatty acids can cause increased water retention. And as everyone knows, increased water retention equals increased weight.

This certainly causes me to wonder how much more dramatic the results of this study might have been if the low-carb subjects were encouraged to significantly restrict their consumption of omega-6 fats (which cause water retention, and thus weight gain) and replace them with saturated fats (which do not cause water retention). What is remarkable is that in spite of the consumption of omega-6 fats, the low-carb group still lost more weight than both the low-fat and Mediterranean groups. That’s a strong endorsement for the benefits of a low-carb diet for weight loss.

The low-carb and, to a lesser degree, Mediterranean diets also had other benefits beyond promoting weight loss and improving cholesterol measures. The level of high-sensitivity C-reactive protein decreased significantly only in the Mediterranean and low-carb groups, with the low-carb group again showing the greatest decrease (29% vs. 21%). C-reactive protein is a measure of inflammation that has been positively correlated with heart disease in recent studies. Once again, one must wonder if the reduction would have been even greater in the low-carb group had the subjects been told to restrict their intake of omega-6 fats, which are known to promote inflammation.

Another interesting finding is that although caloric intake was only restricted in the low-fat and Mediterranean diet groups, the low-carb group also ended up eating fewer calories during the diet. Many people who follow a low-carb, high protein/high fat diet find that they spontaneously eat less because additional protein, and in particular fat, leads to greater levels of satiety (satisfaction).

One limitation of the study is that it relied on self-reported dietary intake (this is true of almost every dietary study except those performed in tightly controlled conditions, such as an inpatient facility). However, the study was somewhat unique in that it was conducted in a workplace at a research center with an on-site medical clinic. It also had several other strengths. The drop-out rate was exceptionally low for a study of this kind; all participants started simultaneously; the duration was relatively long (2 years); the study group was relatively large; and the monthly measurements of weight permitted a better understanding of the weight-loss trajectory than other studies.

This week’s article in my continuing series on depression and antidepressants will examine the physiological, psychological and social consequences of antidepressant use.

Although these drugs are generally considered to be safe by the media and amongst medical professionals and patients, a close look at the evidence suggests otherwise. Antidepressants have serious and potentially fatal adverse effects, cause potentially permanent brain damage, increase the risk of suicide and violent behavior in both children and adults, and increase the frequency and chronicity of depression. Chronic use of antidepressants also promotes dependency on drugs rather than empowering people to make positive life changes, and places a tremendous burden on healthcare systems in the U.S. and abroad – but I will discuss those issues in next week’s article.

Physiological side effects

The adverse effects of antidepressants include movement disorders, agitation, sexual dysfunction, improper bone development, improper brain development, gastrointestinal bleeding, and a variety of other lesser-known problems. These are not rare events, but the most significant harm comes only after months or years of use, which leads to the false impression that antidepressants are quite safe.

More than half of those beginning an antidepressant have one of the more common side effects (Brambilla et al. 2005).

While some side effects may not carry serious health risks, others do. Gastrointestinal bleeding can become a life-threatening condition, and improper bone development in children is a serious problem that can lead to increased skeletal problems and frequent bone fractures as they age. It has been shown that serotonin exposure in young mice impairs their cerebral development (Esaki et al. 2005), and many researchers believe that the use of SSRI medications in pregnant mothers and young children may predispose children to emotional disorders later in life (Ansorge et al. 2004).

Another problem with the side effects caused by antidepressants that is often not discussed is the likelihood that additional medications will be prescribed to control them. It is well known that Prozac produces anxiety and agitation, so physicians often prescribe a sedative (typically a benzodiazepine) along with it. Since recent studies have shown that antidepressants cause gastrointestinal bleeding, doctors are starting to prescribe acid-inhibiting drugs such as Nexium to prevent this side effect. These drugs also inevitably cause side effects, which may lead to the prescription of even more drugs. (This is not uncommon, as I pointed out in last week’s article.)

Psychological side effects

Perhaps the best known psychological side effect of SSRIs is “amotivational syndrome”, a condition with symptoms that are clinically similar to those that develop when the frontal lobes of the brain are damaged. The syndrome is characterized by apathy, disinhibited behavior, demotivation and a personality change similar to the effects of lobotomy (Marangell et al. 2001, p.1059). All psychoactive drugs, including antidepressants, are known to blunt our emotional responses to some extent.

Clinical studies of SSRIs report that agitation is a common side effect. When Yale University’s Department of Psychiatry analyzed the admissions to their hospital’s psychiatric unit, they found that 8.1% of the patients were “found to have been admitted owing to antidepressant mania or psychosis” (Preda et al. 2001). Agitation is such a common side effect with SSRIs that the drug companies have consistently sought to hide it during clinical trials by prescribing a tranquilizer or sedative along with the antidepressant. Studies by Eli Lilly employees found that between 21% and 28% of patients taking Prozac experienced insomnia, agitation, anxiety, nervousness and restlessness, with the highest rates among people taking the highest doses (Beasley et al. 2001).

From their inception, antidepressants have been recognized as having a worrisome capacity to incite changes between episodes of depression (characterized by dysphoria, insomnia, low energy, poor concentration, reduced appetite and diminished libido) and episodes of mania (characterized by euphoria, increased activity, rapid speech, racing thoughts, diminished need for sleep, hypersexuality and diminished impulse control).

Several reports suggest that SSRIs are associated with movement disorders such as akathisia, parkinsonism, dystonia (acute rigidity), dyskinesia (abnormal involuntary choreic movements) and tardive dyskinesia (Gerber & Lynd 1998).

These movement disorders are serious enough on their own. However, what is even more alarming is the potential for akathisia to induce aggression and suicide. Akathisia, a condition of inner restlessness or severe agitation, is the most commonly occurring movement disorder associated with psychoactive drug use. Akathisia-related violence receives specific attention in the Diagnostic and Statistical Manual of Mental Disorders (DSM). Akathisia has been shown to increase violent behavior and suicide, and antidepressants are known to cause akathisia.

Suicide

After years of foot-dragging and thousands of excess suicides, the FDA finally admitted that “two to three children out of every hundred” could be expected to develop suicidal thoughts or actions as a result of antidepressant therapy (Harris 2004). The risk of suicide events for children receiving SSRIs has been three times higher than that of placebo (Healy 2005). Amazingly, no bans or restrictions have been placed on their use in children in the U.S.
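To make those two figures concrete, here's a rough sketch of what they jointly imply. The 2.5% event rate is my assumption (the midpoint of the FDA's “two to three out of every hundred”), combined with the threefold relative risk quoted above:

```python
# Implied numbers from the FDA admission and the Healy (2005) relative risk.
ssri_rate = 0.025      # assumed midpoint of "two to three children out of every hundred"
relative_risk = 3.0    # suicide-event risk vs. placebo (Healy 2005)

placebo_rate = ssri_rate / relative_risk
excess = ssri_rate - placebo_rate
print(f"Implied placebo rate: {placebo_rate:.1%}")              # ~0.8%
print(f"Excess risk attributable to the drug: {excess:.1%}")    # ~1.7%
print(f"Roughly one additional affected child per {1 / excess:.0f} treated")  # ~60
```

On these assumptions, roughly one in every sixty children treated develops suicidal thoughts or actions who otherwise would not have.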

While the increased risk of suicide in children has become better known, most people are unaware that a similar risk exists for adults. When adult antidepressant trials were re-analyzed to compensate for erroneous methodologies, SSRIs consistently revealed a risk of suicide (completed or attempted) two to four times higher than that of placebo (Healy 2005).

Turning short-term suffering into long-term misery

A growing body of research supports the hypothesis that antidepressants worsen the chronicity, if not severity, of depressive features in many subjects. Antidepressant therapy is often associated with the poorest outcomes. In a large, retrospective study in the Netherlands of more than 12,000 patients, antidepressant exposure was associated with the worst long-term results: 72-79% of the patients who relapsed had received antidepressants during their initial episode of depression. In contrast, only one of the patients who did not relapse had received antidepressants during or following the initial episode (Weel-Baumgarten 2000).

Longitudinal (long-term) follow-up studies show very poor outcomes for people treated for depression in both hospital and outpatient settings, and the overall prevalence of depression is rising despite increased use of antidepressants (Moncrieff & Kirsch 2006).

Epidemiological observations have long held that most episodes of depression end after three to six months. However, almost half of all Americans treated with antidepressants have remained on medication for more than a year (Antonuccio et al. 2004).

Long-term effects of antidepressants

Antidepressants have been shown to produce long-term, and in some cases, irreversible chemical and structural changes to the body and brain.

The administration of Prozac and Paxil raises cortisol levels in human subjects (Jackson 2005, p.90). Given the fact that elevated cortisol levels are associated with depression, weight gain, immune dysfunction, and memory problems, the possibility that antidepressants may contribute to prolonged elevations in cortisol is alarming to say the least.

In a study designed to investigate the anatomic effects of serotonergic compounds, researchers at Thomas Jefferson University found that high-dose, short-term exposure to SSRIs in rats was sufficient to produce swelling and kinking in the serotonin nerve fibers (Kalia 2000). Research performed by a different team of investigators demonstrated a reduction in dendritic length and dendritic spine density, and in contrast to the previous study, these changes did not reverse even after a prolonged recovery period. The results were interpreted to suggest that chronic exposure to SSRIs may arrest the normal development of neurons.

I want to emphasize that what I’ve covered here is only the beginning of the story when it comes to the adverse effects of antidepressants. There are volumes of published research and many books which present this information with much more detail. I recommend Peter Breggin’s landmark “Brain Disabling Treatments in Psychiatry” and Grace Jackson’s “Rethinking Psychiatric Drugs” as resources if you are interested in pursuing this further.

I just came across a recently published study which revealed that SSRIs (the most popular class of antidepressants) can cause gastrointestinal bleeding. The first thing I always do when reading a study is check to see who the authors are, where they receive funding from and who the sponsor is.

So you can imagine my surprise when I learned that this study, which casts antidepressants in an unfavorable light, was sponsored by a large pharmaceutical company (AstraZeneca).

Could drug companies be experiencing a change of heart? A pang of conscience? Could this mark a new era of integrity and honesty in the reporting of results from drug trials?

Not so much.

Being the skeptic that I am, I thought for a moment about why a drug company would sponsor and then publish a study investigating the side effects of antidepressants when the results are so clearly negative. We know from my previous article on conflicts of interest in the medical field that drug companies are under no obligation to publish study results – and often they do not when the results are unfavorable.

One reason came immediately to mind: what if that company happened to manufacture a drug that could be used to counter the side effects of antidepressants? And what if their study not only demonstrated the side effects of antidepressants, but also the effectiveness of their drug in mitigating or treating those side effects?

Turns out that’s exactly what’s happening here. AstraZeneca is the manufacturer of Nexium, one of the most popularly prescribed medications for heartburn. Nexium works by inhibiting the production of acid in the stomach.

Now, check out how the study was designed. There were three groups. The first group was people taking SSRIs only. The second group was people taking SSRIs and NSAIDs (ibuprofen, aspirin, etc.) and other anti-inflammatory drugs known to be harmful to the stomach lining. The third group took SSRIs along with acid-suppressing agents (agents like Nexium, for example).

People taking SSRIs were more likely to have G.I. bleeding than people not taking them, and those taking both SSRIs and anti-inflammatory drugs were even more likely to bleed than people on SSRIs alone.

But guess what? Acid suppressing agents (like, um, let’s say… Nexium) were associated with a reduced risk of upper GI bleeding in those taking SSRIs.

We can see where this is leading, right? The solution to the G.I. bleeding caused by SSRIs is not to stop taking the SSRIs. The solution is to take another drug! In this case, a drug that is manufactured by the company who sponsored the study.

Unfortunately, this vicious cycle of medication use is very common. A common scenario: someone takes an SSRI for depression, but it causes anxiety. So the doctor prescribes something for anxiety. Unfortunately, many medications for anxiety also cause constipation. But there’s a pill for that too, which the doctor also prescribes. Then the patient finds they’re getting some acid reflux (a side effect of some of the medicines for constipation), so the doctor prescribes an acid-suppressing agent.

You might be laughing (or crying) as you read this, but I am not exaggerating. This is very often how it works, especially in the elderly who now take an average of 6-8 medications every day.

And this is bound to continue to happen as long as research is primarily sponsored by pharmaceutical companies. The author of the study, Dr. García-Rodríguez, has received “unrestricted research grants from Pfizer Inc., AstraZeneca and Novartis Pharmaceuticals Group”.

There’s no way to prove that Dr. García-Rodríguez’s work is being unduly influenced by his close connection with drug companies. But common sense, as well as many published scientific studies, indicates that this is very likely. For example, several studies have shown that researchers who produce data that is contrary to the interests of the pharmaceutical industry risk legal, professional, or even personal attack – directly or indirectly financed by the industry (Boseley 2002; Healy 2002; Monbiot 2002).

Fortunately, many influential leaders are calling for changes to be made to the way medical research is performed and distributed. But they are facing the opposition of a $500 billion industry with more lobbyists than there are members of Congress. It’s not going to be easy.

This week I’d like to bring your attention to three articles I came across on the web which illustrate the utter madness of mainstream medicine and nutrition.

The first article, “Beware of New Media Brainwashing About High Fructose Corn Syrup”, appeared on Mercola.com, a health advocacy site run by Dr. Joseph Mercola which I recommend. I agree with Dr. Mercola on most things, and even when we don’t agree the differences are relatively minor.

In his article Mercola warns consumers that The Corn Refiners Association is spending $20 to $30 million on an advertising campaign to “rehabilitate” the reputation of high fructose corn syrup (HFCS), claiming that the product is “no worse for you than sugar.”

HFCS is now the #1 source of calories for children in the U.S., a staggering fact given that research has clearly linked HFCS to obesity, diabetes, metabolic syndrome, high triglycerides, liver disease and more. On top of that, HFCS is almost always made with genetically modified corn.

Head on over to Mercola.com to read the rest of the article and learn why you and your children should be avoiding HFCS as much as possible. HFCS is found primarily in processed foods (in everything from hamburger buns to soda), so if you follow my general recommendation of eating a whole-foods diet you should have no trouble avoiding it.

The second article, “8-Year-Olds on Statins? A New Plan Quickly Bites Back,” was published in the New York Times on July 8. It describes new guidelines issued by the American Academy of Pediatrics recommending that statin drugs be prescribed to children as young as 8 years old!

While some doctors applauded the idea (which is incomprehensible to me), others were “incredulous”. Why? Because there is absolutely no evidence that treating children with statins will prevent heart attacks or reduce mortality from heart disease. Furthermore, there are no data on the possible side effects of taking statins for 40 or 50 years. And since statins have caused cancer in several animal studies, we cannot assume there is no such risk in humans, especially with such long-term use of the drugs.

If you’re not familiar with the dangers of statin drugs, I suggest you read my recent article “The Truth About Statin Drugs“. Not only are statins nowhere near as effective as claimed, they have serious adverse effects and risks – including death.

What’s more, statins have been neither studied nor approved for use in children. In other words, the American Academy of Pediatrics wants to perform an uncontrolled experiment with statin drugs on our children. This is completely unacceptable in light of what we already know about these drugs.

This is yet another obvious example of how the massive conflicts of interest in the medical field, which I described in a previous article, cloud the judgment of otherwise well-meaning physicians and health organizations.

Head over to the New York Times to read the rest of the article.

The third article, “Popular Fish, Tilapia, Contains Potentially Dangerous Fatty Acid Combination,” which appeared on ScienceDaily.com, revealed that farm-raised tilapia has very low levels of beneficial omega-3 fatty acids and, even worse, very high levels of omega-6 fatty acids.

This is particularly troublesome because tilapia has become one of the most highly consumed fish in the U.S. (mostly due to its low price), and that trend is expected to continue through 2010.

Researchers have found that tilapia has higher levels of omega-6 fatty acids than doughnuts. That’s scary.

The health risks of excessive amounts of omega-6 fatty acids in the diet are well established. In short, they are significant contributors to both inflammation and oxidative damage in the body. Inflammation and oxidative damage are major risk factors for heart disease, diabetes, cancer and many other diseases.

Wild-caught oily fish, on the other hand, contain a favorable ratio of omega-3 to omega-6 fatty acids and may actually protect against inflammation and oxidative damage.

So what’s the problem with tilapia, you ask? The problem is that they are raised on “fish farms,” where they are fed inexpensive corn-based feeds containing short-chain omega-6 fatty acids that the fish convert and store in their tissues. While this practice has kept the price of tilapia low, it has also transformed it into a toxic food.

Repeat after me: fish don’t eat corn. Fish don’t eat corn. Fish don’t eat corn.

(Cows don’t normally eat chicken parts, gummi bears and garbage, either; but they do in commercial feedlots where most of the meat in the U.S. is produced. I’ll save that for another day, though.)

What all of these articles have in common is 1) further evidence of the rampant conflicts of interest in our medical care system, 2) the complete lack of an objective, independent regulatory body that can protect consumers from the malfeasance of Big Pharma and Big Agribusiness, and 3) the general departure from common sense and traditional wisdom when it comes to health care and nutrition.

It’s absolute madness.

In a recent post, I discussed the consequences of the massive conflicts of interest that exist between researchers, doctors and the pharmaceutical industry in the U.S. and abroad.

On June 8th the New York Times published an article underscoring these consequences and illuminating the risks that inevitably come with financial ties between researchers and drug companies.

The article revealed that Dr. Joseph Biederman, a world-renowned child psychiatrist at Harvard, accepted at least $1.6 million in consulting fees from drug makers from 2000 to 2007 but did not disclose any of this income to university officials. By failing to report this income, Dr. Biederman and colleagues may have violated both federal and university research rules designed to prevent conflicts of interest.

Dr. Biederman is one of the most influential researchers in child psychiatry. Although many of his studies are small and often financed by pharmaceutical companies, his work has nevertheless directly contributed to a controversial 40-fold increase from 1994 to 2003 in the diagnosis of pediatric bipolar disorder and a concurrent rise in the use of powerful antipsychotic medicines in children.

We know from my previous post that studies funded by pharmaceutical companies are more likely to show positive results for the drug being studied. We also know that recent studies have called into question the veracity of the clinical trials on which FDA approval of new drugs is based, citing three major flaws: conflicts of interest on the part of investigators (like Dr. Biederman); inappropriate involvement of research sponsors (the drug companies) in study design and management; and publication bias in disseminating results (if a study has negative results, the drug company simply doesn’t publish it).

When a researcher like Dr. Biederman is paid millions by a drug company to study its product, we must wonder whether we can expect his work to be objective and accurate. But when that researcher repeatedly lies about the money he received, the integrity of his work should be in serious doubt.

In one revealing example, Dr. Biederman reported no income from Johnson & Johnson for 2001 in a disclosure report filed with Harvard University. When asked to check again, he said he had received $3,500. But Johnson & Johnson told Congressional investigators that Dr. Biederman was paid $58,169 in 2001.

The consulting arrangements of Dr. Biederman’s entire research group at Harvard were already controversial because of the researchers’ advocacy of unapproved (“off-label”) uses of psychiatric medicines in children. Dr. Biederman and his colleagues have promoted the aggressive diagnosis and treatment of childhood bipolar disorder with antipsychotic drugs, although these drugs have never been approved for that use. In fact, neuroleptic drugs have not been approved for use in children at all.

As a result of Dr. Biederman’s promotion of both the diagnosis and treatment for childhood bipolar disorder, antipsychotic drug use in children has exploded. Roughly half a million children and teenagers were given at least one prescription for an antipsychotic in 2007, including 20,500 under 6 years of age, according to Medco Health Solutions, a pharmacy benefit manager.

The dramatic increase in antipsychotic prescriptions for children has occurred despite the lack of evidence that these medications improve children’s lives over time. On the contrary, it is well known that children are susceptible to the weight gain and metabolic problems caused by the drugs. Children typically gain twice as much weight in their first six months on atypical neuroleptic drugs (risperidone, olanzapine, etc.) as they would through normal growth, adding an average of 2 to 3 inches to their waistlines. This is mostly abdominal fat, which also increases their risk of diabetes and heart disease.

There is also some evidence suggesting that these drugs may cause permanent changes to the structure and function of the brain (Breggin, 1997). In other words, they cause brain damage.

The research of Dr. Biederman’s group, which has served as the basis for the rise in bipolar diagnoses and antipsychotic use in children, has been widely criticized by other psychiatrists and researchers.

The studies published by Dr. Biederman’s group were so small and so “loosely” designed that they were largely inconclusive. In some studies testing antipsychotic drugs, the group defined improvement as a decline of 30 percent or more on a scale called the Young Mania Rating Scale, well below the 50 percent change most researchers use as the standard. To put that in concrete terms: for a child with a baseline score of 40, a drop to just 28 would count as “improvement” by the group’s criterion, whereas the conventional standard would require a drop to 20 or below.

Controlling for bias in these types of studies is particularly important, given that the scale is subjective and depends on reports from physicians, parents and children.

More broadly, psychiatrists have said that revelations of undisclosed payments from drug makers to leading researchers are especially damaging for psychiatry.

“The price we pay for these kinds of revelations is credibility, and we just can’t afford to lose any more of that in this field,” said Dr. E. Fuller Torrey, executive director of the Stanley Medical Research Institute, which finances psychiatric studies. “In the area of child psychiatry in particular, we know much less than we should, and we desperately need research that is not influenced by industry money.”

I couldn’t have said it better myself.
