Today’s article is the sixth in an ongoing series on antidepressants and depression. It’s long, so you might want to print it out or go grab a cup of tea. If you are visiting the blog for the first time, or you haven’t had a chance to read the previous articles, you might find it helpful to do so before diving into this one.

The treatment of depression with drugs is based on the enormous collective delusion that psychiatric drugs act by correcting a chemical imbalance in the brain. As a result, a large percentage of the population has been convinced to take drugs in order to deal with the problems of daily life. Everything from break-ups to job difficulties to worries about the future has been transformed into a “chemical problem”.

The myth that depression is caused by a chemical imbalance has permeated public consciousness, changing the way we view our lives and ourselves. We have become, in the words of sociologist Nikolas Rose, a society of “neurochemical selves”, recoding our moods and ills in terms of the supposed functioning of our brain chemicals and acting on ourselves in light of this belief.

This is reflected in the growing market for non-prescription products claiming to “enhance serotonin levels” in health food shops and on the Internet, and the cascade of claims that everything from chocolate to exercise makes you feel good because it “balances brain chemicals”. It also largely explains the 1300% growth between 1990 and 2000 in prescriptions of selective serotonin reuptake inhibitors (SSRIs), the most popular class of antidepressant drugs.

Yet, as I have explained in a previous article, there is no evidence to support the notion that depression is associated with an abnormality or imbalance of serotonin (or any other brain chemical), or that antidepressants work by reversing such a problem. Moreover, recent meta-analyses (Kirsch et al. 2008; Kirsch et al. 2004) suggest that antidepressants have only a small advantage over placebo, and that this advantage is most likely clinically meaningless. It has never been demonstrated that antidepressants act in a specific, disease-centered manner, nor have antidepressants been shown to be superior to other drugs with psychoactive properties (Moncrieff & Cohen, 2006).

In spite of the complete lack of evidence supporting their use, one still often hears the familiar refrain “yes, but drugs are necessary in some cases!” This statement may in fact be true, but not because drugs have been demonstrated to be effective for certain types of depression or with certain patients. Instead, drugs may be necessary in a society where traditional social support structures which play a therapeutic role have completely broken down.

Studies have shown that most individuals with a healthy social support network are able to easily handle major stressors in life. When that network is underdeveloped or non-existent, it is far more likely that depression will occur (Wade & Kendler, 2000).

It has been observed, for example, that schizophrenia and other mental disorders occur less frequently and have a much more favorable prognosis in so-called “Third World” countries than in the West (Sartorius et al. 1986). The influence of culture has been identified as an important determinant of differences in both the course and outcome of mental illness.

In developing countries strong connections between family members, kin groups and the local community are more likely to be intact. In addition, cultural, religious and spiritual beliefs in these societies provide a context in which symptoms of depression and other mental illness can be understood outside of the label of medical disease or pathology. Possession and rites of passage are two examples of such contexts.

In the West, however, these traditional support structures have been replaced by new cultural norms that do not offer support or therapeutic value to people experiencing mental distress. Among the socio-cultural factors identified by researchers as having a negative influence in Western societies are: extreme nuclearization of the family and therefore lack of support for mentally ill members of the kin group; covert rejection and social isolation of the mentally ill in spite of public assertions to the contrary; immediate sick role typing and general expectation of a chronic mental illness if a person shows an acute psychotic reaction; and the assumption that a person is insane if beliefs or behavior appear somewhat strange or “irrational”.

Therefore, in the West depression is far more likely to occur because of the breakdown of strong family and community support structures, the stigmatization of mental illness, the belief (perpetuated by drug companies) that all mental illness is “chronic”, and the lack of any cultural, religious or spiritual support for people who do not share the consensus view of reality. Statistics measuring the prevalence of depression around the world bear this out. According to the World Health Organization, if current trends continue, by the year 2020 depression will be the leading cause of disability in the West.

In contrast, in developing countries that have not yet fully adopted Western culture, transient (i.e. temporary) psychotic reactions and brief depressive episodes are more common than chronic mental illness. When an individual begins to experience distress, the surrounding family and community respond with sympathy, support and traditional therapeutic resources. Surrounded by a rich support structure, the individual is able to return relatively quickly to healthy mental functioning - without drugs.

The cultural differences in the incidence of and response to mental illness suggest something that may be entirely obvious to you but has been largely forgotten in contemporary discussions about depression: depression cannot be properly defined or understood without considering the social context in which it occurs.

In other words, depression is both an individual and a social disease.

Unsurprisingly, epidemiological evidence has tied depression to poor housing, poverty, unemployment and precarious or stressful working conditions. Imagine, for example, a single parent working two low-paying jobs to support her child, with no family or close friends nearby to help and little time to spend with her child even when she is home. Or consider a child who spends most of his days in a school that doesn’t value his style of learning, eats a steady diet of sugar and processed food, and lives with an alcoholic parent who is verbally and perhaps physically abusive. It makes perfect sense that both of these individuals could frequently feel sad, hopeless and even desperate. But are these individuals “depressed”?

Even if we agree that the intense feelings they are experiencing could be labeled as “depression”, perhaps a more relevant question might be this: is depression always a pathology? Or is it possible that much of what we call depression is simply a natural and entirely human response to certain circumstances in life?

This is exactly what Allan Horwitz and Jerome Wakefield argue in their book “The Loss of Sadness: How Psychiatry Transformed Normal Sorrow into Depressive Disorder”. The authors point out that the current epidemic of depression has been made possible by a change in the psychiatric definition of depression that allows normal sadness to be classified as a disease.

Horwitz and Wakefield define normal sadness as having three components: it is context-specific; it is of roughly proportionate intensity to the provoking loss/stimulus; and it tends to end roughly when the loss or situation ends, or else it gradually ceases as coping mechanisms adjust individuals to new circumstances.

The hypothetical examples I gave above of the single parent and the child living in an abusive home environment undoubtedly meet Horwitz and Wakefield’s criteria for “normal sadness”. The feelings occur in a specific context and are roughly proportionate to the circumstances. And though we can’t know this for sure since our examples are hypothetical, one might assume that if the conditions of their lives were more favorable they might not feel so sad, hopeless and desperate. Nevertheless, in the West today both of these individuals would almost certainly be labeled as depressed and treated with psychoactive drugs.

While I appreciate the importance of Horwitz and Wakefield’s distinction between normal sadness and depression, I believe it is incomplete. In their framework, there must be some stimulus such as the death of a loved one, the loss of a job or the end of a relationship in order for someone to “escape” the depression label. Yet such events are not the only causes of discontent.

Regardless of economic status, people in the West live in increasing isolation and alienation from each other, their communities and the natural world. Phone calls and email have replaced face-to-face interaction. The impersonality of big-box chain stores and strip mall outlets has replaced the intimacy and familiarity of the local corner store. The pace of life has become so fast that most people feel they are struggling just to get by. And even though we are far richer as a nation now, studies show that people today are not as happy as they were in the 1950s.

Sociologist Alain Ehrenberg has recently suggested that depression is a direct result of the new conceptions of individuality that have emerged in modern societies (Ehrenberg 2000). In societies that celebrate individual responsibility and personal initiative, the reciprocal of that norm of active self-fulfillment is depression - now largely defined as a pathology involving a lack of energy or an inability to perform the tasks required for work or relations with others. The continual incitements to action, to choice, to self-realization and self-improvement act as a norm in relation to which individuals govern themselves, and against which differences are judged as pathologies.

Another way to speak of this change is as an increase in psychological stress. It is difficult to accurately compare stress levels today with those of the past, but sociologists like Juliet B. Schor at Harvard University have observed that Americans (and likely people in all Western societies) are working longer hours, often for less pay, and have far less time for leisure. Since recent studies have identified a causal link between work stress and depression, one can reasonably assume that the increase in work hours together with the decrease in leisure time could very well be contributing to the epidemic of depression.

Consider a middle-class individual living in an “exurban” housing tract 100 miles from their workplace. Each day they commute for two hours in each direction, fighting traffic all the way. Their job lacks any relevance or meaning to them and is done simply to make money and survive, without any joy or satisfaction. They have little control or agency at work and spend their days performing trivial tasks that do not challenge or engage them. They do not know their neighbors, they are disconnected from nature, and perhaps they have recently gone through a painful divorce.

If this person is experiencing apathy, sadness and a lack of enthusiasm for life, does that mean they are depressed? And even if we do label their condition as “depression”, can we truly understand or treat them successfully without addressing the circumstances (or root causes) of this person’s so-called depression?

There is little doubt that the people who seek treatment for depression are suffering. But should psychological and emotional suffering always be viewed as “something to get rid of”? Despite claims made by the companies who market antidepressant drugs, suffering cannot be pulled out of the brain like a splinter from the foot. Great religious and spiritual traditions from around the world view suffering as an avenue to greater understanding of oneself, life and God. Suffering can be viewed as a signal drawing our attention to issues in our life that need to be addressed.

If we simply use chemicals to diminish these signals and numb ourselves to their effects, we lose the opportunity to grow, evolve and heal. According to world-renowned psychiatrist David Healy, when strong feelings are suppressed by rejecting them or with drugs, people become “blinded” to their own psychological or spiritual state. Psychiatric drugs blunt and confuse essential emotional signals and make it very difficult for people to know what they are really feeling. And because the pharmacological effects of drugs impair mental functioning, they can reinforce the patient’s sense of helplessness and dependence upon chemicals - even when those chemicals are preventing full recovery.

People who are depressed have lost touch with their hopes and dreams. Yet they wouldn’t be depressed if they did not still have a vision for a better life. If drugs are used to obliterate the feelings of discontent or suffering, the connection to that vision for a better life may be lost.

One might legitimately wonder, then, whether it is wise to attempt to treat such complex human and social problems with chemicals. Such a treatment strategy can only be useful if the goal is to perpetuate the status quo, to continue with “business as usual” at all costs, rather than addressing the psychosocial problems that are at the root of the discontent.

The message that drugs can cure our problems has profound consequences. Individual human beings with their unique life histories and personal characteristics have been reduced to biochemical entities, and in this way the reality of human experience and suffering is denied (Moncrieff 2008). People have come to view themselves as “victims of their own biology”, rather than as autonomous individuals with the power to make positive changes in their lives.

At another level such an exclusive focus on drug treatment allows governments and institutions to ignore the social and political reasons why so many people feel discontented with their lives. This is not surprising, of course. Both governments and corporations stand to benefit from maintaining the status quo and are often threatened by social change.

The “disease-centered” model of depression is presented as objective, unassailable fact, but it is instead an ideology (Moncrieff 2008). All forms of ideology convey a partial view of human experience and activities that is motivated by a particular interest; in this case, the interest of multinational pharmaceutical companies. The best-selling drugs today are those that are taken indefinitely. This has fueled the drug companies’ efforts to label depression as a chronic, lifelong disease, in spite of epidemiological studies which indicate that, even when untreated, depressive episodes tend to last no longer than nine months.

In her article “Disease Mongering in Drug Promotion”, Barbara Mintzes describes the effort of pharmaceutical companies to “widen the boundaries of treatable illness in order to expand markets for those who sell and deliver treatments”. This phenomenon is known as “disease mongering”, and it involves several tactics, including the introduction of new, questionable diagnoses; the promotion of drugs as the first line of defense for problems not previously considered medical; the expansion of current definitions of mental illness; and the inflation of disease prevalence rates.

In a blatant example of the last strategy, pharmaceutical companies have estimated in their promotional literature that up to one-third of people worldwide have a mental illness. This ridiculous (and in my opinion, transparent) claim is not supported anywhere in the scientific literature. Peer-reviewed studies put the figure at significantly less than 5%.

It should be obvious that drug companies would be the first to benefit from such grossly overstated estimates of the prevalence of depression. In fact, executives in the pharmaceutical industry have even admitted as much. Thirty years ago Henry Gadsden, the CEO of Merck, made some very candid comments as he approached his retirement. Suggesting that he would rather Merck be more like chewing gum maker Wrigley’s, Gadsden said that it had “long been his dream to make drugs for healthy people.”

Sadly, Gadsden’s dream has been realized with the advent of not only antidepressants, but also statins, antacids and other drugs sold to essentially healthy people. These medications are now the top-selling drugs around the world. (Gadsden’s sense of morality may have been skewed, but he certainly was a visionary businessman.)

The field of psychiatry has largely collaborated with the pharmaceutical industry in defining intense and painful emotions as “disorders”. Diagnoses like “panic disorder” and “clinical depression” give a medical aura to powerful emotions and make them seem dangerous, pathological, unnatural or out of control. In an astute observation of this state of affairs, psychiatrist Steven Sharfstein remarked in the March 2006 issue of Psychiatric News that the biopsychosocial model of depression has been replaced by the “bio-bio-bio” model.

It has now become common practice for psychiatrists to prescribe drugs on their very first visit with a patient, and to tell that patient that they will likely need to take drugs for the rest of their lives. Such a prognosis is offered in spite of the fact that no attempt has been made whatsoever to try proven, non-drug treatment alternatives such as psychotherapy and exercise!

The increasing rates of depression and poor long-term treatment outcomes clearly indicate that the current drug-centered strategy is not effective. For real progress to be made, the psychological, social, economic and political roots of depression must be addressed. This will require a coordinated effort on the part of patients, physicians, communities and politicians. It will not be easy, because we are fighting deeply entrenched beliefs about the “biochemical” nature of depression, as well as a $500 billion pharmaceutical industry that is not likely to willingly give up the $20 billion in sales represented by antidepressants.

There is no doubt that the systemic changes I am describing are far more difficult to implement than administering a drug. Nevertheless, we must begin if we hope to heal ourselves, our culture and our world.

In the final article of the series, I will present proven non-drug alternatives for treating depression. Stay tuned!

Please remember to always seek the guidance of a qualified psychiatrist when attempting to withdraw from psychoactive drugs. It is very dangerous to stop taking the drugs abruptly or to begin the withdrawal process without supervision. Psychiatrist Peter Breggin is considered to be one of the foremost experts in psychiatric drug withdrawal, and he has written a book (linked below) to help patients wean off psychiatric drugs. If you are considering stopping your medication, I recommend you read this book and discuss it with your doctor.

Recommended books

  • Your Drug May Be Your Problem: How and Why to Stop Taking Psychiatric Medications, by Peter Breggin

The popular perception that the U.S. has the highest quality of medical care in the world has been proven entirely false by several public health studies and reports over the past few years.

In 2000, the prestigious Journal of the American Medical Association published a study by Dr. Barbara Starfield, a medical doctor with a Master’s degree in Public Health, which revealed the extremely poor performance of the United States health care system when compared to other industrialized countries (Japan, Sweden, Canada, France, Australia, Spain, Finland, the Netherlands, the United Kingdom, Denmark, Belgium and Germany).

In fact, the U.S. is ranked last or near last in several significant health care indicators:

  • 13th (last) for low-birth-weight percentages
  • 13th for neonatal mortality and infant mortality overall
  • 11th for postneonatal mortality
  • 13th for years of potential life lost (excluding external causes)
  • 12th for life expectancy at 1 year for males, 11th for females
  • 12th for life expectancy at 15 years for males, 10th for females

The most shocking revelation of her report is that iatrogenic damage (defined as a state of ill health or an adverse effect resulting from medical treatment) is the third leading cause of death in the U.S., after heart disease and cancer.

Let me pause while you take that in.

This means that doctors and hospitals are responsible for more deaths each year than cerebrovascular disease, chronic respiratory diseases, accidents, diabetes, Alzheimer’s disease and pneumonia.

The annual death toll from the errors and adverse effects that make up iatrogenic damage breaks down as follows:

  • 12,000 deaths/year from unnecessary surgery
  • 7,000 deaths/year from medication errors in hospitals
  • 20,000 deaths/year from other errors in hospitals
  • 80,000 deaths/year from nosocomial infections in hospitals
  • 106,000 deaths/year from non-error adverse effects of medications

These figures add up to a total of 225,000 deaths per year from iatrogenic causes (a quick check of the arithmetic appears below). However, Starfield notes three important caveats in her study:

  • Most of the data are derived from studies in hospitalized patients
  • The estimates are for deaths only and do not include adverse effects associated with disability or discomfort
  • The estimates of death due to error are lower than those in the Institute of Medicine report (an earlier report on the number of iatrogenic deaths in the U.S.)

If these caveats are considered, the deaths due to iatrogenic causes would range from 230,000 to 284,000.
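
For anyone who wants to verify the 225,000 figure, here is a minimal Python sketch (purely illustrative; the category labels are my own shorthand for Starfield’s line items) that totals the five causes listed above:

    # Starfield's iatrogenic death estimates, in deaths per year
    iatrogenic_deaths = {
        "unnecessary surgery": 12_000,
        "medication errors in hospitals": 7_000,
        "other errors in hospitals": 20_000,
        "nosocomial infections in hospitals": 80_000,
        "non-error adverse effects of medications": 106_000,
    }

    total = sum(iatrogenic_deaths.values())
    print(f"{total:,} deaths/year")  # prints: 225,000 deaths/year

The sum confirms the 225,000 total; applying Starfield’s caveats then raises the plausible range to 230,000-284,000.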

Starfield and her colleagues performed an analysis which took the caveats above into consideration and included adverse effects other than death. Their analysis concluded that between 4% and 18% of consecutive patients experience adverse effects in outpatient settings, with:

  • 116 million extra physician visits
  • 77 million extra prescriptions
  • 17 million emergency department visits
  • 8 million hospitalizations
  • 3 million long-term admissions
  • 199,000 additional deaths
  • $77 billion in extra costs (equivalent to the aggregate cost of care of patients with diabetes)

I want to make it clear that I am not condemning physicians in general. In fact, most of the doctors I’ve come into contact with in the course of my life have been competent and genuinely concerned about my welfare. In many ways physicians are just as victimized by the deficiencies of our health-care system as patients and consumers are. With increased patient loads and mandated time limits for patient visits set by HMOs, most doctors are doing the best they can to survive our broken and corrupt health-care system.

The Institute of Medicine’s report (“To Err Is Human”), which Starfield and her colleagues analyzed, isn’t the only study to expose the failures of the U.S. health-care system. The World Health Organization issued a report in 2000, using different indicators than the IOM report, that ranked the U.S. 15th among 25 industrialized countries.

As Starfield points out, the “real explanation for relatively poor health in the United States is undoubtedly complex and multifactorial.” Two significant causes of our poor standing are over-reliance on technology and a poorly developed primary care infrastructure. The United States is second only to Japan in the availability of technological procedures such as MRIs and CAT scans. However, this has not translated into a higher standard of care, and in fact may be linked to the “cascade effect”, where diagnostic procedures lead to more treatment (which, as we have seen, can lead to more deaths).

Of the 7 countries at the top of the average health ranking, 5 have strong primary care infrastructures. Evidence indicates that the major benefit of health-care access accrues only when it facilitates receipt of primary care (Starfield, 1998).

One might think that these sobering analyses of the U.S. health-care system would have led to a public discussion and debate over how to address its shortcomings. Alas, medical authorities and the general public alike are mostly unaware of this data, and we are no closer to a safe, accessible and effective health-care system today than we were eight years ago when these reports were published.

Recommended links

  • Is US Health Really the Best in the World?
