The greater the doubt, the greater the awakening; the smaller the doubt, the smaller the awakening. No doubt, no awakening.
—C.-C. Chang, The Practice of Zen
You live through that little piece of time that is yours, but that piece of time is not only your own life, it is the summing-up of all the other lives that are simultaneous with yours. . . . What you are is an expression of History.
—Robert Penn Warren, World Enough and Time
In the late 1960s, during a year off between my first and second years of medical school, I became an accidental witness to a profound transition in the medical approach to mental suffering. I had landed a plum job as an attendant on a research ward at the Massachusetts Mental Health Center, where I was in charge of organizing recreational activities for the patients. MMHC had long been considered one of the finest psychiatric hospitals in the country, a jewel in the crown of the Harvard Medical School teaching empire. The goal of the research on my ward was to determine whether psychotherapy or medication was the best way to treat young people who had suffered a first mental breakdown diagnosed as schizophrenia.
The talking cure, an offshoot of Freudian psychoanalysis, was still the primary treatment for mental illness at MMHC. However, in the early 1950s a group of French scientists had discovered a new compound, chlorpromazine (sold under the brand name Thorazine), that could “tranquilize” patients and make them less agitated and delusional. That inspired hope that drugs could be developed to treat serious mental problems such as depression, panic, anxiety, and mania, as well as to manage some of the most disturbing symptoms of schizophrenia.
As an attendant I had nothing to do with the research aspect of the ward and was never told what treatment any of the patients was receiving. They were all close to my age—college students from Harvard, MIT, and Boston University. Some had tried to kill themselves; others cut themselves with knives or razor blades; several had attacked their roommates or had otherwise terrified their parents or friends with their unpredictable, irrational behavior. My job was to keep them involved in normal activities for college students, such as eating at the local pizza parlor, camping in a nearby state forest, attending Red Sox games, and sailing on the Charles River.
Totally new to the field, I sat in rapt attention during ward meetings, trying to decipher the patients’ complicated speech and logic. I also had to learn to deal with their irrational outbursts and terrified withdrawal. One morning I found a patient standing like a statue in her bedroom with one arm raised in a defensive gesture, her face frozen in fear. She remained there, immobile, for at least twelve hours. The doctors gave me the name for her condition, catatonia, but even the textbooks I consulted didn’t tell me what could be done about it. We just let it run its course.
TRAUMA BEFORE DAWN
I spent many nights and weekends on the unit, which exposed me to things the doctors never saw during their brief visits. When patients could not sleep, they often wandered in their tightly wrapped bathrobes into the darkened nursing station to talk. The quiet of the night seemed to help them open up, and they told me stories about having been hit, assaulted, or molested, often by their own parents, sometimes by relatives, classmates, or neighbors. They shared memories of lying in bed at night, helpless and terrified, hearing their mother being beaten by their father or a boyfriend, hearing their parents yell horrible threats at each other, hearing the sounds of furniture breaking. Others told me about fathers who came home drunk—hearing their footsteps on the landing and how they waited for them to come in, pull them out of bed, and punish them for some imagined offense. Several of the women recalled lying awake, motionless, waiting for the inevitable—a brother or father coming in to molest them.
During morning rounds the young doctors presented their cases to their supervisors, a ritual that the ward attendants were allowed to observe in silence. They rarely mentioned stories like the ones I’d heard. However, many later studies have confirmed the relevance of those midnight confessions: We now know that more than half the people who seek psychiatric care have been assaulted, abandoned, neglected, or even raped as children, or have witnessed violence in their families.1 But such experiences seemed to be off the table during rounds. I was often surprised by the dispassionate way patients’ symptoms were discussed and by how much time was spent on trying to manage their suicidal thoughts and self-destructive behaviors, rather than on understanding the possible causes of their despair and helplessness. I was also struck by how little attention was paid to their accomplishments and aspirations; whom they cared for, loved, or hated; what motivated and engaged them, what kept them stuck, and what made them feel at peace—the ecology of their lives.
A few years later, as a young doctor, I was confronted with an especially stark example of the medical model in action. I was then moonlighting at a Catholic hospital, doing physical examinations on women who’d been admitted to receive electroshock treatment for depression. Being my curious immigrant self, I’d look up from their charts to ask them about their lives. Many of them spilled out stories about painful marriages, difficult children, and guilt over abortions. As they spoke, they visibly brightened and often thanked me effusively for listening to them. Some of them wondered if they really still needed electroshock after having gotten so much off their chests. I always felt sad at the end of these meetings, knowing that the treatments that would be administered the following morning would erase all memory of our conversation. I did not last long in that job.
On my days off from the ward at MMHC, I often went to the Countway Library of Medicine to learn more about the patients I was supposed to help. One Saturday afternoon I came across a treatise that is still revered today: Eugen Bleuler’s 1911 textbook Dementia Praecox. Bleuler’s observations were fascinating:
Among schizophrenic body hallucinations, the sexual ones are by far the most frequent and the most important. All the raptures and joys of normal and abnormal sexual satisfaction are experienced by these patients, but even more frequently every obscene and disgusting practice which the most extravagant fantasy can conjure up. Male patients have their semen drawn off; painful erections are stimulated. The women patients are raped and injured in the most devilish ways. . . . In spite of the symbolic meaning of many such hallucinations, the majority of them correspond to real sensations.2
This made me wonder: Our patients had hallucinations—the doctors routinely asked about them and noted them as signs of how disturbed the patients were. But if the stories I’d heard in the wee hours were true, could it be that these “hallucinations” were in fact the fragmented memories of real experiences? Were hallucinations just the concoctions of sick brains? Could people make up physical sensations they had never experienced? Was there a clear line between creativity and pathological imagination? Between memory and imagination? These questions remain unanswered to this day, but research has shown that people who’ve been abused as children often feel sensations (such as abdominal pain) that have no obvious physical cause; they hear voices warning of danger or accusing them of heinous crimes.
There was no question that many patients on the ward engaged in violent, bizarre, and self-destructive behaviors, particularly when they felt frustrated, thwarted, or misunderstood. They threw temper tantrums, hurled plates, smashed windows, and cut themselves with shards of glass. At that time I had no idea why someone might react to a simple request (“Let me clean that goop out of your hair”) with rage or terror. I usually followed the lead of the experienced nurses, who signaled when to back off or, if that did not work, to restrain a patient. I was surprised and alarmed by the satisfaction I sometimes felt after I’d wrestled a patient to the floor so a nurse could give an injection, and I gradually realized how much of our professional training was geared to helping us stay in control in the face of terrifying and confusing realities.
Sylvia was a gorgeous nineteen-year-old Boston University student who usually sat alone in the corner of the ward, looking frightened to death and virtually mute, but whose reputation as the girlfriend of an important Boston mafioso gave her an aura of mystery. After she refused to eat for more than a week and rapidly started to lose weight, the doctors decided to force-feed her. It took three of us to hold her down, another to push the rubber feeding tube down her throat, and a nurse to pour the liquid nutrients into her stomach. Later, during a midnight confession, Sylvia spoke timidly and hesitantly about her childhood sexual abuse by her brother and uncle. I realized then our display of “caring” must have felt to her much like a gang rape. This experience, and others like it, helped me formulate this rule for my students: If you do something to a patient that you would not do to your friends or children, consider whether you are unwittingly replicating a trauma from the patient’s past.
In my role as recreation leader I noticed other things: As a group the patients were strikingly clumsy and physically uncoordinated. When we went camping, most of them stood helplessly by as I pitched the tents. We almost capsized once in a squall on the Charles River because they huddled rigidly in the lee, unable to grasp that they needed to shift position to balance the boat. In volleyball games the staff members invariably were much better coordinated than the patients. Another characteristic they shared was that even their most relaxed conversations seemed stilted, lacking the natural flow of gestures and facial expressions that are typical among friends. The relevance of these observations became clear only after I’d met the body-based therapists Peter Levine and Pat Ogden; in the later chapters I’ll have a lot to say about how trauma is held in people’s bodies.
MAKING SENSE OF SUFFERING
After my year on the research ward I resumed medical school and then, as a newly minted MD, returned to MMHC to be trained as a psychiatrist, a program to which I was thrilled to be accepted. Many famous psychiatrists had trained there, including Eric Kandel, who later won the Nobel Prize in Physiology or Medicine. Allan Hobson discovered the brain cells responsible for the generation of dreams in a lab in the hospital basement while I trained there, and the first studies on the chemical underpinnings of depression were also conducted at MMHC. But for many of us residents, the greatest draw was the patients. We spent six hours each day with them and then met as a group with senior psychiatrists to share our observations, pose our questions, and compete to make the wittiest remarks.
Our great teacher, Elvin Semrad, actively discouraged us from reading psychiatry textbooks during our first year. (This intellectual starvation diet may account for the fact that most of us later became voracious readers and prolific writers.) Semrad did not want our perceptions of reality to become obscured by the pseudocertainties of psychiatric diagnoses. I remember asking him once: “What would you call this patient—schizophrenic or schizoaffective?” He paused and stroked his chin, apparently in deep thought. “I think I’d call him Michael McIntyre,” he replied.
Semrad taught us that most human suffering is related to love and loss and that the job of therapists is to help people “acknowledge, experience, and bear” the reality of life—with all its pleasures and heartbreak. “The greatest sources of our suffering are the lies we tell ourselves,” he’d say, urging us to be honest with ourselves about every facet of our experience. He often said that people can never get better without knowing what they know and feeling what they feel.
I remember being surprised to hear this distinguished old Harvard professor confess how comforted he was to feel his wife’s bum against him as he fell asleep at night. By disclosing such simple human needs in himself he helped us recognize how basic they were to our lives. Failure to attend to them results in a stunted existence, no matter how lofty our thoughts and worldly accomplishments. Healing, he told us, depends on experiential knowledge: You can be fully in charge of your life only if you can acknowledge the reality of your body, in all its visceral dimensions.
Our profession, however, was moving in a different direction. In 1968 the American Journal of Psychiatry had published the results of the study from the ward where I’d been an attendant. They showed unequivocally that schizophrenic patients who received drugs alone had a better outcome than those who talked three times a week with the best therapists in Boston.3 This study was one of many milestones on a road that gradually changed how medicine and psychiatry approached psychological problems: from infinitely variable expressions of intolerable feelings and relationships to a brain-disease model of discrete “disorders.”
The way medicine approaches human suffering has always been determined by the technology available at any given time. Before the Enlightenment aberrations in behavior were ascribed to God, sin, magic, witches, and evil spirits. It was only in the nineteenth century that scientists in France and Germany began to investigate behavior as an adaptation to the complexities of the world. Now a new paradigm was emerging: Anger, lust, pride, greed, avarice, and sloth—as well as all the other problems we humans have always struggled to manage—were recast as “disorders” that could be fixed by the administration of appropriate chemicals.4 Many psychiatrists were relieved and delighted to become “real scientists,” just like their med school classmates who had laboratories, animal experiments, expensive equipment, and complicated diagnostic tests, and set aside the wooly-headed theories of philosophers like Freud and Jung. A major textbook of psychiatry went so far as to state: “The cause of mental illness is now considered an aberration of the brain, a chemical imbalance.”5
Like my colleagues, I eagerly embraced the pharmacological revolution. In 1973 I became the first chief resident in psychopharmacology at MMHC. I may also have been the first psychiatrist in Boston to administer lithium to a manic-depressive patient. (I’d read about John Cade’s work with lithium in Australia, and I received permission from a hospital committee to try it.) On lithium a woman who had been manic every May for the past thirty-five years, and suicidally depressed every November, stopped cycling and remained stable for the three years she was under my care. I was also part of the first U.S. research team to test the antipsychotic Clozaril on chronic patients who were warehoused in the back wards of the old insane asylums.6 Some of their responses were miraculous: People who had spent much of their lives locked in their own separate, terrifying realities were now able to return to their families and communities; patients mired in darkness and despair started to respond to the beauty of human contact and the pleasures of work and play. These amazing results made us optimistic that we could finally conquer human misery.
Antipsychotic drugs were a major factor in reducing the number of people living in mental hospitals in the United States, from over 500,000 in 1955 to fewer than 100,000 in 1996.7 For people today who did not know the world before the advent of these treatments, the change is almost unimaginable. As a first-year medical student I visited Kankakee State Hospital in Illinois and saw a burly ward attendant hose down dozens of filthy, naked, incoherent patients in an unfurnished dayroom supplied with gutters for the runoff water. This memory now seems more like a nightmare than like something I witnessed with my own eyes. My first job after finishing my residency in 1974 was as the second-to-last director of a once-venerable institution, the Boston State Hospital, which had formerly housed thousands of patients and been spread over hundreds of acres with dozens of buildings, including greenhouses, gardens, and workshops—most of them by then in ruins. During my time there patients were gradually dispersed into “the community,” the blanket term for the anonymous shelters and nursing homes where most of them ended up. (Ironically, the hospital was started as an “asylum,” a word meaning “sanctuary” that gradually took on a sinister connotation. It actually did offer a sheltered community where everybody knew the patients’ names and idiosyncrasies.) In 1979, shortly after I went to work at the VA, the Boston State Hospital’s gates were permanently locked, and it became a ghost town.
During my time at Boston State I continued to work in the MMHC psychopharmacology lab, which was now focusing on another direction for research. In the 1960s scientists at the National Institutes of Health had begun to develop techniques for isolating and measuring hormones and neurotransmitters in blood and the brain. Neurotransmitters are chemical messengers that carry information from neuron to neuron, enabling us to engage effectively with the world.
Now that scientists were finding evidence that abnormal levels of norepinephrine were associated with depression, and of dopamine with schizophrenia, there was hope that we could develop drugs that target specific brain abnormalities. That hope was never fully realized, but our efforts to measure how drugs could affect mental symptoms led to another profound change in the profession. Researchers’ need for a precise and systematic way to communicate their findings resulted in the development of the so-called Research Diagnostic Criteria, to which I contributed as a lowly research assistant. These eventually became the basis for the first systematic framework for diagnosing psychiatric problems, the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM), which is commonly referred to as the “bible of psychiatry.” The foreword to the landmark 1980 DSM-III was appropriately modest and acknowledged that this diagnostic system was imprecise—so imprecise that it never should be used for forensic or insurance purposes.8 As we will see, that modesty was tragically short-lived.
Preoccupied with so many lingering questions about traumatic stress, I became intrigued with the idea that the nascent field of neuroscience could provide some answers and started to attend the meetings of the American College of Neuropsychopharmacology (ACNP). In 1984 the ACNP offered many fascinating lectures about drug development, but it was not until a few hours before my scheduled flight back to Boston that I heard a presentation by Steven Maier of the University of Colorado, who had collaborated with Martin Seligman of the University of Pennsylvania. His topic was learned helplessness in animals. Maier and Seligman had repeatedly administered painful electric shocks to dogs who were trapped in locked cages. They called this condition “inescapable shock.”9 Being a dog lover, I realized that I could never have done such research myself, but I was curious about how this cruelty would affect the animals.
After administering several courses of electric shock, the researchers opened the doors of the cages and then shocked the dogs again. A group of control dogs who had never been shocked before immediately ran away, but the dogs who had earlier been subjected to inescapable shock made no attempt to flee, even when the door was wide open—they just lay there, whimpering and defecating. The mere opportunity to escape does not necessarily make traumatized animals, or people, take the road to freedom. Like Maier and Seligman’s dogs, many traumatized people simply give up. Rather than risk experimenting with new options they stay stuck in the fear they know.
I was riveted by Maier’s account. What they had done to these poor dogs was exactly what had happened to my traumatized human patients. They, too, had been exposed to somebody (or something) who had inflicted terrible harm on them—harm they had no way of escaping. I made a rapid mental review of the patients I had treated. Almost all had in some way been trapped or immobilized, unable to take action to stave off the inevitable. Their fight/flight response had been thwarted, and the result was either extreme agitation or collapse.
Maier and Seligman also found that traumatized dogs secreted much larger amounts of stress hormones than was normal. This supported what we were beginning to learn about the biological underpinnings of traumatic stress. A group of young researchers, among them Steve Southwick and John Krystal at Yale, Arieh Shalev at Hadassah Medical School in Jerusalem, Frank Putnam at the National Institute of Mental Health (NIMH), and Roger Pitman, later at Harvard, were all finding that traumatized people keep secreting large amounts of stress hormones long after the actual danger has passed, and Rachel Yehuda at Mount Sinai in New York confronted us with her seemingly paradoxical findings that the levels of the stress hormone cortisol are low in PTSD. Her discoveries only started to make sense when her research clarified that cortisol puts an end to the stress response by sending an all-safe signal, and that, in PTSD, the body’s stress hormones do not, in fact, return to baseline after the threat has passed.
Ideally our stress hormone system should provide a lightning-fast response to threat, but then quickly return us to equilibrium. In PTSD patients, however, the stress hormone system fails at this balancing act. Fight/flight/freeze signals continue after the danger is over, and, as in the case of the dogs, do not return to normal. Instead, the continued secretion of stress hormones is expressed as agitation and panic and, in the long term, wreaks havoc with their health.
I missed my plane that day because I had to talk with Steve Maier. His workshop offered clues not only about the underlying problems of my patients but also potential keys to their resolution. For example, he and Seligman had found that the only way to teach the traumatized dogs to get off the electric grids when the doors were open was to repeatedly drag them out of their cages so they could physically experience how they could get away. I wondered: Could we also help my patients overcome their fundamental conviction that there was nothing they could do to defend themselves? Did they, too, need physical experiences to restore a visceral sense of control? What if they could be taught to physically move to escape a potentially threatening situation similar to the trauma in which they had been trapped and immobilized? As I will discuss in part 5 of this book, on treatment, that was one of the conclusions I eventually reached.
Further animal studies involving mice, rats, cats, monkeys, and elephants brought more intriguing data.10 For example, when researchers played a loud, intrusive sound, mice that had been raised in a warm nest with plenty of food scurried home immediately. But another group, raised in a noisy nest with scarce food supplies, also ran for home, even after spending time in more pleasant surroundings.11
Scared animals return home, regardless of whether home is safe or frightening. I thought about my patients with abusive families who kept going back to be hurt again. Are traumatized people condemned to seek refuge in what is familiar? If so, why, and is it possible to help them become attached to places and activities that are safe and pleasurable?12
ADDICTED TO TRAUMA: THE PAIN OF PLEASURE AND THE PLEASURE OF PAIN
One of the things that struck my colleague Mark Greenberg and me when we ran therapy groups for Vietnam combat veterans was how, despite their feelings of horror and grief, many of them seemed to come to life when they talked about their helicopter crashes and their dying comrades. (Former New York Times correspondent Chris Hedges, who covered a number of brutal conflicts, entitled his book War Is a Force That Gives Us Meaning.13) Many traumatized people seem to seek out experiences that would repel most of us,14 and patients often complain about a vague sense of emptiness and boredom when they are not angry, under duress, or involved in some dangerous activity.
My patient Julia was brutally raped at gunpoint in a hotel room at age sixteen. Shortly thereafter she got involved with a violent pimp who prostituted her. He regularly beat her up. She was repeatedly jailed for prostitution, but she always went back to her pimp. Finally her grandparents intervened and paid for an intense rehab program. After she successfully completed inpatient treatment, she started working as a receptionist and taking courses at a local college. In her sociology class she wrote a term paper about the liberating possibilities of prostitution, for which she read the memoirs of several famous prostitutes. She gradually dropped all her other courses. A brief relationship with a classmate quickly went sour—he bored her to tears, she said, and she was repelled by his boxer shorts. She then picked up an addict on the subway who first beat her up and then started to stalk her. She finally became motivated to return to treatment when she was once again severely beaten.
Freud had a term for such traumatic reenactments: “the compulsion to repeat.” He and many of his followers believed that reenactments were an unconscious attempt to get control over a painful situation and that they eventually could lead to mastery and resolution. There is no evidence for that theory—repetition leads only to further pain and self-hatred. In fact, even reliving the trauma repeatedly in therapy may reinforce preoccupation and fixation.
Mark Greenberg and I decided to learn more about attractors—the things that draw us, motivate us, and make us feel alive. Normally attractors are meant to make us feel better. So, why are so many people attracted to dangerous or painful situations? We eventually found a study that explained how activities that cause fear or pain can later become thrilling experiences.15 In the 1970s Richard Solomon of the University of Pennsylvania had shown that the body learns to adjust to all sorts of stimuli. We may get hooked on recreational drugs because they make us feel so good right away, but activities like sauna bathing, marathon running, or parachute jumping, which initially cause discomfort and even terror, can ultimately become very enjoyable. This gradual adjustment signals that a new chemical balance has been established within the body, so that marathon runners, say, get a sense of well-being and exhilaration from pushing their bodies to the limit.
At this point, just as with drug addiction, we start to crave the activity and experience withdrawal when it’s not available. In the long run people become more preoccupied with the pain of withdrawal than with the activity itself. This theory could explain why some people hire someone to beat them, or burn themselves with cigarettes, or why they are only attracted to people who hurt them. Fear and aversion, in some perverse way, can be transformed into pleasure.
Solomon hypothesized that endorphins—the morphinelike chemicals that the brain secretes in response to stress—play a role in the paradoxical addictions he described. I thought of his theory again when my library habit led me to a paper titled “Pain in Men Wounded in Battle,” published in 1946. Having observed that 75 percent of severely wounded soldiers on the Italian front did not request morphine, a surgeon by the name of Henry K. Beecher speculated that “strong emotions can block pain.”16
Were Beecher’s observations relevant to people with PTSD? Mark Greenberg, Roger Pitman, Scott Orr, and I decided to ask eight Vietnam combat veterans if they would be willing to take a standard pain test while they watched scenes from a number of movies. The first clip we showed was from Oliver Stone’s graphically violent Platoon (1986), and while it ran we measured how long the veterans could keep their right hands in a bucket of ice water. We then repeated this process with a peaceful (and long-forgotten) movie clip. Seven of the eight veterans kept their hands in the painfully cold water 30 percent longer during Platoon. We then calculated that the amount of analgesia produced by watching fifteen minutes of a combat movie was equivalent to that produced by being injected with eight milligrams of morphine, about the same dose a person would receive in an emergency room for crushing chest pain.
We concluded that Beecher’s speculation that “strong emotions can block pain” was correct: the analgesia was the result of the release of morphinelike substances manufactured in the brain. This suggested that for many traumatized people, reexposure to stress might provide a similar relief from anxiety.17 It was an interesting experiment, but it did not fully explain why Julia kept going back to her violent pimp.
SOOTHING THE BRAIN
The 1985 ACNP meeting was, if possible, even more thought provoking than the previous year’s session. King’s College professor Jeffrey Gray gave a talk about the amygdala, a cluster of brain cells that determines whether a sound, image, or body sensation is perceived as a threat. Gray’s data showed that the sensitivity of the amygdala depended, at least in part, on the amount of the neurotransmitter serotonin in that part of the brain. Animals with low serotonin levels were hyperreactive to stressful stimuli (like loud sounds), while higher levels of serotonin dampened their fear system, making them less likely to become aggressive or frozen in response to potential threats.18
That struck me as an important finding: My patients were always blowing up in response to small provocations and felt devastated by the slightest rejection. I became fascinated by the possible role of serotonin in PTSD. Other researchers had shown that dominant male monkeys had much higher levels of brain serotonin than lower-ranking animals but that their serotonin levels dropped when they were prevented from maintaining eye contact with the monkeys they had once lorded over. In contrast, low-ranking monkeys who were given serotonin supplements emerged from the pack to assume leadership.19 The social environment interacts with brain chemistry. Manipulating a monkey into a lower position in the dominance hierarchy made his serotonin drop, while chemically enhancing serotonin elevated the rank of former subordinates.
The implications for traumatized people were obvious. Like Gray’s low-serotonin animals, they were hyperreactive, and their ability to cope socially was often compromised. If we could find ways to increase brain serotonin levels, perhaps we could address both problems simultaneously. At that same 1985 meeting I learned that drug companies were developing two new products to do precisely that, but since neither was yet available, I experimented briefly with the health-food-store supplement L-tryptophan, which is a chemical precursor of serotonin in the body. (The results were disappointing.) One of the drugs under investigation never made it to the market. The other was fluoxetine, which, under the brand name Prozac, became one of the most successful psychoactive drugs ever created.
On Monday, February 8, 1988, Prozac was released by the drug company Eli Lilly. The first patient I saw that day was a young woman with a horrendous history of childhood abuse who was now struggling with bulimia—she basically spent much of her life bingeing and purging. I gave her a prescription for this brand-new drug, and when she returned on Thursday she said, “I’ve had a very different last few days: I ate when I was hungry, and the rest of the time I did my schoolwork.” This was one of the most dramatic statements I had ever heard in my office.
On Friday I saw another patient to whom I’d given Prozac the previous Monday. She was a chronically depressed mother of two school-aged children, preoccupied with her failures as a mother and wife and overwhelmed by demands from the parents who had badly mistreated her as a child. After four days on Prozac she asked me if she could skip her appointment the following Monday, which was Presidents’ Day. “After all,” she explained, “I’ve never taken my kids skiing—my husband always does—and they are off that day. It would really be nice for them to have some good memories of us having fun together.”
This was a patient who had always struggled merely to get through the day. After her appointment I called someone I knew at Eli Lilly and said, “You have a drug that helps people to be in the present, instead of being locked in the past.” Lilly later gave me a small grant to study the effects of Prozac on PTSD in sixty-four people—twenty-two women and forty-two men—the first study of the effects of this new class of drugs on PTSD. Our Trauma Clinic team enrolled thirty-three nonveterans and my collaborators, former colleagues at the VA, enrolled thirty-one combat veterans. For eight weeks half of each group received Prozac and the other half a placebo. The study was double-blind: Neither we nor the patients knew which substance they were taking, so that our preconceptions could not skew our assessments.
Everyone in the study—even those who had received the placebo—improved, at least to some degree. Most treatment studies of PTSD find a significant placebo effect. People who screw up their courage to participate in a study for which they aren’t paid, in which they’re repeatedly poked with needles, and in which they have only a fifty-fifty chance of getting an active drug are intrinsically motivated to solve their problem. Maybe their reward is only the attention paid to them, the opportunity to respond to questions about how they feel and think. But maybe the mother’s kisses that soothe her child’s scrapes are “just” a placebo as well.
Prozac worked significantly better than the placebo for the patients from the Trauma Clinic. They slept more soundly; they had more control over their emotions and were less preoccupied with the past than those who received a sugar pill.20 Surprisingly, however, the Prozac had no effect at all on the combat veterans at the VA—their PTSD symptoms were unchanged. These results have held true for most subsequent pharmacological studies on veterans: While a few studies have shown modest improvements, most veterans have not benefited at all. I have never been able to explain this, and I cannot accept the most common explanation: that receiving a pension or disability benefits prevents people from getting better. After all, the amygdala knows nothing of pensions—it just detects threats.
Nonetheless, medications such as Prozac and related drugs like Zoloft, Celexa, Cymbalta, and Paxil have made a substantial contribution to the treatment of trauma-related disorders. In our Prozac study we used the Rorschach test to measure how traumatized people perceive their surroundings. These data gave us an important clue to how this class of drugs (formally known as selective serotonin reuptake inhibitors, or SSRIs) might work. Before taking Prozac, these patients’ emotions controlled their reactions. I think, for example, of a Dutch patient (not in the Prozac study) who came to see me to be treated for a childhood rape and who was convinced that I would rape her as soon as she heard my Dutch accent. Prozac made a radical difference: It gave PTSD patients a sense of perspective21 and helped them to gain considerable control over their impulses. Jeffrey Gray must have been right: When their serotonin levels rose, many of my patients became less reactive.
THE TRIUMPH OF PHARMACOLOGY
It did not take long for pharmacology to revolutionize psychiatry. Drugs gave doctors a greater sense of efficacy and provided a tool beyond talk therapy. Drugs also produced income and profits. Grants from the pharmaceutical industry provided us with laboratories filled with energetic graduate students and sophisticated instruments. Psychiatry departments, which had always been located in the basements of hospitals, started to move up, both in terms of location and prestige.
One symbol of this change occurred at MMHC, where in the early 1990s the hospital’s swimming pool was paved over to make space for a laboratory, and the indoor basketball court was carved up into cubicles for the new medication clinic. For decades doctors and patients had democratically shared the pleasures of splashing in the pool and passing balls down the court. I’d spent hours in the gym with patients back when I was a ward attendant. It was the one place where we all could restore a sense of physical well-being, an island in the midst of the misery we faced every day. Now it had become a place for patients to “get fixed.”
The drug revolution that started out with so much promise may in the end have done as much harm as good. The theory that mental illness is caused primarily by chemical imbalances in the brain that can be corrected by specific drugs has become broadly accepted, by the media and the public as well as by the medical profession.22 In many places drugs have displaced therapy and enabled patients to suppress their problems without addressing the underlying issues. Antidepressants can make all the difference in the world in helping with day-to-day functioning, and if it comes to a choice between taking a sleeping pill and drinking yourself into a stupor every night to get a few hours of sleep, there is no question which is preferable. For people who are exhausted from trying to make it on their own through yoga classes, workout routines, or simply toughing it out, medications often can bring life-saving relief. The SSRIs can be very helpful in making traumatized people less enslaved by their emotions, but they should only be considered adjuncts in their overall treatment.23
After conducting numerous studies of medications for PTSD, I have come to realize that psychiatric medications have a serious downside, as they may deflect attention from dealing with the underlying issues. The brain-disease model takes control over people’s fate out of their own hands and puts doctors and insurance companies in charge of fixing their problems.
Over the past three decades psychiatric medications have become a mainstay in our culture, with dubious consequences. Consider the case of antidepressants. If they were indeed as effective as we have been led to believe, depression should by now have become a minor issue in our society. Instead, even as antidepressant use continues to increase, it has not made a dent in hospital admissions for depression. The number of people treated for depression has tripled over the past two decades, and one in ten Americans now take antidepressants.24
The new generation of antipsychotics, such as Abilify, Risperdal, Zyprexa, and Seroquel, are the top-selling drugs in the United States. In 2012 the public spent $1,526,228,000 on Abilify, more than on any other medication. Number three was Cymbalta, an antidepressant that sold well over a billion dollars’ worth of pills,25 even though it has never been shown to be superior to older antidepressants like Prozac, for which much cheaper generics are available. Medicaid, the government health program for the poor, spends more on antipsychotics than on any other class of drugs.26 In 2008, the most recent year for which complete data are available, it spent $3.6 billion on antipsychotic medications, up from $1.65 billion in 1999. The number of people under the age of twenty receiving Medicaid-funded prescriptions for antipsychotic drugs tripled between 1999 and 2008. On November 4, 2013, Johnson & Johnson agreed to pay more than $2.2 billion in criminal and civil fines to settle accusations that it had improperly promoted the antipsychotic drug Risperdal to older adults, children, and people with developmental disabilities.27 But nobody is holding the doctors who prescribed them accountable.
Half a million children in the United States currently take antipsychotic drugs. Children from low-income families are four times as likely as privately insured children to receive antipsychotic medicines. These medications often are used to make abused and neglected children more tractable. In 2008, 19,045 children age five and under were prescribed antipsychotics through Medicaid.28 One study, based on Medicaid data in thirteen states, found that 12.4 percent of children in foster care received antipsychotics, compared with 1.4 percent of Medicaid-eligible children in general.29 These medications make children more manageable and less aggressive, but they also interfere with motivation, play, and curiosity, which are indispensable for maturing into a well-functioning and contributing member of society. Children who take them are also at risk of becoming morbidly obese and developing diabetes. Meanwhile, drug overdoses involving a combination of psychiatric and pain medications continue to rise.30
Because drugs have become so profitable, major medical journals rarely publish studies on nondrug treatments of mental health problems.31 Practitioners who explore such treatments are typically marginalized as “alternative.” Studies of nondrug treatments are rarely funded unless they involve so-called manualized protocols, where patients and therapists go through narrowly prescribed sequences that allow little fine-tuning to individual patients’ needs. Mainstream medicine is firmly committed to a better life through chemistry, and the fact that we can actually change our own physiology and inner equilibrium by means other than drugs is rarely considered.
ADAPTATION OR DISEASE?
The brain-disease model overlooks four fundamental truths: (1) our capacity to destroy one another is matched by our capacity to heal one another. Restoring relationships and community is central to restoring well-being; (2) language gives us the power to change ourselves and others by communicating our experiences, helping us to define what we know, and finding a common sense of meaning; (3) we have the ability to regulate our own physiology, including some of the so-called involuntary functions of the body and brain, through such basic activities as breathing, moving, and touching; and (4) we can change social conditions to create environments in which children and adults can feel safe and where they can thrive.
When we ignore these quintessential dimensions of humanity, we deprive people of ways to heal from trauma and restore their autonomy. Being a patient, rather than a participant in one’s healing process, separates suffering people from their community and alienates them from an inner sense of self. Given the limitations of drugs, I started to wonder if we could find more natural ways to help people deal with their post-traumatic responses.