Ordinarily Well: The Case for Antidepressants

24

Trajectories

DESPITE THIS PROGRESS in understanding and using antidepressants, the turn-of-the-millennium years proved difficult for psychiatry as a profession.

The field had entered into an increased, and increasingly questionable, involvement with the pharmaceutical industry, and yet drug development had stalled. In 2002, The Lancet published an editorial titled “Just How Tainted Has Medicine Become?” Spending on prescription drugs had doubled between 1997 and 2001. As Pharma thrived, it seemed that the medical disciplines had been absorbed as subsidiaries. In the essay’s illustrative case, an editor of The British Journal of Psychiatry had been paid for his association with an “educational organisation” sponsored by the manufacturer of an antidepressant, Effexor—and he had just published a paper favorable to the drug.

Prozac’s popularity had come to be seen as a mixed blessing.

On the upside, for years public health advocates had pushed for primary-care doctors to recognize depression, and now they were treating it regularly. With well-tolerated medications available, patients were more likely to identify themselves as suffering from mood disorder and more willing to seek care. Memoirs multiplied. Surely stigma had been lessened.

But a survey begun in the early 1990s found that antidepressant use quadrupled over the following decade. That medication was being overprescribed became a commonplace. Drug companies played their part, airing cartoonish direct-to-consumer ads and hiring svelte young women to serve as drug reps to aging male physicians.

Soon antidepressant prescriptions topped 100 million annually, with sales on the order of $10 billion. The boom altered the pharmaceutical industry’s relationship to psychiatry. Long disfavored, mental health researchers briefly became royalty. Budgets of university departments doubled and doubled again, via drug company funding. With cash came influence. Outcome research long left to academics now was managed by Pharma overseers, who put their stamp on the resulting journal articles as well. Pharmaceutical houses were accused of hiding data about unsuccessful outcome trials and negative drug effects—withdrawal syndromes and suicide attempts.

Concern grew about doctors’ receiving payments from industry. Academics were found to have skirted their universities’ rules for reporting income. Ordinary practitioners who prescribed in volume were invited to conferences at resorts.

I had felt relatively insulated from drug company influence. I gave no sponsored talks. When—rarely—I attended one, I viewed the graphs skeptically. Everyone knew which researchers were tight with Pharma. In 2008, the BMJ—the new incarnation of the British Medical Journal—would put me in exclusive company. The publication named one hundred or so “independent medical experts,” authorities it considered unbiased commentators, not under Pharma’s influence. I was one of six psychiatrists on the roster.

Still, there was no escaping shame and frustration. My profession had sullied itself—and to what end, beyond personal gain? Chemicals previously set aside had been brought off the shelf and tested, and some—Remeron, Pristiq, Viibryd, and other antidepressants—had passed muster. But they were largely “me, too” drugs, similar in their effects and mechanisms of action to antidepressants already in use. Since Prozac’s early days, there had been no breakthroughs.

Meanwhile, the depressed patients who responded easily had already been treated, and psychiatrists increasingly saw the remnant for whom there was no ready remedy. Daily, the field faced the limitations of its methods.

In this troubled atmosphere, Irving Kirsch’s next salvo landed. Through a Freedom of Information Act request, he had obtained files of trials on new antidepressants that pharmaceutical houses had performed for the Food and Drug Administration. Now, he could answer the leading complaint about “Hearing Placebo,” that it was based on an arbitrary collection of research. He had a complete sample, 100 percent of the FDA studies.

In 2002, Kirsch published his conclusions, under the title “The Emperor’s New Drugs.” He found the effects of antidepressants to be lower, and the power of dummy pills higher, than he had previously estimated.

Kirsch had analyzed data on three antidepressants, Prozac, Effexor, and Serzone. Medication outperformed placebo by only two points on the Hamilton scale, the equivalent of losing one symptom. Placebo, to use Kirsch’s formulation, did 80 percent of what drugs did. A reanalysis in 2008 included trials of Paxil as well and found an effect size for the four antidepressants of 0.32, a small effect. To his credit, Kirsch included a reminder: if additivity did not apply, the medications might work better than the calculations suggested.
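To make the arithmetic concrete, with illustrative numbers of my own rather than figures from Kirsch’s papers: suppose patients on medication improve by ten Hamilton points on average and patients on placebo by eight. Then

\[
\frac{\text{placebo improvement}}{\text{drug improvement}} = \frac{8}{10} = 80\%, \qquad d = \frac{10 - 8}{\text{SD}} \approx \frac{2}{6.3} \approx 0.32,
\]

where the effect size \(d\) divides the two-point drug-over-placebo difference by the spread of scores, here assumed, plausibly for such trials, to be a bit over six Hamilton points.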

But Kirsch thought that the drugs’ reach was limited. A British health authority, the National Institute for Clinical Excellence, or NICE, had decided that Hamilton score differences of less than three and effect sizes of less than 0.5 were not clinically meaningful. In Kirsch’s analysis, antidepressants met the NICE minimum only for very severe depression. Other psychiatrists, including Jamie Horder of Oxford and Nassir Ghaemi of Tufts, later criticized Kirsch’s handling of the data. Their calculations showed efficacy for moderate levels of depression, too.

Still, the outcomes were weak, and Kirsch’s conclusions proved influential. It became commonplace for popular writing to suggest that antidepressants do not work at all—the conclusion I find dangerous. By and large, leaders in psychiatry embraced the Kirsch summary. They had to, if they had associated themselves with drug company research.

If the “Emperor” critique displayed any of the transgressive wit I had been inclined to credit Kirsch with, it was along these lines, challenging pharmacologists to accept that their pills are glorified placebos or acknowledge that much of their research was shoddy. I had no trouble embracing the second option. Many Pharma-sponsored studies are terrible—certainly for answering our How much? question. Because of the way that the FDA structures the approval process, the incentive is for drug companies to demonstrate that a new medication is minimally useful but very safe. One response to Kirsch’s analysis—the data show limited efficacy—is to say that the industry accomplished what it had set out to do.

Otherwise, I wasn’t sure that antidepressants had failed at the level that Kirsch suggested.

We’ll remember, from our discussion of Gene Glass’s work, that in meta-analysis full samples have a downside. Glass had risked letting phobia treatments dominate his results. In Kirsch’s analyses, a particular antidepressant, Serzone, played the role of exposure to snakes.

Serzone had been developed and tested because its influence on the brain’s handling of serotonin was distinctive. Its special chemistry might make Serzone less likely than SSRIs to cause diminished sex drive and other adverse effects. But the drug had floundered in outcome trials.

That pairing—repeated failed efforts, followed by drug approval—follows from the FDA’s ground rules. The FDA has long required companies to produce two favorable randomized trials. The manufacturer can initiate any number of studies in which the medication does poorly so long as, in time, two sets of good results emerge. Behind the policy is the assumption that outcome research is hard to conduct well. If a drug is shown to work and that efficacy is confirmed in a second trial, the agency will be inclined to offer physicians access to the treatment.
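A bit of arithmetic, with a hypothetical success rate of my own choosing, shows how forgiving the rule is. If each trial of a marginal drug has a 30 percent chance of producing clearly favorable results, the expected number of trials needed to accumulate two successes is

\[
E[\text{trials}] = \frac{2}{0.3} \approx 7,
\]

so a string of failures on the way to approval is just what the ground rules predict.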

The FDA has other requirements. It mistrusts high dropout rates and would be reluctant to approve antidepressants associated with suicides or suicide attempts. To retain and protect research subjects, the industry builds social support into trials, making participation especially pleasant—never mind that placebo response rates skyrocket. The goal is to ensure high patient “adherence”—sticking with the project—and to cut the risk of bad events, until finally two trials arrive in which a drug’s effects are discernible over the background noise.

Drugs are patented in the early going, so companies are always racing the patent-expiration clock. Make your mistakes fast, is the attitude. Pharma has come to emphasize patient flow, which has meant abandoning universities for drug-testing mills. These for-profit sites run multiple experiments simultaneously, attracting steady streams of research subjects through nonstop marketing enhanced by inducements for participation. The patients are atypical, even untrustworthy, but numerous enough so that trials can be completed expeditiously.

The research submitted to the FDA, in other words, is not designed to demonstrate new drugs’ optimum efficacy. It is designed to produce two successful trials quickly in settings that retain patients and avert disastrous outcomes.

Perhaps wanting to feature Serzone as having few side effects, the manufacturer began its testing with inadequate doses. Even prescribed properly, Serzone looked like a weak antidepressant.

In this instance, because the FDA wanted to give doctors an extra option, access to a medicine that acted through distinctive means, it uncharacteristically looked past full-scale Hamilton scores to subscores (like Per Bech’s) and took account of factors such as differential dropout.

Although it gained FDA approval, Serzone did not hold up in the marketplace.

After an initial period of vigorous adoption, it slipped down the list of popular antidepressants. In 2003, a watchdog group, Public Citizen, petitioned the FDA to ban Serzone, claiming that it caused liver disease. Serzone’s manufacturer, Bristol-Myers Squibb, declined to defend its drug, although it remains available as a generic.

Kirsch’s summary calculations had more patients from Serzone trials than from trials of any other drug. As Serzone kept flunking the Hamilton, the pharmaceutical house had kept running studies. When you “analyze everything,” the medication that needed the most trials will dominate.
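A sketch with invented numbers shows the mechanism. Meta-analysts typically weight each trial by its size, so the pooled effect is approximately

\[
\bar{d} = \frac{\sum_i n_i d_i}{\sum_i n_i}.
\]

If two trials of a competing drug, 200 patients each, show effects of 0.4, while six Serzone trials of the same size show effects of 0.15, the pooled estimate is \((2 \times 0.4 + 6 \times 0.15)/8 \approx 0.21\), far closer to Serzone’s performance than to the other drug’s.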

Looking at Kirsch’s second paper, if we drop the Serzone data and consider outcomes for the remaining antidepressants, then calculations like Horder’s would show about three Hamilton points’ worth of benefit overall, with Effexor and Paxil meeting the NICE standard easily and Prozac lagging.

This distinction—two points or three—may seem trivial, but in experts’ discussions it received plenty of ink. A report showing that the SSRIs satisfied one clinical-excellence criterion (but failed another) would hardly have made headlines.

As for the second benchmark, requiring effect sizes at the 0.5 level, NICE seemed to abandon it shortly after it was issued. A more usual target is 0.4, roughly equivalent to a number needed to treat between 4 and 5. In demanding more, NICE had been requiring antidepressants to test out better than do many drugs commonly used in medicine.
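For readers who want to check that equivalence, one common conversion from effect size to number needed to treat is the Kraemer–Kupfer formula, which treats the effect size as a statement about how often a treated patient outscores an untreated one. A minimal sketch, assuming that conversion is the one intended:

```python
# Convert a standardized effect size (Cohen's d) into a number needed
# to treat, via the Kraemer-Kupfer (2006) area-under-the-curve formula.
from scipy.stats import norm

def nnt_from_effect_size(d: float) -> float:
    # Probability that a randomly chosen treated patient does better
    # than a randomly chosen control patient:
    auc = norm.cdf(d / 2 ** 0.5)
    # Invert the implied success-rate difference to get the NNT:
    return 1.0 / (2.0 * auc - 1.0)

for d in (0.32, 0.4, 0.5):
    print(f"effect size {d}: NNT about {nnt_from_effect_size(d):.1f}")
# effect size 0.32: NNT about 5.6
# effect size 0.4: NNT about 4.5   (between 4 and 5, as in the text)
# effect size 0.5: NNT about 3.6
```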

The three-Hamilton-point criterion is worth thinking about. A depressed patient can shed three Hamilton points by sleeping better or by improving marginally on scattered items, say, constipation, agitation, and depressed mood. Those advantages (of drug over placebo) might be welcome, but why would a health-standards group consider them sufficient?

The answer is that the averages obscure substantial benefits to the patients for whom drugs work. Three Hamilton points’ worth of difference, drug over placebo, corresponds to many additional responses in patients on antidepressants.

In the wake of Kirsch’s “Emperor” challenge, researchers studied how symptom improvement distributes itself in drug trials. One team included John Davis—I have mentioned his early work with Jonathan Cole—a senior psychiatrist who made BMJ’s list of unbiased medical experts.

In 2012, Davis and colleagues (the lead author was Robert Gibbons, of the University of Chicago) published a summary of the complete set of Eli Lilly–sponsored trials of Prozac. The group found an advantage of Prozac over placebo of only 2.6 Hamilton points. But looking beyond averages to people, the results were reassuring. Doctors needed to treat between four and five patients to get one additional response. Data on Effexor were along similar lines.

Not every result in the Gibbons analysis was this favorable, but the overall lesson was consistent: You don’t know what antidepressants do until you examine files from individual patients. A small average difference in Hamilton points can correspond to a favorable number needed to treat.

Overall, even in these halfhearted trials, patients on antidepressants tend to find relief. Further insight into the pattern of response came from research led by John Krystal, the Psychiatry Department chair at Yale. He had access to the entire collection of Eli Lilly’s full-dose tests on Cymbalta, a newer imipramine-like antidepressant that influences the brain’s use of both serotonin and norepinephrine. The Cymbalta trials had comparators—Prozac, Paxil, or Lexapro—so Krystal got to assess commonly prescribed SSRIs, too. He asked, Given a pill, either drug or placebo, what are the odds of a depressed person’s following one or another trajectory, toward health or toward continued dysfunction?

For participants on medication—whether Cymbalta or SSRIs—Krystal found two curves, response and nonresponse.

A quarter of patients were nonresponders. They did not get even the boost enjoyed by a typical patient in a placebo arm. Their Hamilton scores stayed flat, as if the medicine had interfered with their enjoying the contents of the grab bag, such as emotional support. Apparently, these patients had adverse reactions. Like members of David Healy’s research team, they felt crummy on a given antidepressant. Adverse responders do better on dummy pills, and likely would do better on no medicine at all, than on the antidepressant being tested.

These profound nonresponses—no change—weigh down average results in the drug arms of the outcome trials. Many nonresponders are dropouts.

Three-quarters of the participants on medication fit a “trajectory responder” curve. They got steadily better.

Most people on a response trajectory were responders in the conventional sense as well: their Hamilton score dropped by half. The rest traveled along a similar but flatter curve, losing symptoms at a lesser pace. The authors suggested that with longer follow-up, patients with less dramatic trajectory responses might improve more and reach the conventional response mark.

To review: participants on medication had two trajectories, nonresponse and response—and most people on the response path got substantially better.

The placebo group had a single trajectory. Patients on placebo improved slowly, falling ever farther behind patients on the favorable medication paths. That is not to say that no one recovered on placebo. Some people lost many symptoms, and some deteriorated. But the placebo results clustered, following a single curve, as if a single sort of thing were happening to people in the trials’ control arms.

Krystal’s method revealed the pattern we have encountered repeatedly. Typically, the placebo grab bag results in a loss of scattered symptoms. Medication gives different results. On antidepressants, a quarter of participants feel so uncomfortable that they don’t notice the support that participation in a trial can bring. Three-quarters progress steadily, with most making marked progress.
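The pattern lends itself to a toy simulation, with made-up numbers of my own rather than Krystal’s data: average the flat nonresponder curve with the improving responder curve, and the drug arm’s modest mean advantage emerges, even though responders have improved dramatically.

```python
# Toy illustration of the trajectory mixture (invented numbers, not
# Krystal's data): the drug arm blends a flat nonresponder curve (25%)
# with a steadily improving responder curve (75%); the placebo arm
# follows a single, slower curve.
import numpy as np

weeks = np.arange(9)                  # weeks 0 through 8 of a trial
baseline = 24.0                       # starting Hamilton score

responders = baseline - 1.8 * weeks   # steady improvement on drug
nonresponders = np.full(9, baseline)  # flat: no change at all
placebo = baseline - 1.0 * weeks      # single, slower curve

drug_average = 0.75 * responders + 0.25 * nonresponders

for wk in (4, 8):
    gap = placebo[wk] - drug_average[wk]
    print(f"week {wk}: drug-over-placebo gap = {gap:.1f} Hamilton points")
# week 4: gap = 1.4 points; week 8: gap = 2.8 points -- a small average
# difference, though responders have shed more than 14 points by week 8.
```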

Importantly, Krystal sees what doctors see. On antidepressants, some patients improve steadily. Others do poorly and need to come off medication. Doctors don’t see averages; they see patients. In patients, the drugs work.

In retrospect, Krystal’s results add a dimension to the Dawes-Kirsch discussion of percentages. Kirsch had written that his results meant that “for a typical patient, 75% of the benefit obtained from the active drug would also have been obtained from an inactive placebo.” We can see now that the percentage formulation risks seducing us into another sort of wrong thinking. If antidepressants put some patients on a highly favorable trajectory while placebos mostly lead to scattered symptom losses, then when someone recovers on medication—when a given person’s Hamilton scores drop decisively—we would not want to say that 75 percent of that change is due to placebo effects. First, since a quarter of patients on medication see no change, placebo duplicates only 60 percent of the improvement in those (many) remaining patients who do well on medication. And second, it simply doesn’t look as if placebos (which bring about small change often) have much to do with marked responses.
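One way to reconstruct the arithmetic, using the roughly 80 percent figure from the “Emperor” analysis: if the placebo arm’s mean improvement \(P\) is about \(0.8\) times the drug arm’s mean \(D\), and a quarter of drug-arm patients contribute no change, then improvement among the remaining three-quarters averages \(D/0.75\), so

\[
\frac{P}{D/0.75} = 0.8 \times 0.75 = 0.6,
\]

the 60 percent figure.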

The trajectory model—like the reports on energy use in the brain—argues against additivity. Medication acts on its own. Presumably, patients who improve on it are not getting a placebo response plus extra help—they’re experiencing a reaction set in motion by the drug’s pharmacologic potency.

If so, clinicians are not being fooled. Failures announce themselves clearly; patients will say that they feel no better. As for successes, since practitioners give real antidepressants, their patients will not be on a placebo trajectory. When they improve, they will be on a (medication) response trajectory, at one or another slope of progress.

Irving Kirsch’s “Emperor” paper stimulated inquiries that the field should have undertaken long before, asking how average outcomes relate to individuals’ progress in drug trials. The results suggest that effect size, which Gene Glass had made a standard in mental health research, hides more than it reveals. But Glass had been right when he wrote that effect sizes in the 0.3 to 0.5 range might be reasonably equivalent and fair indicators of efficacy. Sometimes, antidepressants with effect sizes just above 0.3 help three-quarters of those who take them. In office practice, antidepressant responses might be closer than we imagine to what-you-see-is-what-you-get, with only occasional obfuscation by placebo effects.

Those results were not ones that Kirsch had championed, but the new clarity stood as a tribute to his ability to capture the field’s attention.


