Ordinarily Well: The Case for Antidepressants

22

Two Plus Two

TO SEE CLEARLY, to determine how well antidepressants work, we want to subtract placebo outcomes from treatment outcomes. But what if we can’t do that either? Surprisingly, Robyn Dawes and Irving Kirsch were in agreement on this point: Subtraction can give a wrong result, causing us to underestimate how much antidepressants offer. The drugs may be doing more than our indicators, such as effect size, suggest.

Here we come to a detail in statistics that is unfamiliar even to most doctors. For subtraction to give an accurate account of how well a treatment works, the research results need to have a property called additivity. Additivity is so automatic an assumption that it emerges—becomes easy to grasp conceptually—only when it fails. The standard way to illustrate additivity problems is through thinking about experiments with alcohol. Vodka is the active medication. Tonic water is the placebo.
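Put as an equation (the notation is mine, a gloss on the assumption rather than anything from the trial literature): additivity holds when the change seen on the drug equals the placebo-driven change plus the drug's inherent contribution, so that subtraction isolates the latter.

```latex
\Delta_{\text{drug arm}} \;=\; \Delta_{\text{placebo}} \;+\; \Delta_{\text{inherent}}
\qquad\Longrightarrow\qquad
\Delta_{\text{inherent}} \;=\; \Delta_{\text{drug arm}} \;-\; \Delta_{\text{placebo}}
```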

Say that we want to measure vodka’s ability to impair dexterity. The problem is, we don’t have vodka, only premixed vodka tonics. We also have tonic water. We will give participants tonic water without alcohol and measure their motor performance. Then, we will serve vodka tonics and remeasure. When we subtract the deterioration on tonic from the deterioration on vodka tonic, the difference will represent the inherent capacity of vodka to cause impairment.

We begin our test by warning participants that they may become drunk. Then we administer our placebo, tonic water.

Tonic is useful because quinine, the soda’s bitter ingredient, masks the taste of alcohol. Our introductory remarks suggest to participants that they may be getting tonic laced with vodka. The tonic induces expectancy—of intoxication. Despite the absence of alcohol, our research subjects feel and act tipsy, and their performance—quickly placing pegs in holes—is three points below normal on a scale that measures dexterity.

For the second arm of our experiment, we again imply that we may be supplying a strong cocktail, and we do serve a heavy vodka tonic. Our participants’ dexterity falls four points.

Okay: Alcohol plus expectancy produces four points of harm. Expectancy alone causes three points.

We take the result from the vodka-plus-expectancy arm and subtract the result from the expectancy-alone arm. Seeing the remainder, one point, we will conclude that alcohol has little inherent efficacy—little ability to impair motor skills.

Although the arithmetic is right, we will suspect that we have failed to capture vodka’s potential. We seem to have subtracted too much.

Luckily, in our imagined experimenting, we are missing something else besides straight vodka: We have no medical ethicists. We are free to engage in deception.

Now, we tell people that they are in a tonic-water taste test. In one arm of our trial, we do give tonic. We ask our participants to complete a survey displayed on a computer screen and sneakily measure their motor performance. People remain dexterous—unimpaired.

In our second arm, we serve heavy vodka tonics, as strong as those we poured in our first experiment. We discover that alcohol alone is highly impairing. Even with no expectancy—no belief that they are drinking alcohol—our research subjects fumble, because they are drunk. The scores fall not one point but almost the full four.

So, vodka plus heightened expectancy causes four points of impairment. Vodka alone causes almost four points of impairment.

Our first conclusion, that drinking vodka is harmless, was wrong. We had subtracted too much.

The problem is with additivity. The two causes of impairment, tonic-driven expectancy and alcohol’s direct brain effects, are not additive. Expectancy caused three points of change, and vodka alone caused (almost) four, but vodka-plus-expectancy did not cause seven. It caused four—and almost all of the clumsiness could be duplicated by vodka-drinking alone. We can’t subtract three points of expectancy effect from the four points of vodka-plus-expectancy because, whatever it might do on its own, the expectancy added little impairing power to the combination.
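The arithmetic is simple enough to set down explicitly. Here is a minimal sketch in Python, using the chapter's illustrative numbers; the 3.8 for vodka alone stands in for "almost four":

```python
# Dexterity impairment, in points, from the imagined experiments.
expectancy_alone = 3.0       # tonic water plus the warning: expectancy only
vodka_plus_expectancy = 4.0  # heavy vodka tonic, openly served
vodka_alone = 3.8            # heavy vodka tonic, served deceptively ("almost four")

# The naive subtraction that standard trial arithmetic performs:
naive_drug_effect = vodka_plus_expectancy - expectancy_alone
print(naive_drug_effect)    # 1.0 -- "vodka barely impairs"

# What additivity would have predicted for the combination:
additive_prediction = expectancy_alone + vodka_alone
print(additive_prediction)  # 6.8 -- nothing like the observed 4.0

# The inherent drug effect, measured directly:
print(vodka_alone)          # 3.8 -- subtraction understated it nearly fourfold
```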

In the first experiment, tonic worked via expectancy. In both experiments, vodka worked via direct brain effects. Where interventions operate through different means—and especially where one intervention is powerful on its own—additivity may not apply.

Give enough vodka, and we can cause any degree of impairment, right up through blackouts. With high doses of alcohol, any subtraction will give a misimpression of liquor’s potential for harm.

Percentages are equally misleading. Considering an experiment parallel to the one I have suggested, Dawes wrote, “At the extreme, the claim that as someone passes out, a certain proportion of his or her problem is due to placebo effects would be met with ridicule. And it should be.” Of our first experiment, it remains true that you could get three-quarters of the alcohol effect through tonic-and-suggestion-induced expectancy. What is not true is that three-fourths of a vodka tonic’s effect is due to the tonic (or expectancy). Alcohol is preemptive. On its own, it does the work of expectancy and more.
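The same toy numbers show where the proportional claim goes wrong (again, my arithmetic, extending the sketch above):

```python
expectancy_alone = 3.0
vodka_plus_expectancy = 4.0
vodka_alone = 3.8

# True: expectancy alone reproduces three-quarters of the combined impairment.
print(expectancy_alone / vodka_plus_expectancy)  # 0.75

# The false inference is that 75% of a vodka tonic's effect is "due to"
# expectancy. Vodka by itself already produces 95% of the combined effect:
print(vodka_alone / vodka_plus_expectancy)       # 0.95
```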

Turning to depression treatment, say that for a given depressive syndrome, psychotherapy reduces Hamilton scores by five points. For the same condition, an antidepressant also gives five points of benefit. What do we imagine the combination, drug plus talk therapy, will do?

It would be unusual—unheard of—for the treatments to be fully additive. We will not see ten points of benefit.
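As a toy model of that pattern (my construction, not any study's analysis): if each treatment can repair much of the same territory, benefits overlap rather than stack, and the combination tracks the stronger arm.

```python
# Toy preemption model (illustrative only), using the chapter's example.
drug_benefit = 5.0     # Hamilton-score points
therapy_benefit = 5.0

additive_prediction = drug_benefit + therapy_benefit        # 10.0: never observed
preemptive_prediction = max(drug_benefit, therapy_benefit)  # 5.0: close to what
                                                            # combination trials find
print(additive_prediction, preemptive_prediction)
```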

When Gerry Klerman first tested interpersonal psychotherapy, he found that giving it along with Elavil did not confer protection more substantial than that observed with Elavil alone. Recently, an all-star team—it included researchers from the Penn-Vanderbilt group that had studied Paxil and neuroticism—looked at psychotherapy and vigorous prescribing as remedies for hard-to-treat depression. The result was similar. Although each intervention is known to be effective individually, for most patients the combination conferred little benefit beyond what they gained from medication alone. For some subgroups, medication on its own was more helpful than medication and cognitive therapy.

We can find examples to the contrary. But the common outcome in psychiatric research is, two and two do not equal four.

Like alcohol and tonic water, antidepressants and the placebo grab bag appear to act through different mechanisms. Studies of electrical activity in the brain suggest as much. According to outcome research from UCLA, in the prefrontal cortex, a brain region thought to be important in depression, patients who eventually respond to antidepressants show decreased activity in the early going. Eventual responders to the control condition—dummy pills plus low-level counseling—show increased activity. (Patients who experience no early brain-wave change tend to remain depressed.)

If these measurements are relevant—if they reflect the course of recovery from depression—then it’s not that placebo-plus-counseling moves the brain a certain distance and then the medication induces further progress. Both interventions cause change, but in opposite directions. The characteristic placebo-arm action (increased energy use in the prefrontal cortex) never appears in medicated patients and cannot account for the favorable response that they enjoy. When medicine works, it does so in its own way, via its inherent efficacy.

Similarly, contrasting patterns of brain activity have been found to predict responsiveness to antidepressants and psychotherapy.

Perhaps, as various experts have suggested, depression is a “stuck switch” problem. Strong direct intervention—counseling or medication—perturbs parts of the brain, freeing them to respond to circumstance. But the forms—the direction—of perturbation may differ.

Our understanding of local brain activity and what it means in depression is limited. Finally, no one knows what the contrasting response patterns mean. But in a suggestive way, the brain studies support the conclusion that medication-and-psychotherapy trials introduced: Placebo and antidepressant effects are unlikely to be additive. Much of what medication accomplishes, it achieves on its own.

In an outcome study, effectively we are testing a highly heterogeneous mixture, antidepressant-plus-grab-bag, to obtain a measure of change. The placebo arm, too, gives us a measure of change, some due to psychotherapy effects and some to expectancy, good weather, and patients’ farewell gifts.

How much shall we subtract?

Not the whole. The boost from minimal supportive psychotherapy? The medication covered much of that territory on its own.

In antidepressant trials, almost certainly, full additivity does not apply—and yet our calculations, including ones for effect sizes, assume it. Virtually every formal estimate of antidepressant efficacy arises from a premise, the right to subtract, that is unproven and likely wrong. Our estimates of drug efficacy run too low.
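To see how the assumption feeds into an effect size, here is a minimal calculation carrying the vodka-tonic pattern over to Hamilton scores. Every number, including the standard deviation, is an assumption made for illustration, not data from any trial:

```python
# Effect sizes implied by a preemptive (non-additive) model.
sd = 8.0                 # assumed SD of Hamilton-score change
placebo_arm_mean = 3.0   # placebo response
drug_arm_mean = 4.0      # drug in a trial context: preemptive, not 3.0 + 3.8
drug_alone_mean = 3.8    # inherent drug effect (hidden administration)
natural_course = 0.0     # change with no treatment at all

# Cohen's d: mean difference divided by the standard deviation.
trial_d = (drug_arm_mean - placebo_arm_mean) / sd     # what the trial reports
inherent_d = (drug_alone_mean - natural_course) / sd  # what the drug can do

print(trial_d)     # 0.125 -- below even the "small" benchmark of 0.2
print(inherent_d)  # 0.475 -- nearly "medium"; subtraction hid most of it
```

The point of the sketch: the conventional analysis reports the first number, while the drug's inherent efficacy corresponds to the second.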

In much of medicine, additivity is not at issue. The control condition—bed rest, in the streptomycin trial—merely tracks the fluctuation of illness with time. Because mood disorders respond to both psychological and chemical influences, the testing of antidepressants is different. To his credit, Kirsch said as much. The conclusion in “Hearing Placebo”—that antidepressants have low efficacy—was, he wrote, “based on the assumption that drug and placebo effects are additive. The additive assumption is that the effect of the drug is limited to the difference between the drug response and the placebo response. Alternatively, these effects might not be additive. It is possible that the drug would produce the same effect, even if there were no placebo effect.” Kirsch was open to the possibility that antidepressants are like vodka in our vodka-tonic example. Drug-based change might preempt placebo effects, and not just those arising from emotional support. Even expectancy effects—classic placebo effects—might not be additive; medicine might work even if you did not know you were on it.

Elsewhere, Kirsch wrote:

It is also possible that antidepressant drug and placebo effects are not additive and that the true drug effect is greater than the drug/placebo difference … Alcohol and stimulant drugs, for example, produce at least some drug and placebo effects that are not additive …

If antidepressant drug effects and antidepressant placebo effects are not additive, the ameliorating effects of antidepressants might be obtained even if patients did not know the drug was being administered. If that is the case, then antidepressant drugs have substantial pharmacologic effects that are duplicated or masked by placebo.

Here again, Kirsch is saying that even if dummy pills elicit classic placebo effects, subtracting the resulting change from change that drugs produce may lead to error.

Kirsch presented reasons to believe that additivity might apply after all, but they were, to my reading, unconvincing or undercut by results in his own later research. And Kirsch had always seemed open to the other possibility—non-additivity. “Hearing Placebo” contained a challenge to the psychopharmacology community: Either its drugs did not work or, if additivity did not apply, its methods gave imprecise results.

Because standard psychiatric drug trials have psychotherapeutic components and because antidepressants sometimes preempt the effects of psychotherapy, we cannot count on additivity. This uncertainty presents a challenge for evidence-based psychiatry: Our controlled trials, conventionally analyzed, may not reflect reality. Despite our use of randomization, they are likely subject to a consistent confound, arising from a technical bias against antidepressants. We know that antidepressants work. We cannot say how well.


