
26

Interlude

Pitch-Perfect

THE CENTER SITS in a prosperous community just outside one of America’s second-tier cities. Across the city boundary, the usual urban tensions are on display. Middle-class migration to the suburbs and the loss of union jobs—we are in October of 2012—have left behind an underclass short on resources. Spotty gentrification downtown has not rescued the poor, but the exurbs are flourishing.

The center building is a glass cube with views in one direction to prosperous businesses and in another to gracious homes. Decorated sparely—art photos, leather couches—its second-floor waiting room is a reassuring setting. The ambience strikes a balance between implications of comfort and competence, warmth and expertise.

The personnel are part of the picture. They have the thin, upscale multicultural look of students at a selective university, which is where most were a few years back. Like the decor, they radiate competence and caring—but caring within limits. Professionalism prevails.

In contrast, the clientele on the couches are overweight and modestly dressed. Most are unemployed or underemployed, which is why they can visit the center during the day. They live far from here, some in subsidized housing or homeless shelters.

These men and women are, for all that, valuable. Research subjects are hard to come by. The staff is intent on retaining them.

The for-profit clinical research center is a recent phenomenon. Both pharmaceutical houses and the Food and Drug Administration have turned demanding. Outcome trials are expensive. A comparison of drug and placebo in a hundred subjects will cost $10 million, and companies sometimes spend much more—approaching $100 million. No one leaves drug testing to amateurs.

The clinic I am visiting is influential. Most major trials today are multisite. If you are on an antidepressant approved by the FDA in recent decades, chances are that some of the testing for it was done here. The center enrolls more than a thousand new research subjects a year, two-thirds of them in studies of depression. More mental health visits take place here than at the local university or veterans’ hospital, in part because research entails more frequent attendance than does treatment under today’s austere standards of care.

The center serves as an auxiliary to the health-care system. Strapped financially, state facilities focus on grave impairment, so uninsured men and women who are slightly less ill will get their mental health care here. Perhaps this pattern will change under Obamacare; more likely, not.

Screening is thorough. Often, it is at the center that subjects will learn that they are pregnant. First diagnoses of cancer, heart disease, hepatitis, and HIV infection are common. Pickups of drug abuse may lead to treatment elsewhere. Sometimes it pays to clean up a subject, getting him off street drugs and into a Pharma protocol.

Since a dozen trials run simultaneously, there are economies of scale. When the clinic recruits for an anxiety study, the intake team may diagnose mania and refer the patient to a different trial, for bipolar disorder.

Overlap matters because subjects are precious. Between advertising (on daytime television), leafleting (at drop-in centers for the mentally ill), and event sponsorship (including meals at soup kitchens, via a local ministry), the clinic may spend a million dollars a year on recruitment, one thousand per new participant. This budget would be higher but for the clinic’s good reputation. It depends on professional contacts and word of mouth—on its history in the community.

Support begins with transportation to the center. Most research subjects arrive by van. The routes extend to poor rural areas, journeys of many miles. More trips are from the inner city. The van drivers, selected and trained to be personable, know the regulars—men and women who participate in multiple trials, one after the other. The drivers are raconteurs, listeners, and facilitators, coaxing their passengers to interact companionably. In rating sessions, when interviewers ask subjects what in the past week has given them pleasure, they may reply, The ride in.

The van is a social hub. Riding along, I hear a somnolent woman—unemployed, living in the projects—speak of her girl gang, regulars who have been through many studies of bipolar disorder. Some are in the van. The posse has seen Sheila quick and witty, so they tolerate her as she is now, slowed by the depressive phase of the illness. The ride structures Sheila’s day. It brings her from the shelter to the center and later to a welfare-to-work program.

Talk of misfortune is common. Center participants are exposed to theft, violence, and petty humiliation. The passengers function as a support group.

Much of the conversation is upbeat. One research subject reminisces about a sister who was raised in an orphanage and has just returned from a reunion with a foster parent. The driver offers parallel examples from the lives of people he knows. The discussion turns to sports. Music is a popular topic, and, perhaps oddly, high fashion.

I ask the passengers how they decided to join a study. Money was a factor. Although considered volunteers, subjects are paid between $40 and $75 for attending a rating session. Most trials begin with weekly visits, later cutting back to twice a month. Public support—welfare—in the center’s region runs to about $300 a month, so study participation may increase patients’ income by 50 percent or even double it. The extra money may represent the whole of a subject’s discretionary income.

Research review boards put caps on incentives, so as not to implicitly coerce people into undertaking a risk not in their interest. It is true that $50 would hardly compensate a middle-class worker for chunks of time taken out of the day. Intake visits can last six hours. A follow-up of ninety minutes would count as a brief rating session.

For the poor and depressed, the attraction is apparent. Here is money you can earn without confronting personal limitations that make holding a job difficult. The clinic’s drivers will come into the house and get you moving. No employer will do as much.

For the duration of a trial, participants enjoy higher income, richer social contacts, attention from doctors and nurses, access to transportation, time in an attractive setting, structured days, and a sense of purpose. In the van, talk turns to cash gifts given to adult children. That’s a luxury the extra income affords, the ability to be generous. Even on placebo, these patients ought to get better.

Incentives are only part of the story. The van passengers are proud to do their part to advance medical science. They also say that they are getting good psychiatric care, that it’s built into the research.

For all that, the psychological careers of these participants are choppy. Once a study is over, they will not be able to afford the medicine that helped them, or it may be unavailable. These patients will relapse and then be candidates for future trials. If they haven’t relapsed, it might be in their interest to make out that they have. The weeks in drug studies may compose the best intervals in the year.

The same is true for participants who have been on dummy pills and responded to the benefits that center attendance provides, not least the opportunity to discuss pain, adversity, and private history.

Its strengths leave the research center vulnerable. It will retain subjects who benefit from social support. If they talk up the program and attract others with a similar bent, word of mouth will amplify this problem. A focus on recruitment and retention guarantees high rates of placebo response.

Twice in two days, I heard deeply depressed patients, men who spoke with excruciating deliberateness, say that they had not encountered anyone socially outside the center. The only pleasurable part of their week was the ratings interview.

Many riders struck me as capable and personable. Depression had made them downwardly mobile. Part of what the center offered was time in a prosperous setting that should have been their due.

Not all participants arrived by van, and not all were unemployed. Some middle-class patients intended to make a contribution to science. Some came because all prior treatments had failed.

Still, most participants had incentives to exaggerate depression ratings at the start of trials. Reading about demand characteristics, I had imagined that they were subtle, grounded as much in the urge to please authority figures as in the wish to be accepted for a study. In practice, the influence of the test setting on depression-scale answers was pronounced and self-evident. During interviews, I heard obvious distortions: fairly healthy people “endorsing” symptoms (saying they had them) in order to join a trial, and in the later going, seriously impaired people attesting to improvement, pretty clearly in hopes of pleasing the rater. Likely, some inaccuracy occurred in the other direction, too: some subjects clung to the sick role. Any drug response coexisted with ratings movement related to a trial’s place in the subject’s life.

And then there was adversity. Bad things happen to people with scant resources: muggings, evictions. Dispiriting events insert a random flux into research subjects’ progress. It is hard to measure treatment effects in a maelstrom.

For patients with disrupted lives, the clinical center is a point of stability. It inspires loyalty because it aspires to excellence. Mornings, while the vans are on the road, the doctors, nurses, pharmacists, technicians, and raters gather in a state-of-the-art meeting room. An administrator projects a spreadsheet on a screen. The array displays every patient’s interview and lab test results, with abnormal findings and missed appointments highlighted in red.

Where is Mr. Smith? Attending a funeral. His rater has sent condolences. She’ll phone again Wednesday. The protocol’s reevaluation deadline is Friday. We’re on it. He’ll make it in.

Mr. Doe’s liver enzyme levels are just above the study’s norm. Does another trial tolerate them? The primary-care doctor has been notified.

Miss Roe’s lab results show marijuana use. She is entering counseling and has vowed to stay clean. She’s out of the current study but will be retested and considered for one starting next month.

Years ago, I attended rounds at a justly celebrated “assertive community treatment program” that followed the chronically mentally ill via contact with their employers and landlords. I had not seen follow-through at this level. Nor had the mental health and general hospital clinics where I’d worked enjoyed the efficiency of the research center. This quality comes through to study participants—reliable attention.

Of course, the center’s work product is not health. It is successful trials, ones that detect daylight between a test drug and placebo if any is to be found.

Drug companies demand high patient retention; even 10 percent “attrition” may be unacceptable. Losing subjects means risking differential dropout, which destroys the ability to confirm the superiority of drug to placebo. Business and science require that the clinic atmosphere be congenial. The center honors that imperative, erring in the direction of support, but with awareness that too much coziness will send placebo responses through the roof.

The raters embody this delicate strategy. The best possess a capacity for empathy modulation that verges on the eerie. The young women (all raters are young women) offer warmth up to a point. Everyone is attractive, and no one is seductive. The community maintains itself. If a new hire does not fit in, the group will extrude her.

Once you’ve seen rating sessions, you no longer wonder where placebo responses come from. Drug studies can be tedious. One patient begins a visit by having his blood pressure taken. There is a discrepancy between two readings. The trial requires successive measurements in a narrow range. The protocol specifies a long interval before retesting. The rater fills the time with talk of the patient’s past work achievements. The patient, agitated at first, begins to assume the rater’s calm demeanor.

I see some stiff, mechanical encounters, focused on rating forms. But most interviews look like psychotherapy. That effect is especially evident with Allison, the center’s most admired rater. She is known for combining high retention rates with accurate ratings.

Allison never works her way down a checklist. She lets study participants reveal themselves. She starts by catching up, as with an old friend. What has become of the child who had trouble with the law?

To me, the ideal diagnostic interview resembles an assessment by a neonatologist in the delivery suite. She admires the newborn, fiddling with its fingers and toes—all while answering the mother’s anxious questions. Within minutes the doctor knows whether all is well. If not, she will have a diagnosis in mind or a plan for arriving at one.

Allison’s rating sessions are like that. She understands how it is to raise teenagers on a budget or care for ailing parents. Her manner suggests experience and tolerance.

One interviewee, Verna, struggles at work and blames others for her misfortunes. Because of a grievance with a superior, Verna has applied to change assignments. Unlike the disrespectful boss, Allison listens quietly, accepting Verna’s premises. Laying out complaints, Verna begins advising herself not to act impulsively, so as not to endanger her prospects. She sees justifications for staying put—clearer responsibilities, less chance of failure. She becomes better able to weigh the relative merits of the different postings.

Elements of the Hamilton emerge. Verna’s sleep has improved. She is eating better. Still, Verna’s willingness to socialize has not returned. Hearing the conversation, you might not know that depression ratings are at issue.

I congratulate Allison on a masterful performance. I believe that I know just how much Verna’s remaining impediments are due to depression and how much to stable character traits.

Allison sighs, “We’re stuck with her.” In the initial interview, Verna endorsed every Hamilton item and at high levels. Having entered the study with an impossibly elevated depression rating, she is bound to show improvement, whether on medication or placebo. If time spent with Allison helps, that change may further weaken the study’s ability to discern medication effects. Although welcome, Verna’s recovery will burden the trial.

When the designers of the NIMH collaborative trial referred to “minimal supportive psychotherapy,” clearly they had not seen Allison in action. Nothing about her influence was minimal. I’ve supervised colleagues and trainees, in psychiatry and social work, who had nothing like Allison’s knack for sizing up situations or setting people at ease. Many patients come from families where neglect or misunderstanding is the norm. Being listened to thoughtfully makes patients feel acknowledged, never mind that Allison’s motivations—accurate ratings, subject retention—are off to the side. Unless their depression is unbudgeable, research subjects interviewed by Allison will show improvement.

Through the van ride, contact with other enrollees, conversation with Allison, and the general structure of the center, the assessment process offers purpose, meaning, structure, companionship, attention, reassurance, respect, trust, insight, health care, financial support, and a safe and attractive physical environment. The interview takes what feels like generalized disaster and redefines it as discrete symptoms of illness. In antidepressant trials as they are run today, the contrast is not between dummy pills and active pills. It is between psychotherapy plus dummy pills and psychotherapy plus medication.

From our discussion of additivity we know that with this setup—where much of the placebo effect is a psychotherapy effect—conventional calculations fail. Because medication preempts some of what psychotherapy does, when we subtract placebo-arm results from drug-arm results, we will subtract too much.
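To make the arithmetic concrete with invented figures (no real trial reports numbers this tidy), suppose the supportive surround of the center is worth six points of improvement on a depression scale and the medication's specific contribution, taken alone, is worth four. Because the two relieve some of the same suffering, the combination yields not ten points but, say, eight. The placebo arm improves by six, the drug arm by eight, and the conventional subtraction credits the medicine with two points, half of what it actually provides.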

In my residency, the group-therapy instructor had a mantra: No one’s the worse for a good experience. He meant that psychoanalytic interpretations were not all we offered. The research center provides a good experience with no need for the classic placebo effect, hopeful expectancy attached to a pill, to make a showing. When you see the center in action, you do not wonder why responses are high in control arms of trials or why drug efficacy is hard to demonstrate, even for antidepressants that test out as well as imipramine.

Implicit therapy has its insidious side. If a drug carried grave risk and if the van travelers were well informed about it, would they feel free to decline enrollment? Or is the center’s role in their lives so critical that they are, in effect, obliged to consent and, so, unable to protect themselves? It is easy to imagine disaster, medical and ethical.

The competence and skill that center workers bring to their tasks also raise questions. Why does this excellence attach to a commercial enterprise beholden to drug companies and not to our public health system? Why is affordable, competent care not widely available for this most treatable of mental illnesses, depression? The organization I had envisaged when I ran outpatient clinics at not-for-profit hospitals finds its incarnation here, in a center serving Pharma.

Set aside medication and psychotherapy: The depressed might benefit from emotional support given in the drop-in center or subsidized housing block. They might benefit from encouragement to get up in the morning, from rides to appointments, from a chance to socialize, from contact with service people who treat them with respect. If modest monetary incentives are compelling, why not use them to reward successful participation in job training? The candidate-drug-testing enterprise is built on failures in our social architecture.

Although it enrolled homeless people, the center I visited did not focus on attracting them. The risk of dropouts was considered too high. But for other “contract research organizations,” often small centers involved in addiction research or the early safety testing of antipsychotic drugs for schizophrenia, homeless shelters are a primary site for recruitment. I was seeing the less sleazy end of the commercial-drug-test spectrum, and it was queasy-making enough.

Carl Elliott, a colleague since Listening days, has written extensively and critically about for-profit drug testing and the selection of participants, saying, “It seems inevitable that the job will fall to people who have no better options.” In cancer care, in dementia care, the same is not true—a wide array of patients sign on to test unproven remedies. The research benefits from a different source of misfortune: for some conditions, there are no good treatments. With depression, the efficacy of our current medications ensures that the poor will act as guinea pigs for new ones.

At times, the center staff expressed awareness of the ethical quandaries attached to their work. More often, they praised the clinic as a corrective to a flawed health-care system. Raters believe that drug development and testing are critical to progress in medicine, that the marketplace is the right context for that effort, and that it is fair to convey the impression of caring, skilled treatment because the assessment amounts to just that.

Without wanting to dismiss that argument entirely—I had seen the important role that the center played in subjects’ lives—I felt unease every moment I was there. I was witnessing a process that was disconcerting on both moral and scientific grounds.

My visit made me think that we had come to a dead end as regards the testing of new antidepressants. The curse of Roland Kuhn is in full effect. Cases of uncomplicated depression are hard to find. Because research subjects are rare, the very poor fill the gaps. Raters must be especially supportive. And so on. These difficulties increased my sympathy for the FDA’s position about what signals efficacy. If two favorable trials emerge in this context, you’re dealing with a highly useful treatment.


