Miracle Cure: The Creation of Antibiotics and the Birth of Modern Medicine

NINE

“Disturbing Proportions”

The inaugural issue of the Saturday Review of Literature arrived on newsstands in August 1924, and for the next eighteen years the magazine was edited by Henry Seidel Canby, a professor at Yale University. Canby assembled an impressive group of literary critics to produce the weekly magazine, including the essayist and novelist Christopher Morley, and Mark Twain’s biographer Bernard DeVoto. But the Saturday Review is best remembered today as the brainchild of Norman Cousins, who took over as editor in 1942. The Cousins era, which lasted until his departure in 1971, represented the magazine’s high-water mark in circulation and influence, when it was the voice most attended to by America’s midcentury, middlebrow households.

Cousins regularly urged his staff to remember, “There is a need for writers who can restore to writing its powerful tradition of leadership in crisis,” and to that end, he and the magazine tirelessly advocated for the entire catalog of largely unreachable liberal objectives, from world government to nuclear disarmament. For actual impact on current events, though, the Saturday Review never exhibited a more significant bit of leadership-in-crisis writing than in a series that began in its January 3, 1959, issue.

The cover featured a photograph of Arthur Schlesinger, Jr., whose latest book, The Coming of the New Deal, was reviewed. The lead article, written by the magazine’s science editor, John Lear, was “Taking the Miracle Out of Miracle Drugs.” Its first line read, “Prescription of antibiotics without a specific cause for such treatment has reached disturbing proportions.”

The causes of that disturbance, as enumerated by one of Lear’s interviewees, Dr. Henry Kempe, head of pediatrics at the University of Colorado Medical School, were several. First was that reflexive prescription of antibiotics often hid the real disease from diagnosticians; during the ten days required for most antibiotic treatments, the patient would frequently grow worse, as the true cause of disease went unaddressed. Second, despite the generally well-tolerated character of most antibiotics, when millions of people are given them every day, thousands will exhibit symptoms of antibiotic poisoning—everything from vomiting to skin rashes. Third, antibiotics, as antagonists to all sorts of bacteria, often caused gastrointestinal upset by killing the “good” bacteria residing in the digestive tract.

But the biggest problem with the miracle drugs was that prescribing them for everything from head colds to migraines was breeding antibiotic resistance. Lear, writing that antibiotic-resistant strains of pathogens had been “known to medicine for almost five years,” actually understated the case significantly. Alexander Fleming, in his 1945 Nobel Prize Lecture, had warned, “It is not difficult to make microbes resistant to penicillin in the laboratory by exposing them to concentrations not sufficient to kill them,” and it wasn’t exactly a newsworthy observation even then. Between 1954 and 1958, as Lear documented, hospitals in the United States had experienced five hundred outbreaks of diseases caused by antibiotic-resistant pathogens—outbreaks that spread fast enough that they met the formal definition of local epidemics: disease episodes in which the daily number of new infections exceeds the number of cases resolved (though, being local, such epidemics were typically not the stuff of newspaper headlines).

The real point of interest, to Lear, was why the epidemics were appearing at all. Since physicians knew (or should have known) that antibiotics were next to useless against viruses, why did they persist in prescribing them for viral diseases? Perhaps some physicians, even knowing that most antibiotic activity occurred by disrupting bacterial cell walls—walls that viruses, which are essentially free-floating bits of genetic material, neither have nor need—failed to realize that this made viral disease essentially invulnerable to antibiotic treatment. Though doctors had been overshooting the mark on treating patients for decades—Oliver Wendell Holmes, in the same speech in which he recommended consigning the whole materia medica to the seafloor, recognized that “part of the blame of over-medication must, I fear, rest with the profession, for yielding to the tendency to self-delusion which seems inseparable from the practice of the art of healing”—Lear thought the answer was more obvious, and scandalous: advertising. “Established ethical drug companies, traditionally cautious in advancing claims for their medicines, are being jostled and jolted competitively in antibiotic sales by the Madison Avenue ‘hard sell’ of bulk chemical makers. . . .”

Lear wasn’t lacking for specifics. In a section of the article entitled “The Case of the Invisible Physicians,” he described a brochure produced by Pfizer, intended for doctors, promoting “the antibiotic formulation with the greatest potential value and the least probable risk . . . Sigmamycin . . . the antibiotic therapy of choice.” The promotional piece featured photos of eight business cards, each with the name of a physician. One was from Massachusetts, another from Oregon. The others—including a dermatologist, a urologist, and a pediatrician, in case the message about Sigmamycin’s wide range of effectiveness wasn’t getting through—practiced in Florida, Arizona, California, Illinois, Pennsylvania, and New York, where each one was an enthusiastic endorser of the drug.

Since the business cards included addresses and phone numbers, Lear tried to contact the eight doctors. What he found were nonworking phone numbers and addresses from which his letters had been returned, marked either address or addressee unknown. The doctors and their testimonials were purest fiction, the “Madison Avenue ‘hard sell’” invention of a creative copywriter at the William Douglas McAdams agency.

Lear’s piece generated a huge number of letters to the Saturday Review, both admiring—“the most factual and intelligent article written on the subject to date”—and critical: “I don’t know a single physician who pays the slightest attention to drug ads.” Pfizer’s president, John McKeen, visited the magazine’s editorial offices, where he admitted that the brochure could have been misleading, and that the company had initiated procedures to prevent a recurrence. But the first article had been only a ranging shot. Lear’s second would be dead on target. “The Certification of Antibiotics” was the cover story of Saturday Review’s February 7 issue (in what was surely a coincidence, it appeared just underneath an article by Adlai Stevenson entitled “Politics and Morality”). Its subject was the same senior official in the Food and Drug Administration who had been visited by Albe Watkins in 1952, a bacteriologist named Henry Welch.

Welch had joined the FDA in 1938, as part of the agency’s expansion after the passage of the Food, Drug, and Cosmetic Act, and had risen through the ranks fairly quickly. In 1943, he was named to direct the Division of Penicillin Control and Immunology, and in 1951 was appointed director of the Division of Antibiotics, where he was responsible for the approval of new drugs.

At almost precisely the same time Welch took on his new responsibilities at the FDA, he was introduced to a Spanish psychiatrist named Félix Martí-Ibáñez. Martí-Ibáñez had been his country’s undersecretary of health and social service until 1939, when the nationalist victory in the Spanish civil war forced him to find other employment. He spent the 1940s working in the United States for a number of pharmaceutical companies, including Hoffmann-La Roche (where he served as medical advisor for international sales), Winthrop, and Squibb. The émigré psychiatrist therefore had a ringside seat for the antibiotic revolution, which he correctly viewed as a huge opportunity: not for creating research, but for retailing it. The discovery of antibiotic therapy wasn’t just transforming the practice of medicine, but also the practice of medical communication. An unprecedented explosion in useful therapeutic knowledge—it’s worth remembering that penicillin, streptomycin, the various versions of tetracycline, chloramphenicol, and erythromycin had all been introduced between 1941 and the early 1950s—was occurring at a rate that made existing channels far too slow. An organization that could promise rapid diffusion to the largest number of clinicians in the shortest amount of time would be supplying something of enormous value.

From the late 1950s on, most academic physicians regarded Martí-Ibáñez as a bit of a huckster, and his reputation hasn’t improved very much since. One thing he can’t be accused of, however, is hypocrisy. The Spanish psychiatrist made it clear in speeches and articles that the ideal source for financing the organization and diffusion of therapeutic knowledge was the pharmaceutical firms themselves. “Who better than the pharmaceutical industry,” he wrote, “could organize, coordinate, and integrate on an international scale the vast and increasing knowledge on antibiotics?”

Martí-Ibáñez had a business plan. In 1951, he put it into action. He and Welch joined forces to found a new journal, entitled Antibiotics and Chemotherapy, with an editorial board that included a who’s who of antibiotic research, including Florey, Waksman, and Alexander Fleming. Martí-Ibáñez, as president of MD Publications, would run the business side; Welch would be the editor.

Antibiotics and Chemotherapy was an immediate success with medical researchers, but its content was virtually all bench science, which limited its appeal. To serve the much larger audience interested in the clinical application of the new drugs, in 1955 Welch and Martí-Ibáñez launched another journal, which they named Antibiotic Medicine, changing the title a year later to Antibiotic Medicine and Clinical Therapy. The new journal was circulated free to physicians and other health professionals, as Martí-Ibáñez and Welch reasoned that delivering to that particular audience would make the publication an attractive place for pharmaceutical company advertising. It seems not to have occurred to anyone at the FDA that allowing the director of its Division of Antibiotics to work for a for-profit journal supported entirely by the same pharmaceutical companies whose applications he was responsible for endorsing or rejecting was the very definition of a conflict of interest.

For John Lear, alarm bells sounded. When he interviewed Welch for his February 1959 article and asked him to confirm or deny the rumor that he derived substantial income from the journals, Welch replied, “Where my income comes from is my own business [but] I have no financial interest in MD Publications. . . . My only connection is as editor, for which I receive an honorarium.”

It would be some months before the size of that honorarium was fully understood. Antibiotic Medicine and Clinical Therapy paid Henry Welch 7.5 percent of all advertising revenue, and 50 percent of all sales of article reprints. In the four years between the journal’s introduction and John Lear’s exposé in Saturday Review, the two “honoraria” had paid Henry Welch nearly $250,000, about $2.24 million today. Welch, it was later learned, had told a number of colleagues that his FDA salary—$17,500 a year—was barely enough to pay his income tax. They thought he was kidding.

There is little evidence that Welch was, in the classic sense, corrupted by this. But whether or not he agreed to approve antibiotics produced by journal advertisers, pharmaceutical companies adopted a grateful, and generous, attitude toward MD Publications generally, and Welch personally. Parke-Davis, to mention only a single example, wrote a check for $100,000 for prepaid advertising in Antibiotic Medicine and Clinical Therapy during its first year of publication. When the journal folded in 1961, it still had $38,000 of Parke-Davis’s money, which the company graciously agreed to write off as a goodwill gesture. Half of the money still in the cash register, $19,000, was paid directly to Henry Welch.

Between 1955 and 1960, Pfizer paid $171,000 for reprints, earning Henry Welch more than $85,000. Even more incriminating: Lear discovered letters to Pfizer’s director of advertising in which Welch and Martí-Ibáñez pleaded with him to continue supporting Antibiotics and Chemotherapy. The money quote included the following: “The February issue of this publication will include an editorial by Dr. Welch reappraising the use of Nystatin [an antifungal drug derived from yet another of the fecund Streptomyces, which Squibb was promoting heavily] in conjunction with broad-spectrum antibiotics. This paper will furnish your people with excellent ammunition with which to counteract the exaggerated claims made for Nystatin” [emphasis added]. The quid pro quo—Antibiotics and Chemotherapy was about to criticize Nystatin in print; Nystatin’s competitor could pay to share that seemingly disinterested criticism with its sales force, and through them with America’s physicians—was implied, but clear.

It was an era no stranger to outrage over disgraceful behavior: both the payola scandal, in which record companies paid disc jockeys for airplay, and the TV quiz show scandals were huge stories in 1959. Henry Welch was, to most people, small potatoes. But that didn’t mean no one noticed. As a direct result of John Lear’s articles, Congressman Emanuel Celler of New York insisted that Welch should be fired, and soon enough, Dr. Arthur Flemming, the secretary of the Department of Health, Education, and Welfare, demanded his resignation. Welch, who had already applied for disability retirement (and who had a very comfortable retirement, funded by his publishing empire, to look forward to), complied.

To this day, Henry Welch, even more than Félix Martí-Ibáñez, remains a poster boy for the dangers of corruption by pharmaceutical companies, so much so that Web sites with a conspiratorial tinge continue to invoke him as the original “shill for big pharma.” They have a point, but they miss the far more significant aspect of Welch’s importance to pharmaceutical development in the late 1950s.

Welch and Martí-Ibáñez sincerely believed in the broadest possible use of antibiotics, often in combination not just with other antibiotics, but with vitamins—in a typical example, Pfizer created a compound consisting of Terramycin and “stress formula” vitamins—to both treat and prevent infectious diseases. More to the point, Pfizer had researched, developed, and manufactured Sigmamycin, the combination of 167 milligrams of tetracycline and 83 milligrams of oleandomycin (a close relative of erythromycin) that had been endorsed by the eight nonexistent doctors John Lear had exposed in his January article.

By the mid-1950s, all of the most popular antibiotic therapies were “fixed-dose combination drugs”—mixed cocktails of erythromycin and penicillin, for example. The idea behind them was, on the surface, plausible enough, a long-standing belief in the synergistic combination of two therapies. As far back as 1913, Paul Ehrlich himself had recommended “a simultaneous and varied attack . . . directed at the parasites, in accordance with the military maxim, march in detachments, fight as a unit.” Forty-five years later, pharmaceutical companies had taken Ehrlich’s “if some is good, more must be better” recommendation to heart, and were producing no fewer than sixty-one fixed-dose combination antibiotics. Four of them contained five antibiotics apiece, eight had four, twenty used three, and “only” twenty-nine were humble enough to combine a mere two. When the drugs in combination were truly synergistic, this therapy worked fine. Sometimes, as with PAS and streptomycin, they were. And sometimes they weren’t. Two or more antibiotics could be antagonistic, as was already known to be the case with penicillin and Aureomycin.

Synergistic or not, fixed-dose combinations remained attractive to pharmaceutical companies. Working in a marketplace in which every competitor could sell generic versions of tetracycline or penicillin, even the most innovative firms saw the advantage in differentiating their branded products one from the other. And if they hadn’t yet figured it out on their own, Félix Martí-Ibáñez had been eager to educate them. In 1956, he wrote:

[I]t is particularly important to seek specialties which combine antibiotics with other drugs. Such combinations, if they can be justified medically, are a defense against the current price trend in penicillin and streptomycin. The broad spectrum antibiotics [i.e., the tetracyclines and chloramphenicol] may eventually suffer the same fate. This is the time, therefore, to seek products combining such drugs as Terramycin and Aureomycin with other useful therapeutic agents. . . .

The recipient of the letter wasn’t a clinician or researcher. It was Martí-Ibáñez’s good friend and fellow psychiatrist, the advertising mastermind at William Douglas McAdams, Arthur Sackler.

The challenge with fixed-dose combination antibiotics, however, was that while it was easy enough for an advertising copywriter to invent a catchy name and package for them, it was very hard indeed to convince physicians and hospitals that a particular fixed-dose combination wasn’t just different, but better. Measuring the relative merits of each component of a two-part fixed-dose combination with the rigor of a well-designed randomized clinical trial was extremely difficult. With three or more components, it was virtually impossible. Further, unless the pathogen involved—and not just the bacterial species, but its biovar, or strain—was identified precisely, the fixed-dose combinations were as likely to harm as heal.

Fixed-dose combination therapies were also a challenge to evaluate effectively using the double-blind randomized clinical trials pioneered by Bradford Hill. In what seems an almost willful return to the preantibiotic era, fixed-dose combinations could only be trumpeted using case studies and, especially, testimonials.

Enter Henry Welch and Félix Martí-Ibáñez. In journal editorials and in speeches at the annual symposia on antibiotics that they hosted throughout the 1950s, they announced that the world had entered a “third era of antibiotic therapy” (the first had been the narrow-spectrum antibiotics like penicillin; the second, the broad-spectrum drugs like tetracycline). And if the only way to usher in the “third era”—neither of the two medical publishing innovators felt it necessary to note that the phrase had been provided by one of Arthur Sackler’s copywriters, who had coined it for the launch of Sigmamycin—was by replacing randomized clinical trials with personal experiences, then so much the worse for RCTs.

The battle lines had been drawn at the 1956 antibiotics symposium. On one side, Welch and Martí-Ibáñez argued, “The final verdict on the value of a new drug or a new therapy usually comes from one dependable source: the whole body of practicing physicians whose daily clinical experiences extend over many patients treated in actual conditions of practice over considerable periods of time. Medical practice itself provides the sole and ultimate verdict on the true value of a drug. . . .”

On the other side were Max Finland, the infectious disease specialist from Harvard Medical School who had been one of the early skeptics about Aureomycin; and Harry Dowling, who had been Finland’s protégé at Harvard’s Thorndike Memorial Laboratory and was, in 1956, chair of the Department of Preventive Medicine at the University of Illinois Medical School. Finland and Dowling, pretty good phrasemakers themselves, sniffed that a physician who chose an antibiotic based on testimonials, rather than peer-reviewed randomized clinical trials, was engaging in “therapeutics by vote.”* And, just so no one missed the point that such votes were very easy to rig, Harry Dowling addressed the 1957 annual meeting of the AMA with a much-reprinted speech entitled “Twixt the Cup and the Lip” in which he attacked the use of the techniques “that had been used so successfully in the advertising of soaps and tooth pastes and of cigarettes, automobiles, and whiskey” to market drugs to doctors.

In some ways, the history of antibiotic discovery, from the days of Koch and Pasteur through Paul Ehrlich, Gerhard Domagk, Fleming, Florey, and Hodgkin, can be read as an exercise in epistemology. The great innovations, from Robert Burns Woodward’s elegant systems for elucidating chemical structure to Selman Waksman’s brute force technique for finding useful soil-dwelling bacteria, were as much about developing new methods for creating knowledge as they were about the knowledge itself. Likewise, no matter how much it looks like a battle over selling one’s soul to Pfizer or Merck, the conflict between Welch and Martí-Ibáñez on one side, and Finland and Dowling on the other, was really epistemological: What is the best way to know what actually cures disease?

The significance of this battle to the practice of medicine can hardly be overstated. Several hundred thousand American physicians, and at least as many overseas, were for the first time pushed and pulled by two opposing forces: the first was their virtually complete dependence on pharmaceutical companies for information about the relative effectiveness of the drugs they prescribed; the second, their no less complete insistence on autonomy in deciding which ones to use. The resulting gap wasn’t about information per se, but rather about credibility: Which claims are trustworthy, which not, and, especially, what cognitive tools can be used to decide between the two?

It’s tempting to portray the conflict in simple terms. Doctors, after all, remain enormously respected in part because they are at least nominally obliged to place the patient’s good above any other consideration, while pharmaceutical companies, however large their contribution to human welfare, remain for-profit enterprises. There are complicating factors, of course. Physicians’ practices and hospitals are also businesses, after all; pharmaceutical company researchers are as motivated by the glory of discovery as by their stock options. Even more confounding, then and now, is the tendency to privilege personal knowledge over aggregated evidence, which manifests itself in doctoring in a particularly acute form. Physicians are famously confident in their clinical experience, inclined to trust the results from a dozen patients they’ve treated, even in the face of a study examining a thousand patients they’ve never seen. One possible solution was to evaluate drug effectiveness collectively, rather than individually. But the AMA had, in 1953, stopped issuing its “Seal of Approval” that permitted drugs to be advertised in JAMA, and dissolved its at least ostensibly regulatory Council on Pharmacy and Chemistry.

In the face of these cognitive and political challenges, the key epistemological question of the antibiotic revolution—how can we know what medical interventions actually work?—remained unanswered. Because of the increasing complexity of medical decision making, and the cognitive biases that accompany virtually all human behavior, it sometimes seems to be unanswerable. One question that the last century of medicine has answered, though, is how much reliance ought to be placed on the counsel of a single clinician; to epidemiologists and public health researchers, the scariest thing said by a physician is any sentence that begins with, “In my experience.”

The hearings that began in Washington, DC, almost exactly eleven months after John Lear’s first article in Saturday Review were not called in order to solve an epistemological crisis about drug efficacy. As one might guess from the charter of the subcommittee that conducted the hearings—a branch of the Senate Judiciary Committee responsible for oversight on antitrust and monopoly issues—the original intent was to review how pharmaceutical companies priced and marketed their products.

An investigation into drug pricing had been gestating for some time. In 1953, John Blair, a staff economist at the Federal Trade Commission, persuaded his bosses at the FTC to start an inquiry into the business of manufacturing and selling antibiotics. Though Blair was, to put it gently, no friend to large business organizations—in 1938, he had published a polemic entitled Seeds of Destruction: A Study in the Functional Weaknesses of Capitalism, a “none-too-happy picture of capitalism and its probable future”—his argument wasn’t especially ideological. His own physician and pharmacist had informed him that the price for each of the branded broad-spectrum antibiotics, whether Aureomycin, Terramycin, or Chloromycetin, was identical, and likely to stay that way. The reasons, Blair learned in his preliminary investigation, were the dizzying cross-licensing and cooperative marketing arrangements that followed the tetracycline peace treaty. Parke-Davis, for example, as Blair put it, “sold twenty of [the] fifty-one major drug products included in our study, but produced only one: Chloromycetin.”

The business seemed ripe for a broad antitrust investigation, but Blair was unable to convince the commission to proceed. Instead, he took an oblique approach. In 1956, the FTC began work on an exhaustive Economic Report on Antibiotics Manufacture. When it was published in June 1958, it revealed that antibiotics had made the pharmaceutical industry the country’s most profitable business, with overall profit margins of nearly 11 percent after taxes—twice as much as the average U.S. corporation. For those firms lucky enough to produce broad-spectrum antibiotics, margins were as high as 27 percent. The seed that had been planted by the OSRD’s penicillin project fifteen years before had produced some extremely rich fruit.

The day after the report was issued, the FTC issued a complaint alleging collusion in the marketing of broad-spectrum antibiotics, and questioning the whole structure of the tetracycline patent and cross-licensing agreements. Even so, it took another year, and John Lear’s series in the Saturday Review, before the subject attracted the attention of anyone outside the bureaucratic world of antitrust regulation.

In September 1959, when Senator Estes Kefauver of Tennessee announced that the Antitrust and Monopoly Subcommittee would hold hearings on America’s drug business, he was already one of the country’s best-known politicians. He had become a television and newsreel star as the chairman of the Senate Special Committee to Investigate Organized Crime in Interstate Commerce, which riveted the country in 1950 and 1951: Every time a mobster like Frank Costello or Joey Adonis, among the more colorful figures ever to appear before a congressional investigation, invoked Fifth Amendment protections against self-incrimination, the man sitting opposite was the senior senator from the Volunteer State. Kefauver, a candidate for the Democratic presidential nomination in 1952 and its nominee for vice president in 1956, was a New Dealer with impeccable liberal credentials—in 1956, he was one of only three southern senators to refuse to sign the prosegregation Southern Manifesto (the others were Lyndon Johnson and Albert Gore, Sr.)—and a reflexive hostility toward big business of any variety. In short, he was the pharmaceutical industry’s worst nightmare: a combination of liberal populism and intellectual range, with an understanding of both old-fashioned political theater and the power of modern media.

Even worse (or better): The first person Kefauver hired to join the subcommittee’s staff was John Blair.

On December 7, 1959, in the Old Senate Office Building, the first witnesses were sworn in. For the next ten months, they would arrive, be interrogated in turns by friendly and unfriendly staff and committee members, and depart. As promised, the first subject on offer was pricing: what Kefauver and Blair saw as a shocking distance between the cost of manufacturing a particular drug and its price to consumers. It wasn’t just that the prices themselves were what Kefauver thought to be egregiously high. For the Antitrust and Monopoly Subcommittee, what mattered weren’t high prices as such, but the suspicion that the prices were being artificially inflated by a conspiracy in restraint of free trade. Since the demand for drugs was determined not by patients, but by a physician’s prescription pad, the place to look for such a conspiracy was in the unique marketing practices of the pharmaceutical industry. “The drug industry,” as Kefauver put it, “is unusual in that he who buys does not order, and he who orders does not buy.”

By 1959, the pharmaceutical industry was developing and marketing considerably more than the antibiotics that had jump-started its growth the preceding decade. As a result, the first witnesses called by the subcommittee gave testimony on other, newer, wonder drugs. One of the first to be examined was the corticosteroid prednisone, an immunosuppressant used to treat diseases like colitis and multiple sclerosis, in which the symptoms are frequently caused by the immune system’s own inflammatory response. Francis C. Brown, the president of the Schering Corporation, which introduced the drug in 1955 under the name Meticorten, was blindsided by the initial line of questioning: Why, he was asked, was his company charging roughly seventy times more for a dose of prednisone than it cost to manufacture it? Despite attempts to explain that the price of a drug had to reflect its fixed development expenses as well as its marginal manufacturing cost, Brown had already lost the public relations battle. The front page of the New York Times for December 8 read, “SENATE PANEL CITES MARK-UP ON DRUGS RANGING TO 7,079%.”

And so it went, for month after month. The subcommittee’s chief counsel, Rand Dixon, stated for the record that the Upjohn Company used only 14 cents worth of raw materials to make a drug it sold for $15, a markup of “about 10,000 percent.” Corticosteroids gave way to tranquilizers—the term had only recently been coined to describe Miltown, the brand name for the mild sedative meprobamate, and the world’s first blockbuster psychotropic drug—to arthritis medications, to antidiabetic drugs. Physicians and hospital directors went on record accusing pharmaceutical firms of “brainwashing” tactics and “perverted marketing attitudes.”

Meanwhile, the subcommittee’s Republicans, led by Senator Everett Dirksen of Illinois,* counterpunched, asking why, if pharmaceutical companies were marking up prices several thousandfold, they were still only managing to achieve profit margins of less than 15 percent.

By the spring of 1960, however, the subcommittee’s concerns had expanded from pricing and marketing strategies to patent and trademark reform, particularly regarding the ways in which branded drugs—especially the proprietary fixed-dose combination drugs like Sigmamycin—were being offered the same sort of intellectual property protections as compounds like streptomycin, even though they weren’t required to show that they were truly novel or even effective. The stage was set for the main event: antibiotics.

The stars of the hearings’ climax ought to have been Henry Welch and Félix Martí-Ibáñez, both of whom had demanded an opportunity to appear and clear their names. Kefauver had taken the two up on their offers, notifying them that they would be given a chance to do so at the hearing scheduled for May 17, 1960. Neither appeared, pleading illnesses that stubbornly, and suspiciously, hung on until the hearings ended in September.

Their absence had little impact on the hearings’ theatrics. At one point, Rand Dixon asked Dr. Perrin Long, a pioneer in the use of sulfa drugs, “Do you think a cost of $17 [for antibiotics] to the average mother or father every time their child has a cold is down to a point where it can be reached even by the needy?” The question made headlines; Dr. Long’s most relevant response—that an antibiotic would be useless at any price for treating a cold—went unsaid. Harry Loynd of Parke-Davis, called to account for both the marginal cost for the active ingredient in Chloromycetin and the battle over the way it had been promoted during the aplastic anemia scare, was a perfect foil for Kefauver. Loynd had never learned to hide his contempt for politicians, and came off not as a no-nonsense executive without the time to suffer fools gladly, but as an arrogant stuffed shirt. And, worse, an evasive one, who fought with Kefauver over every comma in every document, even to the point of arguing whether he had “seen” or simply “been aware” of an ad for Chloromycetin that seemed to underplay its risks.

The twenty-one months that began with John Lear’s first article for the Saturday Review on January 3, 1959, and ended with the close of the Kefauver hearings on September 14, 1960, were as earthshaking, in their own way, as the two years following the transatlantic trip of Howard Florey and Norman Heatley in 1941. Though specific drugs and pharmaceutical companies had for years been the objects of criticism from academic physicians like Maxwell Finland and gadflies like Edgar Elfstrom, in 1959, the public at large remained almost uniformly optimistic about the era of the wonder drugs, in which miracles—from penicillin to the Salk vaccine—had appeared, it seemed, almost every day. By the end of 1960, they would never be quite so enthusiastic again. The most dispiriting news about the wonder drugs, it turned out, wasn’t that they were overpriced; it was that no one knew whether they were really effective. Haskell Weinstein, formerly the medical director of one of Pfizer’s subsidiaries, had revealed the “very prevalent misconception” that the FDA was required to document efficacy. “As a physician I blush with shame at the quality of some of the ‘studies’ done by some of my physician brethren.”

When the Kefauver hearings closed in September 1960, the subcommittee had remarkably little to say about drug pricing or pharmaceutical monopolies. It recommended, instead, expanding the statutory authority of the FDA, which had largely been frozen since 1938. Henceforth, the agency should require proof of efficacy, as well as safety, of all new drugs, and “apply certification procedures to all antimicrobial agents for use in infectious diseases.” In April 1961, Kefauver introduced Senate Bill 1522 along essentially the same lines.

Testimony on the bill would occupy the next seven months. The FDA was—no surprise—a big supporter. More surprising: So were many of the largest pharmaceutical firms, though not Parke-Davis; Harry Loynd was still smarting from his treatment by the senior senator from Tennessee. On the other hand, the American Medical Association opposed Kefauver’s bill most strenuously, because its membership was violently hostile to the idea that drug efficacy could be known by anyone other than the individual physician. Unstated was the concern that the restrictions on pharmaceutical advertising that Kefauver had included in the bill would fall most heavily on JAMA.*

Despite widespread public endorsements, the support of strong majorities in both houses of Congress, and even the pharmaceutical industry itself, SB1522 seemed doomed to die the death of a thousand cuts in committee. And so it might have, but for a scandal more gruesome, and more notorious, than the Elixir Sulfanilamide and aplastic anemia scares combined.

In 1962, Frances Oldham Kelsey had been balancing the costs and benefits of the antibiotic revolution for nearly twenty-five years. In 1938, with the ink barely dry on her doctorate in pharmacology from the University of Chicago, she performed the animal studies that revealed the extent of the damage caused by Massengill’s Elixir Sulfanilamide. Twelve years later, she became an MD as well as a PhD; and, in August 1960, Dr. Kelsey joined the Food and Drug Administration as one of only seven full-time drug reviewers. Her first assignment was an application from Richardson-Merrell, Inc., a drug wholesaler in Cincinnati then best known for the menthol-infused petrolatum known as Vicks VapoRub. The drug for which the company had applied for U.S. marketing rights, to be called Kevadon, had achieved enormous popularity in western Europe as a sedative: one that was safer than barbiturates, and, in addition, was effective as an antinausea drug. The German pharmaceutical firm Chemie Grünenthal—a postwar company that got its start selling penicillin under the Allied occupation—had developed and sold it, first by prescription, then direct to consumers, as Contergan. In the United Kingdom, Distillers Limited marketed it as Distaval. Its generic name was thalidomide.

In 1960, pharmaceutical companies were able to contact FDA reviewers directly as often as they pleased, and Richardson-Merrell pressed Dr. Kelsey for a quick approval to their application, which they expected in a matter of months, given the widespread use of the drug in Europe. In fact, the FDA automatically granted approval for new drugs if a reviewer failed to act on applications within sixty days . . . but action was understood to include any request for additional information. And Frances Kelsey definitely wanted more information. She asked for clinical and animal studies on toxicity. She requested data on the drug’s effect on pregnant women, given that it was being touted as a safe cure for morning sickness. Richardson-Merrell supplied testimonials. They sent more letters, and what purported to be the best research on the new product. Kelsey called it “an interesting collection of meaningless pseudoscientific jargon apparently intended to impress chemically unsophisticated readers.” Drug company representatives visited Kelsey in her office dozens of times. Richardson-Merrell’s executives went over Kelsey’s head, to George Larrick, the commissioner of Food and Drugs. Larrick—the inspector who had mobilized the entire FDA field force to track down virtually every drop of Elixir Sulfanilamide in 1937—backed his reviewer. Every sixty days, Frances Kelsey sent another letter informing Richardson-Merrell that their application still awaited approval.

And so it went, until November 29, 1961, when Chemie Grünenthal sent Richardson-Merrell the first reports of phocomelia—“seal limb,” a birth defect that caused stunted arms and legs, fused fingers and thumbs, and death; mortality rates for the condition approached 50 percent. Phocomelia wasn’t, in the 1960s, unknown. But it had been an extremely rare genetic disorder, with fewer than a thousand reported cases worldwide. No longer. Eight West German pediatric clinics had reported no cases of phocomelia between 1954 and 1958. In 1959, they reported 12. In 1960, there were 83. In 1961, 302. No one needed a sensitive statistical test to tease out the cause.* The mothers of the malformed infants had all taken thalidomide.

By the time the drug was removed from sale at the end of 1961, hundreds of thalidomide babies were struggling for life. Just as horrifying: Tens of thousands of expectant mothers who had taken the sedative spent the last months of their pregnancies consumed by a completely rational fear of how they would end. By the time the last exposed mothers gave birth, the total number of phocomelic infants exceeded ten thousand. Thanks entirely to Frances Kelsey’s stubbornness, fewer than thirty of them had been born in the United States.

The reason for even that small number was that Richardson-Merrell had recruited physicians for “investigational use” of the drug prior to FDA approval, which was not only permissible but condoned by the existing 1938 Food, Drug, and Cosmetic Act. As a result, when the company withdrew its application at the end of 1961, the long tail of thalidomide risk hadn’t yet run its course. Kelsey, very much aware of this, sent the company a letter asking whether any quantity of Kevadon/thalidomide was still in the hands of physicians. The company was unable to provide anything but an embarrassingly incomplete answer; it had distributed more than 2.5 million thalidomide pills to more than a thousand doctors in the United States and had utterly failed to maintain adequate records of who, when, and how much. Most of the expectant mothers in the United States who had been given the sedative by their physicians hadn’t even been told that the drug was experimental.

Despite the tragic stories of victims, and the embarrassing revelations about the holes in the approval process—in 1960 alone, the FDA had received thousands of applications, and it was only by great good luck that Frances Kelsey was the one to whom the Kevadon application had been assigned—thalidomide didn’t really become a scandal until July 15, 1962, when Morton Mintz of the Washington Post published a front-page story, with the headline: “‘HEROINE’ OF FDA KEEPS BAD DRUG OFF MARKET.” Its first sentence read:

This is the story of how the skepticism and stubbornness of a Government physician prevented what could have been an appalling American tragedy, the birth of hundreds or indeed thousands of armless and legless children.

The Post story generated hundreds of comments and opinion pieces throughout the country. On August 8, 1962, Frances Kelsey was honored with the President’s Award for Distinguished Federal Civilian Service; in the words of Senator Kefauver, she had exhibited “a rare combination of factors: a knowledge of medicine, a knowledge of pharmacology, a keen intellect and inquiring mind, the imagination to connect apparently isolated bits of information, and the strength of character to resist strong pressures.” Within weeks, SB1522 was taken off life support, and on August 23 the House and Senate passed the Kefauver-Harris Amendments (the bill had been introduced in the House of Representatives by Oren Harris of Arkansas). On October 10, 1962, Public Law 87-781, an “Act to protect the public health by amending the Federal Food, Drug, and Cosmetic Act to assure the safety, effectiveness, and reliability of drugs,” was signed into law by President John Kennedy. Standing behind him for the traditional signing photo was Frances Oldham Kelsey.

Kefauver-Harris wasn’t the first major piece of federal legislation to recognize that the world of medicine had been utterly transformed since 1938. In 1951, Senator Hubert Humphrey of Minnesota and Representative Carl Durham of North Carolina—both, not at all coincidentally, had been pharmacists before entering political life—cosponsored another amendment that drew, for the first time, a clear distinction between prescription drugs and those sold directly to patients.

Credit: National Institutes of Health/National Library of Medicine

Frances Oldham Kelsey (1914–2015) receiving the President’s Award for Distinguished Federal Civilian Service from President John F. Kennedy

Until the 1950s, the decision to classify a drug as either a prescription drug, requiring physician authorization, or as what is now known as an over-the-counter medication, was entirely at the discretion of the drug’s manufacturer. This was one of the longer-lasting corollaries of the nineteenth-century principle that, because of the sanctity of consumer choice, people had an inalienable right to self-medicate. As a result, the decision to classify a drug as prescription only was just as likely to be made for marketing advantage as safety considerations: An American drug company could, and did, decide that prices could be higher on compounds that were sanctioned by physicians. Predictably, therefore, the same compound that Squibb made available by prescription only could be sold over-the-counter by Parke-Davis.

After Humphrey-Durham, any drug that was believed by the FDA to be dangerous enough to require supervision or likely to be habit forming, or any new drug approved under the safety provision of the 1938 act, would be available only by prescription; further, the drug and any refills were required to carry the statement “Federal law prohibits dispensing without prescription.” All drugs that could be sold directly to consumers, on the other hand, had to include adequate directions for use and appropriate warnings, which is why even a bottle of ibuprofen tells users to be on the lookout for the symptoms of stomach bleeding.

Humphrey-Durham was intended to protect pharmacists from prosecution for violating the many conflicting and ambiguous laws about dispensing drugs. By the end of the 1940s, American pharmaceutical companies were selling more than 1,500 barbiturates, all basically the same, but the regulations governing them barely deserved to be called a patchwork. Thirty-six states required prescriptions; twelve didn’t. Fifteen either prohibited refills or allowed them only with a prescription. And while some pharmacists viewed this as a loophole through which carloads of pills could be driven—one drugstore in Waco, Texas, dispensed more than 45,000 doses of Nembutal, none of them by prescription—others were arrested for what amounted to little more than poor record keeping. Even where pharmacies weren’t attempting to narcotize entire cities, risks didn’t vanish. A Kansas City woman refilled her original prescription (for ten barbiturate pills) forty-three times at a dozen different pharmacies before she was discovered in her home, dead, partially eaten by rats.

In its original draft, Humphrey-Durham had tried to provide more than just a “clear-cut method of distinguishing between ‘prescription drugs’ . . . and ‘over-the-counter drugs.’” In the second draft of the bill, the FDA administrator, “on the basis of opinions generally held among experts qualified by scientific training and experience to evaluate the safety and efficacy of such drug,” was charged with deciding whether a drug was unsafe or ineffective without professional supervision.

By the time the amendment was signed, however, any language about effectiveness had been negotiated away. In 1951, there was no constituency—among patients, physicians, or pharmaceutical companies—urging the FDA to evaluate effectiveness. What had been unattainable in 1951, however, became the law of the land in 1962. New drugs, finally, would have to be certified not merely as safe, but as effective.

The new law didn’t restrict itself to new compounds. The 1962 amendment also required the FDA to review every drug that had been introduced between 1938 and 1962 and assign it to one of six categories: effective; probably effective; possibly effective; effective but not for all recommended uses; ineffective as a fixed combination; and ineffective. Chloramphenicol, for example, was designated as “probably effective” for meningeal infections, “possibly effective” for treatment of staph infections, and, because of the risk of aplastic anemia, “effective but . . .” for rickettsial diseases like typhus, and for plague. The review process, known as DESI (for Drug Efficacy Study Implementation), began in 1966, when the FDA contracted with the National Research Council to evaluate four thousand of the sixteen thousand drugs that the agency had certified as safe between 1938 and 1962.* Nearly three hundred were removed from the market.

In 1963, Frances Kelsey was named to run one of five new branches in the FDA’s Division of New Drugs, the Investigational Drug Branch (now known as the Office of Scientific Investigations). She was tasked with turning the vague language of the Kefauver-Harris Amendments into a rule book. The explicit requirements of the law weren’t actually all that explicit. It required “substantial evidence” of effectiveness that relied on “adequate and well-controlled studies” without actually defining either term. Like the 1938 act, which called for only “adequate tests by all methods reasonably applicable,” the amendments didn’t specify any particular criterion for evaluating either safety or efficacy. It was a statement of goals, not strategies.

Determining which strategies would be most effective was the next step. Though Bradford Hill’s streptomycin trials of 1946 had demonstrated the immense hypothesis-testing value of properly designed randomized experiments, ten years later nearly half of the so-called clinical trials being performed in the United States and Britain still didn’t even have control groups. Though one pharmaceutical company executive after another had appeared before the Kefauver investigators to claim that the huge sums invested in clinical research justified high drug prices, they were spending virtually all of their research dollars on the front end of the process: finding likely sources for antibiotics, for example, then extracting, purifying, synthesizing, and manufacturing them. The resources devoted to discovering whether they actually worked outside the lab were minuscule by comparison: essentially giving away free samples to physicians and collecting reports of their experience. As Dr. Louis Lasagna, head of the Department of Clinical Pharmacology at Johns Hopkins, had told the Kefauver committee, controlled comparisons of drugs were “almost impossible to find.”

Frances Kelsey wasn’t any more inclined to accept the status quo than she was to believe the “meaningless pseudoscientific jargon” that Richardson-Merrell had offered in support of their thalidomide application. In January 1963, even before she was named to head the Investigational Drug Branch, Kelsey presented a protocol for reviewing what was now termed an “Investigational New Drug.” The new system would require applicants for FDA approval to present a substantial dossier on any new drug along with their initial application. Each IND, in Kelsey’s proposed system, would need to provide information on animal testing, for example—not just toxicity, but effectiveness. Pharmaceutical companies would be obliged to share information about the proposed manufacturing process, and about the chemical mechanism by which they believed the new drug offered a therapeutic benefit. And, before any human tests could begin, applicants would have to guarantee that an independent committee at each institution where the drug was to be studied would certify that the study was likely to have more benefits than risks; that any distress for experimental subjects would be minimized; and that all participants gave what was just starting to be known as “informed consent.”*

The truly radical transformation, however, was what the FDA would demand of the studies themselves. Kelsey’s new system specified three sequential investigative stages for any new drug. The first, phase 1 clinical trials, would be used to determine human toxicity by providing escalating doses to a few dozen subjects in order to establish a safe dosage range. Compounds that survived phase 1 would then be tested on a few hundred subjects in a phase 2 clinical trial, intended to discover whether the drug’s therapeutic effect—if any—could be shown, statistically, to be more than pure chance. The final hurdle set out by the 1963 regulation, a phase 3 trial, would establish the new drug’s value in clinical practice: its safety, effectiveness, and optimum dosage schedules. Phase 3 trials would, therefore, require larger groups, generally a few thousand subjects, tested at multiple locations. At the latter two stages, but especially the third, the FDA gave priority to studies that featured randomization, along with experimental and control “arms.” If the new drug was intended to treat a condition for which no standard treatment yet existed, it could be compared ethically against a placebo. If, as was already the case for most infections and an increasing number of other diseases, a treatment already existed, studies would be obliged to test for “non-inferiority,” which is just what it sounds like: whether the effectiveness of the new treatment isn’t demonstrably inferior to an existing one. In either case, the reviewers at the FDA would be far more likely to grant approval if the two arms in an approved study were double-blinded, with neither the investigators nor subjects aware of who was in the experimental or control groups.
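To make the idea of “non-inferiority” concrete, here is a minimal sketch in Python. It is not drawn from the book or from any FDA guidance: it assumes a simple normal-approximation comparison of cure rates, an illustrative ten-point margin, and hypothetical patient counts.

```python
# A minimal sketch (not from the book) of the "non-inferiority" idea: the new
# treatment passes if the plausible range for its cure-rate difference versus
# the existing treatment does not dip below an agreed margin.
# All numbers below are illustrative assumptions, not historical trial data.
from math import sqrt
from statistics import NormalDist

def non_inferior(cures_new, n_new, cures_old, n_old, margin=0.10, alpha=0.05):
    """Normal-approximation check: is the new therapy no worse than the old by more than `margin`?"""
    p_new, p_old = cures_new / n_new, cures_old / n_old
    diff = p_new - p_old
    se = sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
    z = NormalDist().inv_cdf(1 - alpha)      # one-sided 95% confidence bound
    lower_bound = diff - z * se              # worst plausible shortfall of the new drug
    return lower_bound > -margin

# Hypothetical trial: 870/1000 cured on the new drug vs. 880/1000 on the standard.
print(non_inferior(870, 1000, 880, 1000))    # True: within a 10-point margin
print(non_inferior(700, 1000, 880, 1000))    # False: demonstrably inferior
```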

In February 1963, the commissioner of Food and Drugs approved Kelsey’s three-tiered structure for clinical trials. The process of pharmaceutical development would never be the same. It marked an immediate, though temporary, shift of power from pharmaceutical companies to federal regulators. Within weeks of the announcement of the new regulations, virtually every drug trial in the country, from the Mayo Clinic to the smallest pharmaceutical company, was reclassified into one of the three allowable phases. It allowed Frances Kelsey a remarkably free hand in exercising her authority to grant or withhold IND status; to her critics, this led to any number of cases in which she withheld classification based on nothing but a lack of faith in a particular investigator, or her judgment that the proposed drug was either ineffective or dangerous.*

The new requirements, which would remain largely unchanged for at least the next fifty years, permanently altered the character of medical innovation.

The method of validating medical innovation using randomized controlled trials had given the world of medicine a way of identifying the sort of treatments whose curative powers weren’t immediately obvious to clinicians (and, just as important, identifying those that seemed spectacular, but weren’t). Until 1963, however, RCTs had been a choice. The three phases of the newly empowered FDA made them a de facto requirement. Frances Kelsey’s intention was to use the objectivity of clinical trials to simultaneously protect the public and promote innovative therapies. It is unclear whether she understood the price.

One of the underrated aspects of the wave of technological innovation that began with the first steam engines in the eighteenth century—the period known as the Industrial Revolution—was a newfound ability to measure the costs and benefits of even tiny improvements, and so make invention sustainable. Just as improvements in the first fossil-fueled machines could be evaluated by balancing the amount of work they did with the amount of fuel they burned, even small benefits of new drugs and other therapies could be judged using the techniques of double-blinding and randomization. Since almost all potential improvements are by definition small, medicine generally, and the pharmaceutical industry in particular, now had a method for sustaining innovation. No longer would progress wait on uncertain bursts of genius; discovery could now be systematized and even industrialized.

However, there was a giant difference between the methods used to compare mechanical inventions and medical or pharmaceutical treatments. Engineers don’t need to try a new valve on a hundred thousand different pumps to see whether it improves on an existing design. But so long as the RCT was the gold standard for measuring improvement in a drug (or any health technology), small improvements in efficacy would require larger, more time-consuming, and costlier trials. By the arithmetic alone, the value of a treatment that is so superior to its predecessor that it saves ten times more people is apparent after only a few dozen tests. One that saves 5 percent more can require thousands. The smaller the improvement, the more expensive the testing would become.
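The arithmetic can be made explicit. The sketch below is a hypothetical illustration rather than anything from the book: it uses the standard normal-approximation formula for comparing two cure rates, and the specific rates, the 5 percent significance level, and the 80 percent power target are assumptions chosen only to show how trial size balloons as the improvement shrinks.

```python
# A minimal sketch (not from the book) of the sample-size arithmetic behind
# randomized trials: the smaller the improvement, the larger the trial.
# Illustrative cure rates only; standard normal-approximation formula for two proportions.
from statistics import NormalDist

def patients_per_arm(p_old, p_new, alpha=0.05, power=0.80):
    """Approximate subjects needed in each arm of a two-arm randomized trial."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for a 5% two-sided test
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p_old * (1 - p_old) + p_new * (1 - p_new)
    return (z_alpha + z_beta) ** 2 * variance / (p_old - p_new) ** 2

# A drug that cures 95% of patients versus an old standard curing 50%:
print(round(patients_per_arm(0.50, 0.95)))   # ~12 per arm: a few dozen subjects in all
# A drug that is only five percentage points better than the standard:
print(round(patients_per_arm(0.50, 0.55)))   # ~1,562 per arm: thousands of subjects in all
```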

This changed the calculus of discovery dramatically. Selman Waksman’s technique for finding new drugs—sifting through thousands of potential candidates in order to find a single winner—had already virtually destroyed the belief that a brilliant or lucky scientist, working alone (or, more likely, in a relatively small laboratory in a university or hospital), might find a promising new molecule. But demonstrating that it worked would, thanks to Frances Kelsey and Bradford Hill, make the process exponentially more expensive, and riskier. The same economies of scale that had been necessary for the manufacture of the first antibiotics were now required for finding and testing all the ones that would follow. Perversely, the Kefauver hearings, initiated and stage-managed by liberal politicians with no love for big business, had led inexorably to the creation of one of the largest and most profitable industries on the planet.

Engineers calculate failure rates—sometimes they’re known as “failure densities”—to describe phenomena like the increasing probability over time that one of the components of an engine drivetrain will crack up. Pension companies use similar-looking equations to calculate life spans. Medical researchers use them to derive the survival probabilities of patients given different treatment regimens.

Another way of applying the arithmetic of failure rates is to use it to predict the number of promising “new molecular entities” that will actually prove out as therapeutically useful. A pharmaceutical company that identifies a thousand compounds with some potential for the treatment of a disease like Alzheimer’s, for example, and knows that the failure rate in preliminary testing was somewhere between 95 and 99 percent, could guess that between ten and fifty compounds might survive to the next round. Moreover, if the failure rate were constant, then the likelihood of success would increase over time; the longer you spend looking for the next miracle drug, the closer you are to finding it.
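Here is that back-of-the-envelope calculation spelled out, with assumed numbers only (a hypothetical library of a thousand candidate compounds and the 95 to 99 percent failure rates mentioned above); the second half of the sketch shows why a genuinely constant failure rate would make the cumulative odds of success climb the longer the search goes on.

```python
# A minimal sketch (not from the book) of the failure-rate arithmetic described above.
# The candidate counts and failure rates are illustrative assumptions.

candidates = 1000
for failure_rate in (0.95, 0.99):
    survivors = candidates * (1 - failure_rate)
    print(f"{failure_rate:.0%} failure rate -> about {survivors:.0f} compounds survive")
# -> roughly 50 survivors at a 95% failure rate, roughly 10 at 99%

# With a *constant* per-candidate failure rate, the cumulative chance of having
# found at least one winner rises steadily the longer the search continues:
failure_rate = 0.99
for n_tested in (10, 100, 500, 1000):
    p_at_least_one = 1 - failure_rate ** n_tested
    print(f"after {n_tested:4d} candidates: {p_at_least_one:.0%} chance of at least one success")
```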

The relevance to the riskiness of drug development is fairly clear. If pharmaceutical research were characterized by a constant failure rate—even a very high one—it might be expensive, but not particularly risky: At any given moment, the probability of a successful outcome would be known. If, on the other hand, the failure rate were fundamentally variable, then years could be spent on completely fruitless searching.

Drug development, from proof of concept (sometimes called “phase 0”) through phase 3 clinical trials, has never exhibited anything resembling a constant failure rate. This means that it has inevitably grown riskier over time. Many close observers of the phenomenon have argued that this is a reason for transferring the risk of pharmaceutical innovation to society at large, by increasing government support both for basic biomedical research and for testing the products of that research.

There’s no intrinsic reason why government agencies (or not-for-profit institutions like universities) couldn’t fund, or even manage, the phased testing system that Frances Kelsey developed at the FDA more than fifty years ago. However, their involvement would not fundamentally alter the relationship between risks and rewards. For more than a century, society has farmed out the risk of pharmaceutical development, testing, and manufacturing to the institutions willing to undertake it, and those institutions take on the risk only when the potential rewards are large. Inveighing against pharmaceutical company greed just camouflages this unavoidable truth.

The machine of pharmaceutical innovation—one that wouldn’t exist, would never even have been built, but for the antibiotic revolution—is decidedly imperfect, full of inefficiencies and side effects, and incredibly costly to run and maintain. Paul Ehrlich’s side chains, or Dorothy Crowfoot Hodgkin’s X-ray crystallography, or even Norman Heatley’s jury-rigged distillation apparatus were the result of motivated intellectual effort in search of some reward. The motivations haven’t always been particularly noble; Pasteur’s hatred of Germany for the Franco-Prussian War and Ernst Chain’s lust for a Nobel Prize come to mind. Only a very jaundiced observer, though, would think the bargain wholly bad. Winston Churchill famously observed, “It’s been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time. . . .”

The pharmaceutical industry is rightly criticized for spending millions to acquire knowledge that it then uses to find more expensive treatments for existing conditions. Or even to “medicalize” conditions in order to create a market for a new treatment.

On the other hand, consider HAART.

Though the disease was first recognized in 1981 (and given the name AIDS a year later), the virus responsible—a retrovirus eventually named HIV—wasn’t identified until 1983. Even before then, Burroughs Wellcome’s U.S. subsidiary, with its long history of investigating obscure diseases, was researching the new and horrifying disease that killed its hosts by destroying their immune systems and so exposing them to hitherto rare conditions like the cancer known as Kaposi’s sarcoma. In 1983, one of the company’s biochemists, Jane Rideout, started investigating the chemical properties of an antibacterial compound known as azidothymidine: AZT for short.

Simultaneously, other researchers at Burroughs Wellcome had pioneered a dramatically different way of testing new molecules for effectiveness; instead of Selman Waksman’s time-honored trial-and-error method of testing large numbers of promising chemical compounds, the new technique—which would win a Nobel Prize for its inventors—required identifying a chemical that the target pathogen needs to reproduce, and replacing it with an analogue that attracts the pathogen while sabotaging its reproduction. Rideout realized that AZT was a near-perfect analogue for a chemical HIV needs in order to reproduce . . . and Burroughs Wellcome agreed. Only three years after the human immunodeficiency virus was first identified—a pace that recalls the original antibiotic revolution—the Burroughs Wellcome company introduced AZT as the first effective anti-AIDS drug.* A decade later, when Merck received FDA approval for the antiviral drug Indinavir, and a separate set of approvals was granted to drugs known as NRTIs (nucleoside reverse transcriptase inhibitors), the combination therapy known as HAART, for highly active antiretroviral therapy, transformed HIV from a death sentence to a chronic, and treatable, condition.

HIV-positive patients are scarcely alone in their debt to pharmaceutical innovation. Tens, perhaps hundreds, of millions of victims of a thousand diseases from leukemia to river blindness are alive and thriving entirely because of a drug breakthrough. For them, and especially for the literally uncountable number of people whose bacterial infections, from strep throat to typhus to anthrax, were cured by a ten-day regimen of antibiotics, the bargain probably seems an extraordinarily one-sided one. Like Anne Miller and Patricia Thomas, they were, and are, living, breathing evidence that Joseph Lister’s dream came true.


