EIGHT
After Selman Waksman and Albert Schatz’s discovery, every pharmaceutical company on the planet began obsessively collecting dirt from as many exotic locations as possible in order to improve the odds of finding the next wonder drug in situ. The method was replicated again and again, because it worked.
The organism that produced the crude exudate that would become the world’s next great antibiotic was discovered by Abelardo Aguilar, a Filipino physician employed by Eli Lilly, who found a promising soil sample in the country’s Iloilo province and, in 1949, sent it to James McGuire at Eli Lilly’s Indianapolis headquarters for testing. The sample contained yet another of the ridiculously fecund Streptomyces, this time S. erythreus: the source for erythromycin, the first of the macrolide antibiotics, compounds that were effective against the same Gram-positive pathogens as penicillin, though via a different mechanism: The macrolides act by inhibiting the way the pathogens make critical proteins, rather than by corroding their cell walls.
Lilly was, compared to upstarts like Pfizer and Merck, very much an old-line American drug company. The firm had been founded in 1876 by Colonel Eli Lilly as “the only House in the West devoted exclusively to the Manufacture and Sale of PHARMACEUTICAL GOODS,” which, at the time, largely consisted of botanicals and herbals containing components with memorable names like Bear’s Foot, Scullcap, and Wormseed. His grandson, also named Eli Lilly, graduated from the Philadelphia College of Pharmacy in 1907 (the same year Paul Ehrlich first described a drug that would attack disease-causing microbes without killing their hosts as a “magic bullet”) and joined the company as the de facto director of production shortly thereafter. The younger Lilly was, in some ways, a midwestern version of George Merck: a disciplined and successful twentieth-century industrialist whose enthusiasm for his corporation’s larger mission seems to have been as uncomplicated, sincere, and sentimental as the poetry of his fellow Hoosier James Whitcomb Riley. During the Second World War, when the company was furiously producing plasma for American troops overseas, he famously observed that he “didn’t think it was the right thing for anybody to make any profit on blood which has been donated.”
It has been easy for Lilly’s biographers to emphasize his wide though wonky interests. An early enthusiast for the time-motion studies of Frederick Winslow Taylor, Lilly was also, in no particular order, an amateur archaeologist with a special interest in the Native American cultures of his much-loved home state of Indiana; a sophisticated art collector, largely of Chinese paintings and pottery; a compulsive writer of childish rhymes; a devotee of uplifting self-improvement manuals and the music of Stephen Foster; and, for decades, the patron of choice for leaders of now-forgotten academic fads.* He was also, during his lifetime, one of the half dozen most generous American philanthropists. Had he done nothing more than initiate a meeting with Frederick Banting, J. J. R. Macleod, and Charles Best at the University of Toronto in 1922, his legacy would be secure. Lilly persuaded them to join in what was then a groundbreaking partnership in developing their discovery—insulin—into a commercial product, which the company launched as Iletin in 1923. For doing well by doing good, it’s hard to top; as late as 1975, Eli Lilly still supplied the lifesaving compound to three-quarters of the entire American market.
Insulin was scarcely Lilly’s only great innovation prior to the 1950s. Through the 1940s, the company’s labs produced the sedative Tuinal and Merthiolate, a widely used antiseptic. Sales rose from $13 million in 1932 to $115 million in 1948 (a year in which Eli Lilly thought the profits—21.7 percent—were “unreasonably high”). But while the company was part of the penicillin consortium, and was, for a time, the number one distributor of Merck’s version of streptomycin, it was not one of the major players in the first wave of the antibiotic revolution—not until Abelardo Aguilar’s samples arrived in Indianapolis.
Three years later, McGuire applied for a patent on the new drug, which he named erythromycin, a “novel compound having antibiotic properties.” It would take decades before the compound could be successfully synthesized; Robert Burns Woodward (who would be credited, posthumously, with solving the problem) wrote in 1956, “Erythromycin, with all our advantages, looks at present quite hopelessly complex,” but that did nothing to dissuade Lilly from producing it using the tried-and-true fermentation method. In 1953, Lilly started selling the drug under the name Ilotycin.
Erythromycin was, and would remain, a powerful weapon in the battle against infectious disease. But narrow-spectrum antibiotics like Ilotycin were never going to attract the level of enthusiasm of the new broad-spectrum drugs like the tetracyclines. Broad-spectrum antibiotics accounted for large percentages—as much as half—of the profits of Pfizer, Abbott, Bristol Laboratories, Squibb, and Upjohn. Those five companies, all of whom were selling versions of tetracycline, were splitting about two-thirds of the total market for broad-spectrum drugs.
The other third? In early 1950, a few months before Pfizer’s John McKeen joined with Arthur Sackler to transform drug advertising forever, McKeen offered the marketing rights for the oxytetracycline compound to one of his competitors. Its president, however, turned him down, believing that Terramycin was a direct competitor to his own blockbuster broad-spectrum antibiotic. The drug was Chloromycetin, and the company Parke-Davis.
Parke-Davis was then one of America’s oldest and largest manufacturers of ethical drugs, compounds that were subject to patent—and, therefore, confusingly, the opposite of “patent medicines”—clearly labeled, and prescribed by physicians. The company’s origins date back to 1866, when Hervey Coke Parke, a onetime copper miner and hardware store owner, joined the Detroit drug business of Dr. Samuel Pearce Duffield. Duffield, like New York’s Edward R. Squibb, had started a business to serve the needs of the Grand Army of the Republic during the Civil War: distilling alcohol and selling “ether, sweet spirits of nitre [ethyl nitrite, in a highly alcoholic mixture; the spirits were used to treat colds and flu], liquid ammonium [sic], Hoffman’s [sic] anodyne [ether and alcohol, used as a painkiller], mercurial ointment, etc.” In 1867, a twenty-two-year-old salesman named George Solomon Davis became the firm’s third partner; and, when Duffield retired in 1871, the company was incorporated as Parke-Davis and Company. Parke was its first president; Davis its general manager.
Almost immediately, Parke-Davis established a reputation for seeking out medicines in exotic corners of the globe. In 1871 alone, the company financed expeditions to Central and South America, Mexico, the Pacific Northwest, and the Fiji islands. In January 1885, George Davis read Sigmund Freud’s infamous article, “Über Coca,” in which the young Viennese neurologist (not yet the father of psychoanalysis) wrote:
The psychic effect of cocaïnum muriaticum in doses of 0.05–0.10g consists of exhilaration and lasting euphoria, which does not differ in any way from the normal euphoria of a healthy person. . . . One is simply normal, and soon finds it difficult to believe that one is under the influence of any drug at all.
Davis immediately dispatched Henry Rusby, a doctor and self-described “botanist and pharmacognosist,” to South America to make “a critical study of the different varieties of coca.”
Rusby’s expedition, “involving four thousand miles of travel by canoe and raft on the Madeira and Amazon rivers, and occupying eleven months of suffering and danger escaping narrowly from death,” became part of the company’s founding myth, and the beginning of its prosperity. Shortly after his return, the company was using cocaine in dozens of different Parke-Davis products, including coca-leaf cigarettes, wine of coca, and cocaine inhalants. (Davis even hired Freud himself to perform a comparison of Parke-Davis’s cocaine products against Merck’s.)
Cocaine made the company, but wasn’t its last success story. Within twenty years, it had introduced fifty new botanically based drugs to the (still very informal) United States Pharmacopeia. One of them, Damiana et Phosphorus cum Nux—damiana is a psychoactive shrub that grows wild in Texas and Mexico; nux is nux vomica, or strychnine—was guaranteed to “revive sexual existence.” Others, marketed as “Duffield’s Concentrated Medicinal Fluid Extracts,” included ingredients like aconite, belladonna, ergot (the cause of St. Anthony’s Fire), arsenic, and mercury. All were extremely pure—the firm’s motto was Medicamenta Vera: “True Medicines”—but are also reminders of the danger in believing “natural” equals “safe.” Virtually every page in the catalog of Parke-Davis medications included a compound as hazardous as dynamite, though far less useful.
By the early twentieth century, the company had expanded into areas slightly less dependent on Indiana Jones–like adventuring. In the late 1890s, it was selling its version of Emil Behring’s diphtheria antiserum; in 1900, a Parke-Davis chemist, the Japanese-born, Glasgow-educated Jokichi Takamine, isolated adrenaline (also known as epinephrine), which the company marketed as Adrenalin, a drug whose ability to constrict blood vessels made it invaluable to surgeons, especially eye surgeons. The company expanded nationally and internationally, opening offices in Canada, Britain, Australia, and India, and, in 1902, it opened the country’s first full-scale pharmaceutical research laboratory, blocks from its Detroit headquarters. In 1938, Parke-Davis introduced Dilantin, the first reliable treatment for epilepsy; in 1946, Benadryl, the first effective antihistamine, which had been developed by a onetime University of Cincinnati chemist named George Rieveschl, who left academia for a research position at Parke-Davis.
Davis and Parke’s successors never lost a taste for treasure hunting, which might explain why they were uninterested in Pfizer’s offer. They had a broad-spectrum antibiotic of their own.
The development of Parke-Davis’s signature antibiotic began at roughly the same moment in time that the Office of Scientific Research and Development was assembling the participants in the penicillin project. In July 1943, Oliver Kamm, Parke-Davis’s director of research, met Paul Burkholder, the Eaton professor of botany at Yale. Six months later, Parke-Davis agreed to fund his research.
It was only a little more than a year before the company’s investment in Burkholder—and its long-standing presence in South America—paid off. In April 1945, a month before the surrender of Germany, Derald George Langham,* a plant geneticist simultaneously teaching at the University of Caracas and working as a Parke-Davis consultant, sent Burkholder a crate full of bottles containing compost he had collected from the farm of an émigré Basque farmer named Don Juan Equiram. Hundreds of different soil-dwelling bacteria were isolated from the sample. Most of them were familiar, as was true, too, of the more than seven thousand samples Burkholder received in a single year. Culture A65, however, was different: an entirely new species, a cousin to Waksman’s actinomycetes. Burkholder named it Streptomyces venezuelae and proceeded to subject it to more or less the same tests that Waksman and Schatz had performed on their soil dwellers: samples of S. venezuelae were placed in vertical strips on an agar-containing Petri dish, while colonies of pathogenic bacteria were aligned horizontally. From the warp and weft, it was hoped, a new antibiotic-producing organism would be woven.
Burkholder sent a colony of S. venezuelae to John Ehrlich at Parke-Davis.
By the time Ehrlich joined the company in December 1944, he had already collected degrees in phytopathology, mycology, and forest pathology; had worked for the Bartlett Tree Expert Company as an arborist; and served as deputy director of the penicillin program at the University of Minnesota, where he had led the team irradiating variants of the Penicillium mold. At Parke-Davis, he had recruited the company’s entire sales force as field researchers, issuing them plastic bags in which they were told to collect soil samples.* Thousands came in, from golf courses, flower gardens, and riverbeds, but until Burkholder’s package arrived, none had yielded anything particularly interesting.
Culture A65 was far more than interesting. Quentin Bartz, one of the company’s research chemists, isolated the active ingredient using a proprietary technique developed at Parke-Davis that could rapidly reduce thousands of promising molecules to a few dozen. Bartz mixed cultures of A65 and water with fourteen different solvents, each at different levels of acidity. He then removed the water and solvent, filtered what was left (which told him the size of the molecule), and checked whether it adhered to a specific substance (which told him its likely structure). In March 1946, he had a crystalline substance that was effective against not just Gram-positive pathogens, but Gram-negative pathogens as well. It was well tolerated, highly potent against pathogens that were unaffected by either penicillin or streptomycin, and, as an unexpected bonus, could be taken orally, rather than by injection. The chemists at Parke-Davis gave it a nickname: “the Little Stranger.”
By February 1947, they had even better news. Chemist Mildred Rebstock had derived the structure of the A65 molecule, whose key component was a ring of the organic compound nitrobenzene. Nitrobenzene had been used for decades as a precursor to the aniline dyes that had been so important to the original sulfanilamides like Prontosil. And nitrobenzene wasn’t just familiar; it was simple. Parke-Davis had found a molecule that was far less structurally complex than penicillin, or streptomycin, or erythromycin. This suggested that, unlike its predecessors (or its immediate successor, chlortetracycline), it had the potential to be synthesized rather than grown in fermentation tanks. If so, it could be produced at considerably less expense and, more important, far greater consistency. By November, Rebstock delivered a completely synthetic and active version of the molecule, which was generically known as chloramphenicol. The company named it Chloromycetin.
Each of the golden age antibiotics has been, from the moment of its discovery to today, most closely associated with a hitherto untreatable disease. Just as penicillin performed its first miracles on septicemia, and streptomycin was the long-awaited cure for tuberculosis, Chloromycetin (or chloramphenicol) was greeted enthusiastically mostly because of its activity against insect-borne bacterial diseases, particularly typhus.
Epidemic typhus is a most adept and subtle killer. Victims unlucky enough to encounter a louse carrying a colony of the Gram-negative bacteria known as Rickettsia prowazekii* typically infect themselves: The lice carry bacteria in their digestive systems and excrete them when they defecate. Humans scratch the lice bites, thus sneaking the pathogen-carrying feces past their skin and into their bloodstreams. The result is the appearance of flu-like symptoms within days: fevers, chills, and aches. A few days later, a rash appears on the victim’s torso and rapidly spreads to arms and legs. Then, if the immune system fails to destroy it, the disease progresses to acute meningoencephalitis: an inflammation that simultaneously attacks both the membranes surrounding the brain and spinal cord, and the brain itself, causing delirium and light sensitivity, eventually leading to coma. If untreated, typhus kills between 10 and 60 percent of those infected.
Typhus has been a scourge of humanity since at least the fifteenth century, and very likely for many centuries before. Epidemics were common throughout early modern Europe, especially in conditions where large numbers of susceptible hosts were placed in the path of lice, such as prisons and among armies on campaign.* During the Thirty Years’ War, typhus killed as many as one German in ten. A little less than two centuries later, it killed more soldiers in Napoleon’s Grande Armée during the retreat from Moscow than the Russian army did. A century after that, a typhus epidemic in the new Soviet Union produced more than twenty million cases, and at least two million fatalities.
The U.S. Army had a long history of concern about the impact of epidemic typhus. The Army Medical Corps dusted more than a million Neapolitan civilians with lice-killing powder enriched with DDT in 1943 out of fear of a typhus outbreak, and the fear didn’t vanish at the end of the war. So when the army learned that Parke-Davis had a promising rickettsial antibiotic under development, they were eager to put it through its paces. From late 1946 to early 1947, Dr. Joseph Smadel of the Department of Virus and Rickettsial Diseases at Walter Reed Army Hospital ran A65 through a series of animal experiments, followed by clinical trials. In December 1947, he and two other Walter Reed physicians dosed themselves over a ten-day period with pills of the newly named Chloromycetin in order to discover whether the drug was excreted safely and completely, and more to the point, whether a stable concentration could be maintained in the body. Fortunately for Parke-Davis (and even more so for the physicians themselves), the drug passed both tests with flying colors.
While the Walter Reed doctors were self-testing, Chloromycetin was also getting a field test, one that, given Parke-Davis’s history, was taking place in South America. In late November 1947, one of the company’s clinical investigators, Dr. Eugene Payne, had arrived in Bolivia, which was then suffering through a typhus epidemic that was killing between 30 and 60 percent of its victims. Payne brought virtually all the chloramphenicol then available in the world (about 200 grams, enough to treat about two dozen patients) and set up a field hospital in Puerto Acosta. Twenty-two patients, all Aymara Indians, were selected for treatment, with another fifty as controls. The results were very nearly miraculous. In hours, patients who had started the day with fevers higher than 105° were sitting up and asking for water. Not a single treated victim died. Out of the fifty members of the control group, only thirty-six survived, a mortality rate of nearly 30 percent.
It was the first of many such field tests. In January 1948, Smadel and a team from Walter Reed recorded similar success in Mexico City. Two months later, they did the same in Kuala Lumpur. Along the way, they discovered that Chloromycetin was effective against the North American rickettsial disease known as Rocky Mountain spotted fever (which can kill more than 20 percent of untreated victims) and the chlamydial disease known variously as parrot fever, or psittacosis. They also found, more or less accidentally*—a patient with typhuslike symptoms turned out to have typhoid instead—that Chloromycetin cured it as well.
The model that had been pioneered by penicillin, and refined for streptomycin and the tetracyclines, was now a well-oiled industrial machine: Microbiologists make a discovery, chemists refine it, and physicians demonstrate its effectiveness in animals and humans. It was time to gear up for industrial production of the new miracle drug. Though the Rebstock experiments had shown how to synthesize the drug (and an improved method had been patented by other Parke-Davis chemists), the drug was still being produced through 1949 both by fermentation and synthesis, the former in a 350,000-square-foot building containing vats originally built to cultivate streptomycin and penicillin.
The process had come a considerable way since those early experiments at the Dunn, and even Pfizer’s converted Brooklyn ice factory. A rail line was built directly to the plant and raw materials arrived on a siding, just as if the railroad was delivering steel for an automobile factory. Every week Parke-Davis’s workforce unloaded tanker cars full of nutrients like wheat gluten, glycerin, and large quantities of salt; also sulfuric acid, sodium bicarbonate, amyl acetate, and deionized water. The S. venezuelae cultures that they were intended to feed were produced in separate laboratories, where they were stored until needed in sterilized earth, cultured on demand, suspended in a solution of castile soap, and held in refrigerators.
The feeding process was just as industrial. Nutrient solution was poured into seven 50-gallon steel tanks plated with nickel and chromium as anticorrosives, and then sterilized by heating to 252°. Streptomyces venezuelae was then injected, the stew agitated using the same washing-machine technique pioneered at the Northern Lab only a few years before, and held at a controlled temperature of 86° for twenty-four hours. The whole mix was then transferred to 500-gallon tanks, and then to 5,000-gallon tanks—each one seventeen feet high and nearly eight feet in diameter, there to ferment.
Following fermentation, the broth was filtered to remove the no-longer-needed S. venezuelae bacteria, reducing 5,000 gallons of fermentation broth to 900 gallons of amyl acetate, itself evaporated down to 40 gallons, which was separated and condensed into 2 gallons of solution, from which—after more than three weeks, and the labor of hundreds of Parke-Davis chemists, engineers, and technicians—the antibiotic crystals could be extracted.
On December 20, 1948, Parke-Davis submitted New Drug Application number 6655 to the Food and Drug Administration, asking that the agency approve chloramphenicol and allow the company to bring it to market. On January 12, 1949, the FDA granted the request, authorizing it as “safe and effective when used as indicated.” On March 5, 1949, Collier’s magazine hailed it as “The Greatest Drug Since Penicillin.” By 1951, chloramphenicol represented more than 36 percent of the total broad-spectrum business, and Parke-Davis had it all to itself. The Detroit-based firm had become the largest pharmaceutical company in the world, with more than $55 million in annual sales from Chloromycetin alone.
This was an enviable position. But also a vulnerable one.
—
“Blood dyscrasia” is an umbrella term for diseases that attack the complex system by which stem cells in the human bone marrow produce red and white blood cells: erythrocytes, leukocytes, granulocytes, and platelets. Dyscrasias can be specific to one sort—anemia is a deficiency in red blood cells, leukopenia in white—or more than one. Aplastic anemia, a blood dyscrasia first recognized by Paul Ehrlich in 1888, refers to depletion of all of them: of every cellular blood component. The result is not just fatigue from a lack of oxygen distribution to cells, or rapid bruising, but a complete lack of any response to infection. Aplastic anemia effectively shuts down the human immune system.
During the first week of April 1951, Dr. Albe Watkins, a family doctor practicing in the Southern California suburb of Verdugo Hills, submitted a report to the Los Angeles office of the FDA. The subject was the Chloromycetin-caused (he believed) aplastic anemia in his nine-year-old son James, who had received the antibiotic while undergoing kidney surgery, and several times thereafter. On April 7, the LA office kicked it up to Washington, and the agency took notice.
Meanwhile Dr. Watkins, a veteran of the Coast Guard and the U.S. Public Health Service, was making Chloromycetin his life’s work: writing to the Journal of the American Medical Association and to the president and board of directors of Parke-Davis. His passion was understandable; in May 1952, James Watkins died. Dr. Watkins closed his practice and headed east on a crusade to bring the truth to the FDA and AMA. In every small town and medium-sized city in which he stopped, he called internists, family physicians, and any other MD likely to have prescribed Chloromycetin, carefully documenting their stories.
Albe Watkins was the leading edge of a tidal wave, but he wasn’t alone. In January 1952, Dr. Earl Loyd, an internist then working in Jefferson City, Missouri, had published an article in Antibiotics and Chemotherapy entitled “Aplastic Anemia Due to Chloramphenicol,” which sort of tipped its conclusion. Through the first half of 1952, dozens of clinical reports and even more newspaper articles appeared, almost every one documenting a problem with Chloromycetin. Many of them all but accused Parke-Davis of murdering children.
To say this was received with surprise at Parke-Davis’s Detroit headquarters is to badly understate the case. During the three years that Chloromycetin had been licensed for sale, it had been administered to more than four million people, with virtually no side effects.
In the fall of 1952, Albe Watkins made it to Washington, DC, and a meeting with Henry Welch, the director of the FDA’s Division of Antibiotics. Dr. Watkins demanded action. He was trying to kick down a door that had already been opened; Welch had already initiated the first FDA-run survey of blood dyscrasias.
The survey’s findings were confusing. Detailed information on 410 cases of blood dyscrasia had been collected, but it wasn’t clear that chloramphenicol was the cause of any of them. In more than half the cases—233—the disease had appeared in patients who had never taken the drug. In another 116, additional drugs, sometimes five or more, had been prescribed. Only 61 of the victims had taken chloramphenicol alone, and all of them were, by definition, already sick. The researchers had a numbers problem: Aplastic anemia is a rare enough disease that it barely shows up in populations smaller than a few hundred thousand people. As a result, the causes of the disease were very difficult to identify in the 1950s (and remain so today).
Finally, just to further complicate cause and effect, chloramphenicol-caused aplastic anemia, if it existed at all, wasn’t dose dependent. This was, to put it mildly, rare; ever since Paracelsus, medicine had recognized that “the dose makes the poison.” It not only means that almost everything is toxic in sufficient quantities; it also means that virtually all toxic substances do more damage in higher concentrations. This dose-response relationship was as reliable for most causes of aplastic anemia as for any other ailment. Benzene, for example, which is known to attack the bone marrow, where all blood cells are manufactured, is a reliable dose-related cause of aplastic anemia; when a thousand people breathe air containing benzene in proportions greater than 100 parts per million, aplastic anemia will appear in about ten of them. When the ratio of benzene to air drops below 20 parts per million, though, the incidence of the disease falls off dramatically: only one person in ten thousand will contract it.
Not chloramphenicol, though. A patient who was given five times more of the drug than another was no more likely to get aplastic anemia. Nor was the drug, like many of the pathogens it was intended to combat, hormetic—that is, it wasn’t beneficial in small doses and only dangerous in higher ones. The effect was almost frustratingly random. Some people who took chloramphenicol got aplastic anemia. Most didn’t. No one knew why.
Even so, Chester Keefer of Boston University, the chairman of the Committee on Chemotherapeutics and Other Agents of the National Research Council during the Second World War (and the man who had been responsible for penicillin allocation), “felt that the evidence was reasonably convincing that chloramphenicol caused blood dyscrasias [and that] it was the responsibility of each practicing physician to familiarize himself with the toxic effects of the drug.” In July, after recruiting the NRC, a branch of the National Academy of Sciences, to review the findings, FDA Deputy Commissioner George Larrick phoned Homer Fritsch, an executive vice president at Parke-Davis, to tell him, “We can’t go on certifying that the drug is safe.”
Fritsch might have been concerned that the FDA was preparing to ban Chloromycetin. He needn’t have worried, at least not about that. At the FDA’s Ad Hoc Conference on Chloramphenicol, virtually every attendee believed the drug’s benefits more than outweighed its risks. Even Maxwell Finland, who had found the early reports on Aureomycin to be overly enthusiastic, endorsed chloramphenicol’s continued use. The Division of Antibiotics recommended new labeling for the drug, but no restrictions on its distribution. Nor did it recommend any restriction on the ability of doctors to prescribe it as often, and as promiscuously, as they wished. The sacrosanct principle of noninterference with physician decisions remained.*
If this sounds like a regulatory agency punting on its responsibility, there’s a reason. Even with the reforms of 1938, which empowered the FDA to remove a product from sale, the authority to do so had rarely been used. Instead, the agency response, even to a life-threatening or health-threatening risk, was informational: to change the labeling of the drug. In 1953, the FDA issued a warning about the risk of aplastic anemia in the use of chloramphenicol, but offered no guidelines on prescribing.
The result was confusion. A 1954 survey by the American Medical Association, in which 1,448 instances of anemia were collected and analyzed, found “no statistical inferences can be drawn from the data collected.” And, in case the message wasn’t clear enough, the AMA concluded that restricting chloramphenicol use “would, in fact, be an attempt to regulate the professional activities of physicians.”
Except the “professional activities of physicians” were changing so fast as to be unrecognizable. The antibiotic revolution had given medicine a tool kit that—for the first time in history—actually had some impact on infectious disease. Physicians were no longer customizing treatments for their patients. Instead, they had become providers of remedies made by others. Before penicillin, three-quarters of all prescriptions were still compounded by pharmacists using physician-supplied recipes and instructions, with only a quarter ordered directly from a drug catalog. Twelve years later, nine-tenths of all prescribed medicines were for branded products. At the same moment that their ability to treat patients had improved immeasurably, doctors had become completely dependent on others for clinical information about those treatments. Virtually all of the time, the others were pharmaceutical companies.
This isn’t to say that the information coming from Parke-Davis was inaccurate, or that clinicians didn’t see the drug’s effectiveness in their daily practice. Chloromycetin really was more widely effective than any other antibiotic on offer: It worked on many more pathogens than penicillin and had far fewer onerous side effects than either streptomycin, which frequently damaged hearing, or tetracycline, which was hard on the digestion.* Chloramphenicol, by comparison, was extremely easy on the patient, with all the benefits and virtually none of the costs of any of its competitors.
Nonetheless, because of the National Research Council report, and the consequent labeling agreement, the company’s market position took a serious tumble. Sales of Chloromycetin, which accounted for 40 percent of the company’s revenues and nearly three-quarters of its profits, fell off the table. Parke-Davis had spent $3.5 million on a new plant in Holland, Michigan, built exclusively to make the drug; in the aftermath of the report, the plant was idled. The company had to borrow money in order to pay its 1952 tax bill. In September 1953, Fortune magazine published an article that described the formerly dignified company as “sprawled on the public curb with an inelegant rip in its striped pants.”
Parke-Davis attempted to take the high road, publishing dozens of laudatory studies and estimating the risk of contracting aplastic anemia after taking Chloromycetin at anywhere from 1 in 200,000 to 1 in 400,000.* But the company was fighting with the wrong weapons. In any battle between clinical reports of actual suffering and statistical analyses, the stories were always going to win, especially when the disease in question tends to strike otherwise healthy children and adolescents. The papers, articles, and newspaper reports of the day reveal how many of the discoveries of aplastic anemia were isolated or anecdotal: Dr. Louis Weinstein of Massachusetts gave a speech before his state medical society in which he revealed he’d heard of—heard of—forty cases. The Los Angeles Medical Association reported on two cases, one fatal. Albe Watkins, when he made his famous visit to Henry Welch at the FDA, had collected only twelve documented cases.
Even the objective statistics were problematic. In 1949, the year chloramphenicol was approved for sale, the most reliable number of reported cases of aplastic anemia was 638. Two years later—after millions of patients had received the antibiotic, but before Albe Watkins began his crusade—the number was 671. That increased to 828 in 1952, but most of the 23 percent increase in a single year was almost certainly due to heightened awareness of a disease most physicians—including Albe Watkins—had never encountered before. Even more telling: The increase in blood dyscrasias where chloramphenicol was involved was no greater than where it wasn’t. That is, aplastic anemia was on the increase with or without chloramphenicol.
Chloromycetin’s cause-and-effect relationship with aplastic anemia may have been tenuous, but its competitors had no such relationship at all. As a result, Parke-Davis was compelled to face the uncomfortable fact that Terramycin and Aureomycin had a similar spectrum of effectiveness, were produced by equally respected companies, and, rightly or wrongly, weren’t being mentioned in dozens of newspaper articles and radio stories as killers of small children.
What really put Parke-Davis in the FDA’s crosshairs weren’t the gory newspaper headlines, or, for that matter, the NRC study. It was the company’s detail men.
The etymology of “detail man” as a synonym for “pharmaceutical sales rep” can’t be reliably traced back much earlier than the 1920s. Though both patent medicine manufacturers and ethical pharmaceutical companies like Abbott, Squibb, and Parke-Davis employed salesmen from the 1850s on, their job was unambiguous: to generate direct sales. As such, they weren’t always what you might call welcome; in 1902, William Osler, one of the founders of Johns Hopkins and one of America’s most famous and honored physicians, described “the ‘drummer’ of the drug house” as a “dangerous enemy to the mental virility of the general practitioner.”
When Osler wrote that, however, he was describing a model that was already on its way out. Though doctors in the latter half of the nineteenth century frequently dispensed drugs from their offices (and so needed to order them from “drummers”), by the beginning of the twentieth they were far more likely to supply their patients through local pharmacies. Pharmaceutical companies, in response, directed their sales representatives to “detail” them—that is, to provide doctors with detailed information about the company’s compounds. By 1929, the term was already in wide circulation; an article in the Journal of the American Medical Association observed, “in the past, when medical schools taught much about drugs and little that was scientific about the actions of drugs [that is, doctors never learned why to choose this drug rather than that one], physicians were inclined to look to the pharmaceutic [sic] ‘detail man’ for instruction in the use of medicines.”
At the time, there were probably only about two thousand detail men in the United States. By the end of the 1930s, there were more of them, but the job itself hadn’t changed all that much. In 1940, Fortune magazine wrote an article (about Abbott Laboratories) that described the basic bargain behind detailing. In return for forgoing what was, in 1940, still a very lucrative trade in patent medicines, ethical drug companies were allowed a privileged position in their relationships with physicians. They didn’t advertise to consumers; their detail men didn’t take orders. They weren’t anything as low-rent as “salesmen.”
Or so they presented themselves to the physicians whose prescription pads were the critical first stop on the way to an actual sale. In the same year as the Fortune article, Tom Jones (a detail man for an unnamed company) wrote a book of instructions for his colleagues, in which he cheerfully admitted, “Detailing is, in reality, sales promotion, and every detail man should keep that fact constantly in mind.”
With the antibiotic revolution of the 1940s, the process of detailing, and the importance of the detail man, changed dramatically. A 1949 manual for detail men (in which they were described as “Professional Service Pharmacists”) argued, “The well-informed ‘detail-man’ is one of the most influential and highly respected individuals in the public-health professions. His niche is an extremely important one in the dissemination of scientific information to the medical, pharmaceutical, and allied professions. . . . He serves humanity well.”
He certainly provided a service to doctors. In 1950, about 230,000 physicians were practicing in the United States, and the overwhelming majority had left medical school well before the first antibiotics appeared. This didn’t mean they hadn’t completed a rigorous course of study. The 1910 Flexner Report—a Carnegie Foundation–funded, American Medical Association–endorsed review of the 155 medical schools then operating in the United States—had turned medical education into a highly professional endeavor.* But while doctors, ever since Flexner, had been taught a huge number of scientific facts (one of the less than revolutionary recommendations of the report was that medical education be grounded in science), few had really been taught how those facts had been discovered. Doctors, then and now, aren’t required to perform scientific research or evaluate scientific results.
Before the first antibiotics appeared, this wasn’t an insuperable problem, at least as it affected treating disease. Since so few drugs worked, the successful practice of medicine didn’t depend on picking the best ones. After penicillin, streptomycin, and chloramphenicol, though, the information gap separating pharmaceutical companies from clinicians became not only huge, but hugely significant. Detail men were supplied with the most up-to-date information on the effectiveness of their products—not only company research, but also reprints of journal articles, testimonials from respected institutions and practitioners, and even FDA reports. Doctors, except for those in academic or research settings, weren’t. In 1955, William Bean, the head of internal medicine at the University of Iowa College of Medicine, wrote, “A generation of physicians whose orientation fell between therapeutic nihilism and the uncritical employment of ever-changing placebos was ill prepared to handle a baffling array of really powerful compounds [such as the] advent of sulfa drugs, [and] the emergence of effective antibiotics. . . .”
The detail man was there to remove any possibility of confusion. And if, along the way, he could improve his employer’s bottom line, all the better. As the same 1949 manual put it, “The Professional Service Pharmacist’s job is one of scientific selling in every sense of the word. . . . He must be a salesman first, last, and always.”
In general, doctors in clinical practice thought the bargain a fair one. Detail men were typically welcomed as pleasant and well-educated information providers, who, incidentally, also provided free pens, lunches, and office calendars in quantity.* Parke-Davis, in particular, hired only certified pharmacists for their own detailing force, and it was said that a visit from one of them was the equivalent of a seminar in pharmacology.
In 1953, when the Chloromycetin story blew up, Harry Loynd was fifty-five years old, and had spent most of his adult life selling drugs, from his first part-time job at a local drugstore to a position as pharmacist and store manager in the Owl Drug Company chain. He joined Parke-Davis as a detail man in 1931, eventually rising to replace Alexander Lescohier as the company’s president in 1951. He was aggressive, disciplined, autocratic, impatient with mistakes, and possessed of enormous energy.
However, unlike his predecessor or most of his fellow industry leaders, Loynd had little use for the medical profession. At one sales meeting, surrounded by his beloved detail men, he told them, “If we put horse manure in a capsule, we could sell it to 95 percent of these doctors.” And when he said “sell,” he didn’t mean “advertise.” Ads in magazines like JAMA were fine, in their place; they were an efficient way of reaching large numbers of physicians and other decision makers. But advertising wasn’t able to build relationships, or counter objections, or identify needs. For that, there was nothing like old-fashioned, face-to-face selling. Parke-Davis wouldn’t use the clever folks at the William Douglas McAdams ad agency. Loynd was a salesman through and through, and he believed that Parke-Davis’s sales force wasn’t just a source of its credibility with doctors, but its biggest competitive advantage.
Even before the FDA announced its labeling decision, Loynd was spinning it as a victory, issuing a press release that said—accurately, if not exhaustively so—“Chloromycetin has been officially cleared by the FDA and the National Research Council with no restrictions [italics in original] on the number or the range of diseases for which Chloromycetin can be administered. . . .” Doctors all over the country received a letter using similar language, plus the implication that other drugs were just as complicit in cases of aplastic anemia as Chloromycetin. Most important: The Parke-Davis sales force was informed, apparently with a straight face, that the National Research Council report was “undoubtedly the highest compliment ever tendered the medical staff of our Company.” Parke-Davis would use its detail men to retake the ground lost by its most important product.
Loynd’s instinct for solving every problem with more and better sales calls was itself a problem. When management informs its sales representatives that they are the most important people in the entire company—Loynd regularly told his detail men that the only jobs worth having at Parke-Davis were theirs . . . and his—they tend to take it to heart. Though the company did all the expected things to get its detail men to tell doctors about the risks of Chloromycetin, even requiring every sales call to end with the drug’s brochure open to the page that advised physicians that the drug could cause aplastic anemia, there was only so much that could be done to control every word that came from every sales rep’s mouth. Detail men were salesmen “first, last, and always,” and more than 40 percent of their income came from a single product. Expecting them to emphasize risks over benefits was almost certainly asking too much.
The FDA, which was asking precisely that, was infuriated. The agency’s primary tool for protecting public safety was controlling the way information was communicated to doctors and pharmacists. They could review advertising and insist on specific kinds of labeling. They could do little, though, about what the industry—and especially Parke-Davis—regarded as its most effective communication channel: detail men. It’s difficult to tell whether the FDA singled out Parke-Davis for special oversight. In one telling example, a San Francisco physician accused two of the company’s detail men of promoting Chloromycetin using deceptive statements at a meeting set up at the FDA’s regional office. But there’s no doubt that Parke-Davis believed it to be true.
For the next five years, the company walked the narrow line between promoting its most important product and being the primary source of information about its dangers. By most measures, it did so extraordinarily well. Sales recovered—production of the drug peaked at more than 84,000 pounds in 1956—even as the company had to survive a second public relations nightmare. In 1959, doctors in half a dozen hospitals started noting an alarming rise in neonatal deaths among infants who had been given a prophylactic regimen of chloramphenicol because they were perceived to be at higher than normal risk of infection, usually because they had been born prematurely. Those given chloramphenicol either alone, or in combination with other antibiotics such as penicillin or streptomycin, were dying at a rate five times higher than expected. The cause was the inability of some infants to metabolize and excrete the antibiotic. It’s still not well understood why some infants had this inability, but in a perverse combination, the infants receiving chloramphenicol were not only the ones at most risk, but once they developed symptoms of what has come to be known as “gray baby syndrome”—low blood pressure, cyanosis, ashy skin color—they were given larger and larger doses of the drug. Gray babies frequently showed chloramphenicol blood levels five times higher than the acceptable therapeutic dose.*
Gray baby syndrome was bad enough. Aplastic anemia was worse, and it was the risk of that disease that returned Chloromycetin to the news in the early 1960s. The new aplastic anemia scare was fueled in large part by the efforts of a Southern California newspaper publisher named Edgar Elfstrom, whose daughter had died of the disease after being treated—overtreated, really; a series of doctors prescribed more than twenty doses of Chloromycetin, one of them intravenously—for a sore throat. Elfstrom, like Albe Watkins before him, made opposition to chloramphenicol a crusade, and he had a much bigger trumpet with which to rally his troops. Watkins had been a well-respected but little-known doctor. Elfstrom was a media-savvy writer, editor, and newspaper publisher. He sued Parke-Davis and his daughter’s physicians; he wrote dozens of open letters to FDA officials, to members of Congress, to Attorney General Robert Kennedy, and to Abraham Ribicoff, the secretary of the Department of Health, Education, and Welfare. He even met with the president. As someone with easy access to the world of print journalism—Elfstrom wasn’t just a publisher himself, but a veteran of both UPI and the Scripps Howard chain of newspapers, with hundreds of friends at publications all over the country—he was able to give the issue enormous prominence. For months, stories appeared in both Elfstrom’s paper and those of his longtime colleagues, including a major series in the Los Angeles Times. They make heartbreaking reading even today: A teenager who died after six months of chloramphenicol treatment for acne. An eight-year-old who contracted aplastic anemia after treatment for an ear infection. Four-year-olds. Five-year-olds. A seventeen-year-old with asthma. The stories have a chilling consistency to them: a minor ailment, treatment with a drug thought harmless, followed by subcutaneous bleeding—visible and painful bruising—skin lesions, hemorrhages, hospitalization, a brief respite brought about by transfusions, followed by an agonizing death.
The tragic conclusion to each of these stories is one reason that the chloramphenicol episode is largely remembered today as either a fable of lost innocence—the realization that the miracle of antibacterial therapy came at a profound cost—or as a morality tale of greedy pharmaceutical companies, negligent physicians, and impotent regulators. The real lessons are subtler, and more important.
The first takeaway isn’t, despite aplastic anemia and gray babies, that antibiotics were unsafe; it’s that after sulfa, penicillin, streptomycin, and the broad-spectrum antibiotics, it wasn’t clear what “unsafe” even meant.
For any individual patient, antibiotics were—and are—so safe that a busy physician could prescribe them every day for a decade without ever encountering a reaction worse than a skin rash. It’s worth recalling that, only fifteen years before the aplastic anemia scare, the arsenal for treating disease had consisted almost entirely of a list of compounds that were simultaneously ineffective and dangerous. The drugs available at the turn of the twentieth century frequently featured toxic concentrations of belladonna, ergot, a frightening array of opiates, and cocaine. Strychnine, the active ingredient in Parke-Davis’s Damiana et Phosphorus cum Nux, is such a powerful stimulant that Thomas Hicks won the 1904 Olympic marathon while taking doses of strychnine and egg whites during the race (and nearly died as a result). The revolutionary discoveries of Paul Ehrlich and others replaced these old-fashioned ways of poisoning patients with scarcely less dangerous mixtures based on mercury and arsenic. No doctor wanted to return to the days before the antibiotic revolution.
But what was almost certainly safe for a single patient, or even all the patients in a single clinical practice, was just as certainly dangerous to someone. If a thousand patients annually were treated with a particular compound that had a 1 in 10,000 chance of killing them, no one was likely to notice the danger for a good long while. Certainly not most physicians. Eight years after Parke-Davis started affixing the first FDA-required warning labels to Chloromycetin, and even after the first accounts of gray baby syndrome, the Council on Drugs of the AMA found that physicians continued to prescribe it for “such conditions as . . . the common cold, bronchial infections, asthma, sore throat, tonsillitis, miscellaneous urinary tract infections . . . gout, eczema, malaise, and iron deficiency anemia.” The FDA had insisted on labeling Parke-Davis’s flagship product with a warning that advised physicians to use the drug only when utterly necessary, and that hadn’t even worked.
Most clinicians simply weren’t suited by temperament or training to think about effects that appear only when surveying large populations. They treat individuals, one at a time. The Hippocratic Oath, in both its ancient and modern versions, enjoins physicians to care for patients as individuals, and not for the benefit of society at large. Expecting doctors to think about risk the same way as actuaries was doomed to failure, even as the first antibiotics changed the denominator of the equation—the size of the exposed population—dramatically. Tens of millions of infections were treated with penicillin in 1948 alone; four million people took Chloromycetin, almost all of them safely, from 1948 to 1950.
But if doctors couldn’t be expected to make rational decisions about risk, then who? If the chloramphenicol story revealed anything, it was just how bad society at large was at the same task. As a case in point, while no more than 1 in 40,000 chloramphenicol-taking patients could be expected to contract aplastic anemia, a comparable percentage of patients who took penicillin—1 in 50,000—could be expected to die from anaphylaxis due to an allergic reaction; and, in 1953, a lot more penicillin prescriptions were being written, every one of them without the skull-and-crossbones warning that the FDA had required on Parke-Davis’s flagship product.
Chloramphenicol also demonstrated why pharmaceutical companies were severely compromised in judging the safety of their products. As with physicians, this wasn’t a moral failing, but an intrinsic aspect of the system: a feature, not a bug. The enormous advances of the antibiotic revolution were a direct consequence of investment by pharmaceutical companies in producing them. The same institutions that had declined to invest hundreds of pounds in the Dunn School’s penicillin research were, less than a decade later, spending millions on their own. That, in turn, demanded even greater resources—collecting more and more samples of soil-dwelling bacteria; testing newer and newer methods of chemical synthesis; building larger and larger factories—to improve on, and so replace, the drugs already on the market.
This, as much as anything else, is the second lesson of chloramphenicol. Producing the first version of a miracle drug doesn’t have to be an expensive proposition. But the second and third inevitably will be, since they have to be more miraculous than the ones already available. This basic fact guarantees that virtually every medical advance is at risk of rapidly diminishing returns. The first great innovations—the sulfanilamides, penicillin—offer far greater relative benefits than the ones that follow. But the institutions that develop them, whether university laboratories or pharmaceutical companies, don’t spend less on the incremental improvements. Precisely because demonstrating an incremental improvement is so difficult, they spend more. The process of drug innovation demands large and risky investments of both money and time, and the organizations that make them have a powerful incentive to calculate risks and benefits in the way that maximizes the drug’s use. Despite the public-spiritedness of George Merck or Eli Lilly, drug companies—and, for that matter, academic researchers—were always going to be enthusiasts, not critics, about innovative drugs. It’s hard to see how the antibiotic revolution could have occurred otherwise.
This left the job of evaluating antibiotics to institutions that, in theory at least, should have been able to adopt the widest and most disinterested perspective on the value of any new therapy. This was why the Food, Drug, and Cosmetic Act of 1938 empowered the FDA to oversee drug safety—which sounds clear, but really isn’t. The decades since the Elixir Sulfanilamide disaster had demonstrated that any drug powerful enough to be useful was, for some patients, also unsafe. Few people, even at the FDA, really understood how to compare risks and benefits in a way that the public could understand.
The third lesson of the chloramphenicol episode should have been that risks and benefits in drug use aren’t measured solely by the probability of a bad outcome, or even its magnitude. They can only be established by comparing the risk of using a compound against the risk of not using it. For this reason, the association of chloramphenicol with blood dyscrasias, while tragic and notorious, was actually beside the point. Chloramphenicol, like penicillin, streptomycin, erythromycin, and the tetracyclines, was an almost unimaginably valuable medicine when used appropriately. The very different incentives of pharmaceutical companies and physicians—the first to maximize the revenue from their investments; the other to choose the most powerful treatments for their patients—practically guaranteed a high level of inappropriate use. Chloramphenicol was critical for treating typhus; not so much for strep throat.*
What the story of chloramphenicol’s rise and ultimate decline (at least as an antibiotic prescribed millions of times annually) revealed was that safety couldn’t be measured in a vacuum. Evaluating the danger of any new therapy demanded context—its efficacy, as balanced against its risks. The only candidate to do this was the FDA, but the Food, Drug, and Cosmetic Act only empowered the agency to measure safety, not effectiveness.
That was about to change.