Miracle Cure: The Creation of Antibiotics and the Birth of Modern Medicine

TWO

“Patience, Skill, Luck, and Money”

A nineteenth-century German opera, Der Freischütz—in English, The Marksman—tells the story of a young forester who must pass a test of shooting skill to win the heart of his true love. After losing a match to a young peasant, he is persuaded to improve the odds by using a Freikugel, or Zauberkugel, an enchanted bullet that could not fail to hit its target.* The magic bullets—the first six under the control of the marksman, the seventh owned by the devil—are a constant presence in collections of European folktales and a familiar trope in nineteenth-century drama.

Even more durably, they inspired the best-known modern usage of the term, which has little to do with the devil or folk mythology. In 1907, “magic bullets” were the theme of the Harben Lectures given at what was then known as Britain’s Royal Institute of Public Health. The lecturer, who used the term to describe a targeted drug, one that would attack a disease-causing microbe without harming the host suffering from the disease, was a German physician named Paul Ehrlich.

Ehrlich was then fifty-two years old, and one of the most respected physicians and scientists in the world. Like Robert Koch, he was a product of a terrifyingly rigorous, but undeniably effective, German education. Beginning with the reforms of Wilhelm von Humboldt in Prussia in the early nineteenth century, Germany had taken a far more pragmatic view of modern subjects such as mathematics and science than had been the case in the United Kingdom or France, and certainly in the United States. By 1872, state-controlled education had added algebra, chemistry, and physics to the nine years of Latin and Greek provided by the gymnasia, Realgymnasia, and Oberrealschulen; the new subjects flourished especially in the technische Hochschulen (technical colleges). After decades of providing its citizens with history’s most rigorous educational program, Germany had become the world’s leader in virtually every field of scientific research. Moreover, since the original reforms had been explicitly made to support the commercial interests of the German state, there were no qualms about partnerships between schools—secondary and postsecondary—and industry. Paul Ehrlich was a notable beneficiary, as his education took him from the Maria-Magdalenen-Gymnasium in Breslau through universities in Strasbourg, Freiburg, and Leipzig. In 1878, at the age of twenty-four, he received his doctorate for a dissertation on a subject that would occupy him for years, and turn out to be the first link in a chain that led to the first true antibacterial chemical therapy.

Credit: National Institutes of Health/National Library of Medicine

Paul Ehrlich, 1854–1915

Ehrlich’s dissertation was titled “Contributions to the Theory and Practice of Histological Dyes.” Histology—the word is taken from a Greek root meaning “something that stands upright” and was adopted by nineteenth-century physiology to mean “tissue”—had just come into its own as a credible discipline. Ever more powerful microscopes had made distinguishing one sort of tissue from another possible, but not easy. Even with lenses that magnified cells hundreds of times, it was difficult if not impossible to identify distinct cell types without something to improve the contrast between, for example, different sorts of blood cells. That something was staining: Some chemicals have a special affinity for different cell types, and can turn them a particular color while leaving other similarly shaped cells unchanged. Ehrlich’s specific contribution was to use a dye with what he called “an absolutely characteristic behavior toward the protoplasmic deposits of certain cells” in blood plasma. He named the “certain cells” mastzelle, from the German word for “fattening,” because he believed them involved in the process by which cells fed themselves. In this he was wrong—mast cells are part of the immune system (about which more below)—but the real triumph was the discovery that stains could differentiate components of blood: leukocytes, lymphocytes, red blood cells, and so on. The newly minted doctor, who had become known to his classmates as “the man with blue, yellow, red, and green fingers,” had made a huge discovery: Different stains were absorbed by different cell types.

The significance to medicine wasn’t merely that a new tool for studying human and animal cells had been discovered. The tool could identify all kinds of cells, including bacteria. And, so the logic went, if a dye could recognize a specific class of bacterium, could it not also deliver a specific attack on the pathogenic ones? It was not a giant leap to imagine transforming a blob of paint that unerringly marked a dangerous microbe into a compound—a magic bullet, perhaps—that would destroy it. Nor was it especially difficult to find the best place to start the search: wherever Robert Koch was working.* In 1891, Ehrlich joined Koch at his Berlin Institute for Infectious Diseases.

The first step was understanding the magic bullets already present in humans and animals: the immune system. Even before Koch and Pasteur had demonstrated diseases were caused by microorganisms, the concept of immunity had been recognized and even classified. Victims of diseases like smallpox who survived were known to be immune to the disease thereafter. Thucydides, writing about the Plague of Athens, recognized that Athenians who had recovered from one encounter with the disease were protected against a second. The discovery of the germ theory suggested an explanation: Whatever provided immunity was attacking and destroying specific pathogens without harming the host. Could this germ-killing machinery be harnessed as a targeted therapy?

Though Ehrlich and his collaborators didn’t yet understand it in detail, the vertebrate immune system is composed of a dizzying array of components, some of them just long chains of amino acids, others complex cells, complete with specialized and deadly organelles. Some parts of the immune system are intelligence analysts, able to identify the nature of attacking pathogens; others are messengers, sending out chemical alarm bells to summon destroyer cells to the site of an invasion. There are even components, like the dendritic cells, that identify antigens, and train other cells, the T-lymphocyte white blood cells, to recognize them the next time they pay the host a visit.

The key categorical distinction used for the immune system is between those components that are innate—nonspecific defenses that respond to invaders that the organism has never encountered before—and those that are specific (or adaptive): the specialized defenses created by organisms only after they have encountered a specific pathogen. Innate first: When an organism—you, perhaps—encounters a microbe with evil intent, a dozen different proteins that are always circulating in the bloodstream send out a chemical alarm that activates another group of proteins known collectively as the cytokines. These molecules are folded in distinctive ways, permitting them to attach themselves to a pathogen and hang on while sending off another alarm, summoning cell-based immune defenders like white blood cells. Meanwhile, other cell-based defenders, Ehrlich’s mastzelle, release chemicals like the histamines, which turn up the body’s thermostat, causing what is known as the inflammatory response, of which fever is the most familiar sign.

And that’s just the off-the-shelf, generic version of the innate system. Far more powerful are the bespoke defenses of an organism that has previously been exposed to a particular pathogen and has responded by tailoring a weapon specifically to destroy it. These specialized forces include cell-based troops like the B-lymphocytes, manufactured in the bone marrow, that create highly specific antibodies that hold on to the surface of invading cells; the T-lymphocytes that can destroy invaders by punching a hole in the invader’s membrane; and macrophages that can swallow and destroy them. The specialized immune system is a double-edged weapon; it depends on a very precise match between the supply of defenses and the demand for them. The inflammatory response can kill hosts as well as pathogens. Chronic lymphocytic leukemia, one of the most dangerous—and common—cancers, does its damage by overproducing B-cells that accumulate in the bone marrow and crowd out the cells needed to fight infections. Victims frequently die, not from the cancer per se, but from the recurrent infections that their immune systems are no longer able to fight.

When Ehrlich and colleagues, most notably the physician Emil von Behring and the bacteriologist Hans Buchner, started studying the immune system in the 1890s, however, this taxonomy of immunity was still decades in the future. Their starting points were far simpler: Koch and Pasteur’s discovery that a specific microbe causes a specific disease, and that exposure to that microbe conferred immunity thereafter.

The first components of the immune system they discovered were the complex of antibacterial macromolecules found in blood serum, originally named “alexins” by Buchner, but renamed the “complement” by Ehrlich, to reflect his discovery that the thirty or so proteins that comprised it complemented the work of other proteins known as antibodies. If those molecules could be made to appear by exposure to the pathogens, then perhaps they could not only prevent future occurrences of a disease but, like Pasteur’s rabies vaccine, treat a new one—acting not as a vaccine, but as an antiserum. Behring started looking.

Born in 1854, Emil Behring (not yet “von”) was, like Ehrlich, a man who had come to adulthood after Bismarck’s 1871 consolidation of most of the Germanophone world as the Second Reich: Imperial Germany. Unlike the Jewish Ehrlich, he was training for a career in the Church before he transferred to a program in medicine, specifically military medicine. After garrison duty at a number of German army bases, he found his way—again like Ehrlich—to an orbit around one of the two great founders of the science of microbiology, at Koch’s Hygiene Institute in 1888, and, after 1890, the Institute for Infectious Diseases.

Also in 1890, Behring and his colleague, a visitor from Japan named Kitasato Shibasaburo, engineered the first real breakthrough in serum therapy. Ever since Jenner, the immune system had been activated by exposing its host to a pathogenic organism. Behring and Shibasaburo discovered that the immune system could be used to cure disease by exposing the host not to a pathogenic organism itself, but to the specific toxin released by most pathogens: the toxin that caused the symptoms of disease.

Their first practical demonstration of this discovery was used to treat a very dangerous disease, indeed: diphtheria. Corynebacterium diphtheriae was first identified by the Swiss pathologist Edwin Klebs in 1883 (the toxin produced by the bacterium was isolated in 1888 by Pasteur’s colleagues Emile Roux and Alexandre Yersin, the Swiss-born physician who would later give his name to the bacterium that causes bubonic plague: Yersinia pestis). Diphtheria was then afflicting more than fifty thousand Germans annually with sore throats, fever, and the disease’s characteristic membrane covering the tonsils and pharynx.* Up to five thousand would die. So when Behring and Shibasaburo announced, in 1891, that they were able to cure infected rats, guinea pigs, and rabbits by injecting them with a heat-weakened version of the toxin produced by C. diphtheriae, it was very big news indeed.

The announcement was made well in advance of a proven therapy. Until 1897, when Ehrlich established a standardized unit for measuring diphtheria toxin, the antitoxin was at best unreliable, at worst dangerous. The immediate impact of the enthusiasm that greeted antiserum therapy wasn’t on diphtheria treatment, but on an even more consequential aspect of medical history: It attracted the interest of Germany’s industrial chemists. In 1892, Behring signed on to collaborate on the production of a diphtheria antitoxin with the Hoechst chemical company, based in Frankfurt-am-Main.

The company was then barely thirty years old, but had nonetheless seen its name change three times: from Teerfarben Meister, Lucius & Co. to Meister Lucius & Brüning to Farbwerke vorm. Meister Lucius & Brüning AG. By the time they went into business with Behring, the founders had evidently decided that simplicity was the better part of vanity and renamed their company Farbwerke Hoechst AG, since Hoechst (or Höchst) was where their first factory was located. The company’s primary business is revealed in its original name: The German Teer means “tar.” Farben translates as “color,” or more appropriately, “dye.”

That Behring would collaborate with a dye company wasn’t exactly unusual. To a very great degree, the chemical industry in the late nineteenth century was the dye business: Dyes were by far the largest and most lucrative chemical products yet known, and enormously more profitable than, say, medicines. Natural dyes extracted from madder root, indigo plants, insects, and even shellfish, which was the source of the ancient dye known as “Tyrian purple,” have been used for at least three thousand years and probably longer to color textiles, but it took the scientific advances of the Industrial Revolution to create the first synthetic dyes. Most important of all were the aniline dyes, organic compounds combined in a chemist’s lab.* Beginning in the 1830s, anilines had been extracted from coal tar; the same chemist who had distilled Joseph Lister’s carbolic acid from creosote also discovered that calcium hypochlorite could turn coal tar into a spectacular indigo dye. By the 1850s, the English chemist William Henry Perkin had accidentally discovered the first synthetic aniline dye, which he named mauveine. Even better, he had worked out how to produce it by the carload, and the chemical industry was off to the races. Two German chemists, Carl Liebermann and Carl Graebe, managed to isolate and synthesize alizarin, the active component in the madder root that had been used as a red dye since ancient Egypt. In 1878, Adolf von Baeyer did the same for indigo.

Since dye companies were always looking for innovative methods of producing color, a mastery of tissue staining was a prized talent. It’s therefore no coincidence that Hoechst brought Ehrlich—who had stained his first mast cells with aniline dyes—and Behring together. One of the many talents Ehrlich had cultivated was the ability to visualize the structure of dye molecules in three dimensions. By the end of the 1880s, Ehrlich, despite having no formal training as a chemist, had authored more than forty research papers on the chemistry of dyes and developed a dozen different staining methods using them. Working with Hoechst and Behring ought to have been highly productive; and it would have been, but for the very different objectives of a chemist working for an industrial lab rather than a hospital or university. Though Ehrlich had taken the precaution of patenting his method of standardizing the diphtheria serum antitoxin, a fair reading of the subsequent events is that Behring managed to void those agreements in order to secure greater profits for himself from the collaboration with Hoechst.

The result was the poisonous transformation of the once close collegiality between two of Robert Koch’s greatest protégés into the same sort of lifelong hostility that characterized the relationship between Koch and Pasteur. It wasn’t merely the patent dispute. Ehrlich objected even more strongly to Behring’s attempt to profit from Koch’s tuberculin, producing a version marketed by Hoechst, even though the compound had already been shown to be useless. This was a point that Ehrlich made over and over again. “I must . . . be no longer exposed to Behring’s crass egotism and money-grubbing. I am not in the least inclined to . . . be subservient to his business shenanigans. I have no mind whatsoever to convert my Institute into a branch establishment or business venture of Behring’s. . . . [He] will now bad-mouth me everywhere, but my conscience is untroubled, and whatever he may be up to doesn’t faze me in the least.”

In any case, Ehrlich was searching for something bigger than antiserums. Though he joined Berlin’s Institute for Serum Research and Serum Testing (a grand name for a lab built in an abandoned bakery) in 1896 and continued his work on toxins when he moved to Frankfurt-am-Main’s Institut für Experimentelle Therapie in 1899, he already knew the limitations of antiserum therapy. Behring had shown that diseases could be caused not only by microbes, but also by toxins produced by the microbes, and that the body could produce a therapeutic response once exposed to the toxin. Turning this insight into a therapy, however, proved harder than expected. Antiserum therapy depends on the immune system’s ability to recognize a particular toxin: to identify it from its surface appearance. However, while some of the most dangerous toxins—exotoxins—are secreted by bacterial invaders and circulate where antibodies can find them, most bacterial diseases are caused by endotoxins, which do their damage only when the bacterial cell is ruptured. Since antiserum therapy only worked with exotoxins, it was effective against a limited number of pathogens. If the antiserum gun was firing magic bullets, very few of them were going to find their targets.

This didn’t mean that Ehrlich’s time in Berlin or Frankfurt was wasted; far from it. In 1897, he developed a truly revolutionary theory about the geometric relationship between an antigen and an antibody: the so-called side-chain theory.

Side-chain theory proposed that the membranes that enclosed the cells of multicellular animals were extremely complex chemical machines, each of whose components—Ehrlich’s side chains—has an affinity for a particular nutrient needed by the cell. Normally, each side chain is a kind of combination lock that opens when it encounters a needed protein. If any foreign substances—bacteria, viruses, or toxins—fit the same lock, metabolic activity is blocked. The cell responds by producing similar side chains as replacements, but overdoes it—in Ehrlich’s words, “nature is prodigal.” The extra side chains are then sloughed off into the cell’s surrounding fluid; and each of them, by definition, possesses precisely the shape to combine with the invading antigen. It is as if the cell had produced a finely machined gear that fits perfectly with its pathogenic match: antibodies that latch onto the invaders and produce all the actions of the immune response—chemical signaling, inflammation, and everything else. The side-chain theory, and its explication of the immune response, was such an elegant and comprehensive discovery that it would win Ehrlich the 1908 Nobel Prize in Medicine.*

A commonplace observation about the Nobel Prizes says that no important work is ever done by winners after collecting the award. In this as elsewhere, Ehrlich was exceptional. Not only could the physician and scientist have been recognized by the Nobel committee for revolutionary medical discoveries made even before the creation of side-chain theory—histological staining; the investigations of the toxins ricin and abrin, which revealed the timing of the immune response—but, as time would reveal, his greatest achievement was still to come. That achievement, the birth of chemotherapy, was nonetheless very much of a piece with his first publication nearly three decades before, on the staining of microscopic structures. Like his mastzelle, Ehrlich’s magic bullet appeared only in proximity to coal tar–based dyes.

The argument at the core of Ehrlich’s Nobel Lecture, entitled “Partial Cell Functions,” was that the future of microbiology depended less on observational biology, and more on fundamental chemistry.

Chemistry is one of the younger sciences. If you could pluck Isaac Newton out of the seventeenth century and drop him into a twenty-first-century high school, he could teach at least the first few chapters of a contemporary physics course; the laws of mechanics, for example, are still the ones Newton postulated. For that matter, a second-century mathematician could do the same for a full year of geometry or trigonometry. Even biologists still name species using the Linnaean classifications first published in 1735. In chemistry, though, hardly any idea that appeared before the French Revolution has survived except as an historical oddity, like the theory that chemical activity depended on a combustible substance called “phlogiston.”

The discipline began to come together with Antoine Lavoisier’s 1789 codification of the twenty-three known elements as substances that could not be broken into smaller components. In 1808, John Dalton in England and Joseph Gay-Lussac in France independently derived similar laws about the constituents of gases that led directly to the theory that all gases—all everything—were composed of impossibly small particles known as atoms. The science had, for the first time, a usable conceptual framework.

More elements would be discovered over the course of the nineteenth century, including ones essential to life: sodium and potassium in 1807, calcium in 1808, iodine in 1811. Chemical analysis grew sharper and more precise with the introduction of dozens of new instruments. Since the time of Lavoisier, scientists had used combustion to examine the constituents of organic matter: burning the sample using a hollow glass blowpipe, trapping the carbon dioxide and water vapor produced, and measuring them in order to calculate how much hydrogen and carbon the sample originally contained. Gay-Lussac improved on this method by exposing samples to anhydrous calcium chloride, which trapped the water vapor. In 1831, Justus von Liebig (the “Father of the Fertilizer Industry” for his discovery that nitrogen, in the form of ammonia, was the essential element in plant metabolism) invented the five-balled glass vessel he called the “Kaliapparat,” which trapped carbon dioxide as it passed through a solution of caustic potash, or potassium hydroxide.
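The arithmetic behind such combustion analysis is simple enough to illustrate with a modern worked example (the numbers below are mine, not Liebig’s). Every 44.01 grams of trapped carbon dioxide contains 12.01 grams of carbon, and every 18.02 grams of trapped water contains 2.02 grams of hydrogen:

\[
m_{\mathrm{C}} = m_{\mathrm{CO_2}} \times \frac{12.01}{44.01},
\qquad
m_{\mathrm{H}} = m_{\mathrm{H_2O}} \times \frac{2 \times 1.01}{18.02}.
\]

Burning 100 milligrams of glucose, for instance, yields about 146.6 milligrams of carbon dioxide and 60.0 milligrams of water; the formulas work backward to 40.0 milligrams of carbon and 6.7 milligrams of hydrogen, a composition that could then be checked against a proposed formula for the sugar.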

Inevitably, as the tools of analysis improved, the temptation to create chemical compounds, which had been the dream of alchemists since antiquity, grew. The shift from analysis to synthesis—and, not so incidentally, from inorganic to organic chemistry—found its starting point with the German chemist Friedrich Wöhler’s 1828 synthesis of urea from simpler components such as cyanic acid and ammonia. In 1845, Hermann Kolbe synthesized acetic acid.

There was, however, one big, unavoidable problem with chemical synthesis: Optical microscopes could make cells an object of study for biologists like Pasteur and Koch, who used them to establish the precepts of the germ theory. But the molecules that are central to all chemical activity couldn’t be seen by even the most powerful lenses.* Because molecules aren’t perceptible to the eye, building one in a laboratory was a little like trying to build a scale model of the Eiffel Tower while wearing a blindfold.

Luckily, it’s not quite as daunting as that. There are a number of rules that determine how elements bond to, and react with, one another—the equivalent of knowing which bolt attaches to which nut in the overall construction of the Eiffel Tower. The ways in which acidic substances like hydrochloric acid react with alkaline ones, such as lye, were intuited as far back as Lavoisier (although he got almost all the elements involved wrong). In 1857, August Kekulé, a German organic chemist, introduced a concept of chemical structure that first-year chemistry students learn as an atom’s valence (Kekulé and his contemporaries called them “affinity units” or “combining power”). Different elements, he proposed, have distinct powers of combination, and can bond (or share an electron) with other elements based on those powers. If the power was one, as with hydrogen, the atom could form only a single bond; for a combining power of two, such as oxygen, either two single bonds or one double bond; and for a power of three, nitrogen, for example, either three single bonds, a double bond and a single bond, or one triple bond. Twelve years later, a Russian chemist named Dmitri Mendeleev published the first of his periodic tables, then containing sixty-five named elements, organized by atomic weight and valence. Though it wasn’t until 1897 that J. J. Thomson discovered the electron (and another nineteen years would pass before electrons’ essential role in bonding was recognized), from a practical standpoint, chemists knew the rules of bonding one atom to another.
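In modern notation (the illustration is mine, not Kekulé’s), those combining powers can be read directly off the simplest structural formulas:

\[
\mathrm{H{-}H} \qquad \mathrm{H{-}O{-}H} \qquad \mathrm{O{=}C{=}O} \qquad \mathrm{N{\equiv}N}
\]

Each hydrogen forms exactly one bond; the oxygen in water forms two single bonds, while each oxygen in carbon dioxide forms a single double bond (the carbon, with a combining power of four, forms two of them); and the two nitrogens in nitrogen gas satisfy their combining power of three with one triple bond.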

With the discovery of electrons, though, a huge suite of different chemical reactions could now be explained: reduction reactions, in which electrons are gained, and oxidation reactions, in which they are lost; acid-base reactions, mediated by charged particles—ions—in which positive protons seek out negative electrons in base molecules. As a result, the chemistry that Paul Ehrlich was able to teach himself was sophisticated enough that he could perform experiments on extraordinarily complex chemical structures.
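One standard textbook example of each class, offered here as illustration rather than anything from Ehrlich’s own notebooks, makes the bookkeeping visible:

\[
\mathrm{Zn} \rightarrow \mathrm{Zn}^{2+} + 2e^- \quad (\text{oxidation: electrons lost})
\]
\[
\mathrm{Cu}^{2+} + 2e^- \rightarrow \mathrm{Cu} \quad (\text{reduction: electrons gained})
\]
\[
\mathrm{H}^+ + \mathrm{OH}^- \rightarrow \mathrm{H_2O} \quad (\text{acid-base: a positive proton finds a negative partner})
\]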

As far back as the mid-1880s, Ehrlich had experimented with the use of azo dyes*—aniline derivatives with names like methylene yellow, congo red, and alizarin yellow—as potential therapeutic agents. About 1891, he identified a variant of the dye methylene blue* as a treatment for malaria—a distinctly mediocre treatment. First, the compound was only mildly effective; second, due to its origins as a coloring agent, it turned urine green and the whites of the eyes blue, which understandably limited its appeal to both patients and physicians. Even with its problems, though, the dye-based treatment was effective enough that methylene blue pills remained a first-line antimalarial drug well into the 1940s, and promising enough to encourage Ehrlich in the search for his Zauberkugel.

He had, by then, learned that testing a magic bullet required a fairly easy-to-hit target, and believed he had found one in trypanosomiasis, the “sleeping sickness” caused by the introduction of trypanosomes into the bloodstream of infected animals, generally by the bite of the tsetse fly. The disease seemed perfect: first, because the trypanosomes that cause it are giants of the microbial world (they’re protozoans: parasites that, like bacteria, have a single cell, but which also have nuclei and some other cellular machinery), they are relatively easy to see and identify; and second, because it was a reliable killer of the most prolific lab animals, white mice. In 1903, Ehrlich had synthesized his own azo dye as a potential cure. Named trypan red in honor of its intended target, it followed a frustrating but common course: early success, followed by long-term failure. It didn’t kill all versions of the protozoan, which made it functionally useless.

But not without providing useful data about the next step. In 1863, the French chemist (and one of Pasteur’s many rivals) Antoine Béchamp had discovered an arsenic-based compound he named atoxyl; some recently published experiments indicated that atoxyl killed trypanosomes. In 1905, Ehrlich’s by then well-cultivated talent for visualizing molecular structure resulted in a remarkable insight: The structure of atoxyl was not that of a chemically deactivated, or stable, anilide, but of a far more changeable arsonic acid. The significance was huge: He could try to induce a chemical reaction from an anilide for years without getting any more response out of it than he would by hitting a steel girder with a rubber hammer. An arsonic acid, however, was chemically reactive. As his colleague Alfred Bertheim would later write, “Probably for the first time, therefore, a biologically effective substance existed whose structure was not only known precisely but also . . . was of a simple composition and extraordinary reactivity, which permitted a wide variety of modifications” [emphasis added]. In 1907, Ehrlich began tinkering with the molecule’s promisingly unstable structure. He would continue tinkering for three years.

When he was asked, later in life, to describe his experimental strategy, Ehrlich responded with the ponderous “uniform direction of research combined with as much independence as possible for individual researchers.” (It is even more ponderous in German: Einheitliche Richtung der Forschung bei möglichst selbständigen Leistungen der Einzelnen.) This sounds a bit more democratic than was actually the case; most of his subordinates recalled a lot more uniformity of direction than independence. One of them remembered vividly that Ehrlich “often hammered on the anvil of his assistant’s brains.” However it was perceived at the time, it seems unarguable that among Ehrlich’s many gifts as a scientist, his ability to organize the work of dozens of subordinates stood out. He had consciously adopted the management style pioneered by the synthetic dye industry, in which a prominent research director, Heinrich Caro, likened the process of testing new compounds to an “endless combination game [utilizing] scientific mass-labor.”

The atoxyl experiments were among the very first in biology to apply these principles: to take a compound whose structure resembled other molecules that exhibited at least some of the desired effect (the term of art is “lead compound”) and modify its chemical structure in a systematic and methodical way in order to optimize that effect. Though finding a promising compound like the organoarsenic atoxyl is good fortune, modifying it successfully depends on a conscious and deliberate strategy. Ehrlich’s was driven by his belief in the side-chain receptor theory, which held that the action of any substance, helpful or harmful, was entirely a matter of chemical affinity: the right lock-and-key combination.

The challenge lay at the intersection of helpfulness and harm: Anything powerful enough to have an effect is virtually certain to have more than one. If it can kill a pathogen, it can kill (or at least damage) a host. The goal of the atoxyl experiments was to increase the damage the compound could inflict on the pathogen, while reducing its impact on the host. Ehrlich’s hypothesis, based on long chemical experience with both medicines and aniline dyes, was that substituting different amine groups—the simple nitrogen-based structures that attached themselves to atoxyl’s central ring like a kickstand on a bicycle—could lower toxicity to the host. One group of researchers in Ehrlich’s lab busied themselves with finding just such an amine structure.

The other goal was increasing atoxyl’s firepower against pathogens, a task made even more difficult because it wasn’t clear where the firepower came from in the first place. Ehrlich postulated that what made atoxyl toxic to trypanosomes was not arsenic per se, but a particular form of the element, one with untapped bonding capacity in its outer shell. Ehrlich asked Bertheim to test this hypothesis by dumping electrons into, or reducing, atoxyl in the lab.* The form Ehrlich had his eye on was trivalent arsenic, with three bonds and an unshared pair of electrons in its outer shell. However, in a test tube, atoxyl’s arsenic was pentavalent, forming five bonds. It was also harmless. Somehow, the host organism itself was changing the structure of the arsenic. This gave the lab’s other team its task: hastening the reducing process that turned pentavalent arsenic into trivalent.
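In modern terms, and this is a schematic of my own rather than a reaction scheme from Ehrlich’s lab, the transformation the second team was trying to hasten is a two-electron reduction, with R standing for atoxyl’s aminophenyl ring:

\[
\underbrace{\mathrm{R{-}AsO(OH)_2}}_{\text{pentavalent, inert}} + 2\mathrm{H}^+ + 2e^- \longrightarrow \underbrace{\mathrm{R{-}As(OH)_2}}_{\text{trivalent, reactive}} + \mathrm{H_2O}
\]

The arsenic gains a pair of electrons and sheds an oxygen, dropping from five bonds to three; exactly how, and where, the host body performed this reduction was precisely what the team did not yet know.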

This sounds more rational than it actually was. The polite name for the approach is trial and error; the blunter one, brute force. Each of the new compounds, with increased amounts of trivalent arsenic and different substituents in the amine group, was tried out on test animals all over Europe.

Small wonder that the approach worked slowly. But it did work. It wasn’t until arsenophenylglycine, known internally as Compound 418, was tested in 1909 that Ehrlich and his colleague, the Japanese bacteriologist Sahachirō Hata, found any real success, and it was decidedly mixed. Compound 418 cured sleeping sickness, but came with significant side effects, including an unfortunate habit of causing blindness in a small percentage of the animals on which it was tested.

On August 31, 1909, the lab hit the jackpot. Compound 606: arsphenamine.

Arsphenamine was not, as most histories have it, the 606th compound synthesized. Ehrlich’s lab had been working hard, but not that hard. Hoechst’s naming convention used the first digit to refer to a particular experimental compound, and the following numbers to denote a specific variant, which made arsphenamine the sixth version of the sixth compound. Because different compounds were being tested simultaneously, 606 had actually been synthesized for the first time in 1907, before number 418, which was the eighteenth version of the fourth compound.

Normally, variants that underperformed would have been discarded; and, indeed, Ehrlich’s lab had moved on after the first tests of 606. The reason the lab returned to a compound discovered years earlier was the accidental discovery that arsphenamine didn’t kill only trypanosomes. Because Ehrlich thought, mistakenly, that trypanosomes caused syphilis, he had tested the drug on a wide range of syphilitic animals as well, recruiting collaborators from all over the world, including Kitasato Shibasaburo, since returned to Tokyo, and Albert Neisser, in what was then the Dutch East Indies, to test versions on rabbits, monkeys, and apes. Compound 606 cured them, too.

For good sound business reasons, this made arsphenamine far more interesting to a European chemical company. Sleeping sickness was a disease of what was not yet known as the Third World. Syphilis, on the other hand, had been killing and disabling Europeans for centuries. Though epidemiological historians continue to debate whether the disease was already present in Europe before explorers brought it back from the New World at the end of the fifteenth century, there’s no doubt that it was one of the best-known and feared diseases in Europe from 1495 onward, causing everything from the characteristic genital sores, to painful abscesses, to destruction of the nervous system, to death. In 1520, the great humanist philosopher and poet Desiderius Erasmus called it “. . . the most destructive of all diseases,” asking, “What contagion does thus invade the whole body, so much resist medical art, becomes inoculated so readily, and so cruelly tortures the patient?”

To ask the question was to answer it. Attempts to treat syphilis were either useless—the resin from the Caribbean “holy wood” known as guaiacum—or nearly as dangerous as the disease itself—mercury in its various forms, whose predictable side effects included mouth ulcers, loss of teeth, and even death, particularly since treatments could go on for years.* And, since blocking the route of transmission for the “great pox”—almost always sexual contact—was as problematic as asking people to stop breathing, syphilis was a terror for centuries before the organism that caused it was identified: the corkscrew-shaped bacterium Treponema pallidum.

A terror for some, an opportunity for others. With hundreds of thousands if not millions of men and women in the world’s wealthiest countries acquiring syphilis every year, a cure looked like a very profitable item for the company that could manufacture it in quantity.

By the first years of the twentieth century, the German chemical industry was a patchwork of partnerships and cooperative marketing arrangements—in German, Interessengemeinschaften, or “communities of interest.” All of them were profitable but, like all profitable enterprises, eager to become more so. One group was dominated by Agfa (Aktiengesellschaft für Anilinfabrikation, or the Aniline Manufacturing Corporation) and BASF (Badische Anilin- und Soda-Fabrik, or Baden Aniline and Soda Manufacturing). Another consisted of Hoechst AG—the same company where Behring had managed to exclude Ehrlich from participation in the profits from the diphtheria antitoxin—and Cassella Manufacturing.

The motto of Ehrlich’s lab was Geduld, Geschick, Glück, und Geld—patience, skill, luck, and money. Paying dozens of researchers to synthesize and test hundreds of potential molecules over the course of years isn’t cheap. That’s why the most significant innovation of the great German biochemical revolution was neither drugs nor vaccines themselves, but the creation of a durable funding source for ongoing lab research. Cassella had invested thousands of marks in Ehrlich’s research—far more, even accounting for inflation, than Napoleon III had provided to Pasteur—and supplied dozens of different chemical compounds customized to Ehrlich’s specifications, all in return for a share in any subsequent patents. This, even more than the drugs themselves, was new. It was also more than a little controversial, at least within the great chemical corporations, which had grown profitable on dyes and fertilizers rather than drugs. Ehrlich argued strenuously that it was also necessary, writing in 1908, “The material and mental support of our chemical factories is largely indispensable for modern therapeutics, and it would therefore not be advisable to loosen this natural union.” In 1910, the “natural union” paid off. Cassella’s partners at Hoechst AG introduced arsphenamine under the trade name Salvarsan. It was the world’s first synthetic chemotherapeutic agent.

Salvarsan was a huge success, within a year of its introduction the most widely prescribed drug in the world. It was also one of the most challenging, for both doctors and patients. Common side effects included nausea and vomiting. Storage of the compound was a tricky business, requiring vials tightly sealed to avoid oxidation. Treatment entailed weekly injections of the highly diluted compound over the course of a year—very highly diluted; each injection required at least 600 cc of solution, or a pint and a quarter of solvent, pumped into a patient’s body with every visit to the doctor. Many doctors decided to experiment with different solvents; others to try different injection locations: into the muscle, under the skin, or directly into the bloodstream. Intramuscular and subcutaneous injections tended to be painful; intravenous ones were still unfamiliar to physicians, and therefore risky. The dangers of such a regimen, given the needles and syringes available in the first decade of the twentieth century, were very real: Too much seepage from the injection site could result in amputation or even death. Moreover, as would become a familiar theme of every subsequent advance in drug therapy, it was widely used for ailments well outside its arena of effectiveness. In some unfortunate cases, it resulted in death from cerebral hemorrhage. In 1912, Hoechst, Ehrlich, and Hata responded with a new and improved version, marketed as Neosalvarsan, which was soluble in water, and less toxic to boot. Like its precursor, it was a blockbuster: a miracle drug.

For nearly a century after its introduction, the structure of Salvarsan remained unknown; that is, though the formula for Salvarsan is well known, Ehrlich’s lab never did establish the precise structure of the compound. They originally synthesized it by a very delicate chemical process* that tended to introduce impurities that could affect the compound’s toxicity and/or efficacy dramatically. The actual structure of Salvarsan, which wasn’t determined until 2005, consists of mixtures of cyclic molecules, which prompted some people to suggest that it wasn’t a Zauberkugel, but Zauberschrot: magic buckshot. Even more mysteriously: As of this writing, no one has yet figured out the mechanism by which Salvarsan and Neosalvarsan target T. pallidum so precisely.

The precision of the arsphenamines is one reason that, while Ehrlich’s achievement looms as large as any in the history of medicine, it was also a false start. Though it would remain the most important drug in the medical arsenal for decades, it was too narrowly effective to revolutionize the practice of medicine all by itself. The few attempts that followed in its wake were notably less successful; Ehrlich himself developed another compound intended to cure streptococcal pneumonia. However, while optochin (also known as ethylhydrocupreine hydrochloride) is indeed toxic to the strep bacteria, it is nearly as dangerous to humans, in whom it frequently causes irreversible blindness. Today it is used, like Koch’s tuberculin, as a diagnostic tool to identify the presence of Streptococcus pneumoniae rather than a therapy.

Nonetheless, Salvarsan marked as huge a step on the way to the birth of the antibiotic age as the germ theory itself. The power of a well-financed mass attack on disease had been demonstrated. It might very likely have provided the template for a true therapeutic revolution even before Ehrlich’s death in 1915, had Europe not chosen that very moment to destroy itself in the most violent war in the history of Western civilization.

Within six weeks of the assassination of the heir to the throne of the Austro-Hungarian Empire in Sarajevo, France and Germany had declared war on one another, troops of the Dual Monarchy were shelling Belgrade, Tsar Nicholas II had mobilized the armies of the Russian Empire, Kaiser Wilhelm’s army had invaded Belgium, and Great Britain had declared war on Germany. Europeans would spend the next four years killing, maiming, poisoning, and starving one another.

This grim future was unknown to the Russians, French, Germans, Austrians, and Britons who applauded the start of hostilities, each convinced that a rapid and painless victory was theirs for the taking. Two months later, the cheering hadn’t quite stopped, but it probably should have. During the first eight weeks of combat on what came to be known as the western front, the armies of the German Empire had suffered 550,000 casualties; those of the Republic of France, 590,000. At the First Battle of Ypres alone, which bloodied the fields of Flanders for five gory weeks, from the nineteenth of October to the twenty-second of November 1914, 85,000 French and 56,000 British soldiers were either killed or wounded, virtually destroying the relatively small British Expeditionary Force. On the other side, more than 18,000 German soldiers were listed as killed or missing, and more than 29,000 wounded.

One of them was a nineteen-year-old onetime medical student from the University of Kiel named Gerhard Domagk. At the beginning of hostilities, he had enlisted in a Leibgrenadier Regiment from Frankfurt an der Oder that was, literally, decimated at Ypres: more than one soldier in ten killed or wounded. With nothing salvageable of his unit, he was transferred to the eastern front and the German army’s medical department, named, with apparently no irony, the “Sanitary Service.” As might be expected, the field hospital to which Domagk was assigned in the Ukraine lacked something on the sanitary front. It also lacked any real weapon against the infectious diseases that killed as many soldiers during the First World War as gunfire: cholera, typhus, gangrene, dysentery, and a hundred more. Domagk would spend the rest of the war seeing medical impotence close up, treating patients and assisting in surgeries. By the end of 1918, and the Armistice, he had had more than enough of war, but not of medicine. Three years later, he graduated from Kiel as a fully accredited physician, and went to work in the Baltic port city’s main hospital.

He recognized quickly that he was far better suited to a career as a researcher than as a clinician, and joined the Pathological Institute at the University of Greifswald as a lecturer, a Privatdozent, in 1924. There, under the institute’s director, the pathologist Walter Gross, whom he would follow to the University of Münster, he began his studies of the most powerful weapon (really, the only weapon) against infectious disease: the vertebrate immune system. In a series of experiments, he injected hundreds of mice with a known pathogen, the bacterium known as Staphylococcus aureus, which causes everything from skin infections to pneumonia, and then extracted cells from the lining of the animals’ livers known to be part of the innate immune system*, the so-called Kupffer cells, to see how many of the staph bacteria they had gobbled up.


Credit: Wellcome Library, London

Gerhard Domagk, 1895–1964

Domagk’s experiments resulted in two significant findings. First: The performance of the Kupffer cells improved with exposure to the staph pathogen (though he couldn’t know the reason, which wasn’t discovered until the 1980s: pattern recognition proteins known as Toll-like receptors that identify different sorts of toxins produced by bacteria and tailor the response by white blood cells sent to destroy them). The second revelation was the big one: A staph cell that had been weakened beforehand by exposure to antiseptics was easier for the Kupffer cells to destroy. The insight would determine the next thirty years of Domagk’s life. In 1927, he followed his boss Walter Gross once again to a job in the same industry that had served as patron to Ehrlich and Behring: the dye business.*

Two years before, the German chemical and dye business had made a giant leap in the consolidation it had been pursuing before and especially after World War I. As far back as 1904, Carl Duisberg, the managing director of Bayer, had been lobbying his competitors on the value of combination, in the form of a fifty-eight-page memo laying out the advantages: lower costs, shared patents, lower risk, and much higher profits. He persisted through the First World War (during which Bayer’s U.S. subsidiary, known primarily for its Bayer Aspirin brand, was confiscated as enemy property) and the hyperinflation of the 1920s, in 1925 finally calling a conference—never one to shy away from grandiosity, he named it the “Council of the Gods”—at which the eight largest chemical firms in Germany, including Agfa, BASF, and Hoechst/Cassella, merged into one, to be known as the “Community of Interest of Dye Businesses”—in German Interessengemeinschaft Farbenindustrie, or I. G. Farben. Carl Bosch of BASF was its chief executive, Duisberg its chairman of the board.

I. G. Farben was the largest chemical company in the world, and one of the largest of any sort, comparable in size to American companies like General Motors or U.S. Steel, with interests not only in dyes, but in photographic film, industrial solvents, and, with the acquisition of BASF, fertilizers. Bosch had developed a method for synthesizing ammonia—the Haber-Bosch process, the discovery of which would produce two separate Nobel Prizes in Chemistry, one for Fritz Haber in 1918, the other for Carl Bosch in 1931—that today produces 150 million metric tons of fertilizer annually and is responsible for feeding nearly half of the world’s population. Most relevantly for Domagk, I. G. Farben was also Germany’s largest manufacturer of pharmaceuticals, such as they were.

They weren’t much. Though Behring’s antiserums and Ehrlich’s arsenicals were widely used (and highly profitable), and Bayer’s antiprotozoal drugs Plasmoquine and mepacrine (also known as Atabrine) were starting to be used as malaria treatments, the pharmacopoeia available to fight infectious disease wasn’t significantly greater than it had been when George Washington awakened on the last morning of his life. Nonetheless, the possibility of antibacterial drugs was still promising enough that Heinrich Hörlein, head of pharmaceutical research at Bayer, set Domagk up in a newly refurbished research lab that Carl Duisberg had built in Elberfeld, a town just east of Düsseldorf.

Duisberg and Hörlein believed they knew why no successful antibacterial drugs had appeared since Ehrlich’s Neosalvarsan a decade previously. It was the same argument behind the formation of I. G. Farben itself: scale. If Ehrlich had tested dozens of different recipes in order to find the antisyphilis treatment, Bayer would try hundreds. Or thousands. The way to make trial and error work was simply to increase the number of trials to the point that an effective compound would be guaranteed to emerge: to do for chemical innovation what Henry Ford had done for automobile manufacture.*

And so they did. Beginning with Domagk’s arrival in 1927, two chemists from Bayer’s tropical medicine group, Josef Klarer and Fritz Mietzsch, started producing coal tar–based compounds with at least some antibacterial properties on the test-tube-and-ring-stand equivalent of the assembly line at Ford’s River Rouge factory. By 1931, they had delivered more than three thousand such compounds to Domagk’s lab. There, the bacteriologist exposed them, one by one, to the family of streptococcal bacteria, pathogens responsible for ailments ranging from the familiar and moderate—strep throat, impetigo—to deadly diseases like toxic shock, streptococcal pneumonia, meningitis, and such exotica as the flesh-eating disease known as necrotizing fasciitis. Most of the compounds supplied for testing against strep were created by modifying chemicals that, as with Ehrlich’s early researches, were originally dyes that showed some affinity for a particular bacterium by staining it.

Meanwhile, Domagk was working at an equally feverish clip to isolate a strain of Streptococci that was a consistent and reliable killer of laboratory mice. His purpose was to create a kind of ideal experimental bacterium, which could rapidly show the effectiveness—or ineffectiveness—of each of the compounds supplied by Klarer and Mietzsch. Over the first few years, his supercharged strep produced thousands of rodent corpses, each one accompanied by autopsy notes documenting the particular compound used to treat it, the progress and symptoms of the disease(s) at time of death, and the method by which the bacterium and the compound had been exposed to the unlucky rodent. The notes also, of course, had a place for showing which compound had a significant effect against the strep bacteria.

For years, the last space remained stubbornly blank. A decade later, Iago Galdston, a physician and the secretary of the Medical Information Bureau of the New York Academy of Medicine, wrote, “By 1930 it was the universal opinion of physicians that nothing could be discovered which would be effective against the ordinary diseases produced by bacteria.”

Less than a year later, in 1931, Mietzsch and Klarer started modifying azo dyes, believing them to be less toxic than aniline dyes, and therefore more promising. By summer, the promise appeared to be borne out. One of the azo-derived compounds—KL-487, for “Klarer, #487”—killed at least some of Domagk’s strep strain, without overwhelmingly toxic side effects. Others followed: KL-517, KL-529. The protocol employed looks, in retrospect, a lot like fairly arbitrary tinkering. The first attempts involved successively adding side chains composed of chlorine atoms, one at a time, to the azo base. When the possibilities of chlorine were exhausted, they moved on to arsenic, then to iodine.

Eventually, on the advice of Hörlein himself, Klarer tried sulfur as the missing ingredient in an azo compound. The first one he tried, KL-695, incorporated a chemical well known in the dye business: sulfanilamide—formally para-amino-benzene-sulfonamide—which had been around since 1909. In late 1932, Domagk treated some more of his unfortunate mice with a compound that integrated sulfanilamide with an azo dye.

After four years of failure, it’s not hard to imagine the exultation when the results of the new compound were revealed. Domagk had administered the sulfanilamide-plus-azo compound to twelve mice infected with his superstrep strain, with fourteen untreated mice as a control. Within a week, all fourteen untreated mice were dead, most within two days. All twelve that received the compound survived. Whether administered intravenously or orally, it completely cured strep infections in mice. By the time Klarer and Mietzsch had delivered KL-730 to Domagk, the conclusion was inescapable. The lab had found the world’s first successful antibacterial drug.

Or, actually, drugs. Virtually all azo dyes with a sulfanilamide side chain worked on strep infections, which meant that patenting all conceivable variations would be a legal nightmare for Bayer. Unlike tangible goods, which can be sold only once, intellectual property can be sold to multiple consumers without diminishing inventory, a horn of plenty that never empties.* This is what makes patents valuable to inventors: they alone gain the legal right to bar anyone else from profiting off the invention. In the early days of patents, legal protection stopped at national borders, making patents far more valuable to inventors working in a large country like France or Britain than in a relatively small one like the Netherlands or Switzerland. Patents also behave very differently for mechanical inventions than for chemicals. No one can sell a new engine while keeping its components secret from a competitor, but it is considerably harder to reverse engineer a dye or a drug. One consequence is that, although patent offices first started granting them in the seventeenth century, the Swiss, the Dutch, and, before the 1870s, the Germans, who specialized in chemical innovation, tended to avoid even seeking patents. So long as secrecy could be preserved while dyes and other chemicals were being sold, patents offered far more risk than reward.

As chemistry matured into a science that could decipher the secrets of drugs and dyes, techniques for analyzing the components and structures of new chemical compounds made it relatively easy to reproduce a competitor’s proprietary molecule. And once secrecy offered no protection to novel chemicals, patents became as necessary for them as they had long been for machinery. Because of their different histories with patent protection, the Swiss, Dutch, Germans, and even the French still guarded chemical innovation with systems very different from the British and American model. German chemical patents didn’t protect a novel product like a new chemical compound, but rather the process by which it was synthesized. Bayer didn’t need to patent every effective variant of the sulfanilamide-plus-azo compounds, but they did have to publish a detailed description of the method by which all of them were created, which was itself a risky proposition. There are multiple methods of chemical synthesis, and a competitor could, if clever enough, create a sufficiently different one, and so benefit from Bayer’s discovery while investing only a fraction of Bayer’s time and money. So when Bayer applied for a new patent on Domagk’s discovery in 1932, the application was quite deliberately as obscure as possible about the creation of KL-730 in order to protect its exclusivity.

For the next two years, the new drug, which had been named Streptozon, was tested on both animal and human subjects. By the time the patent was granted, in 1934, Domagk had demonstrated that the drug worked against a number of nonstrep infections as well, including spinal meningitis, some strains of pneumococci, and gonorrhea. As a result, Bayer changed the name of the new drug. They called it Prontosil.*

In 1935, Domagk finally announced his discovery in an article for the journal Deutsche Medizinische Wochenschrift, and the number of human trials expanded to include subjects in Great Britain. There, the first results were considerably less encouraging than the ones reported from the German tests, probably because in the United Kingdom it was tested on strains of streptococci different from the supercharged version developed by Domagk. Notably, however, one of them was S. pyogenes, the primary cause of what was then known as puerperal, or childbed, fever.

The oldest known medical texts record the dangers, and frequency, of fevers suffered by women within hours of giving birth. The risks of contracting such a fever, however, skyrocketed with the growth of hospital births during the nineteenth century. By the time the Hungarian physician Ignaz Semmelweis, then working at Vienna General Hospital, published a study of childbed fever in 1847, it was attacking as many as four new mothers in ten. Tellingly, Semmelweis discovered that the risks of childbed fever were significantly lower in home births than in obstetrics wards. The cause, in the days before Lister, was the way physicians practiced their craft: never washing their hands.* Improved hygiene and antisepsis reduced the numbers of victims substantially, but did not eliminate the risk of disease. Though Semmelweis and others had made puerperal fever less common, their technique was prevention, not cure. Throughout the 1920s, physicians tried Salvarsan and other arsenicals, failing miserably. As late as 1936, it was still attacking up to three hundred out of every ten thousand new mothers in the United States . . . and killing forty-nine of them. Until, that is, the discovery that Prontosil stopped S. pyogenes even more effectively than Ehrlich’s magic bullets targeted T. pallidum. Virtually overnight, mortality from childbed fever fell from 20 to 30 percent to 4.7 percent.

By then, the power of Prontosil had already been demonstrated to the Bayer scientists in the most personal way possible. In December 1935, a needle was driven into the hand of Gerhard Domagk’s six-year-old daughter, Hildegarde. Lasting injury from the trauma was unlikely, but her risk of infection was considerable, as the following days proved. An abscess formed, and Hildegarde’s temperature skyrocketed, peaking at more than 104°. Streptococci had entered the girl’s bloodstream. A year earlier, she would very probably have lost her arm, and possibly her life. But this was the year of Prontosil. A week after starting the therapy, her infection was beaten.

The effectiveness of Prontosil was no longer in doubt. The mechanism by which it performed its magic, however, remained a mystery. Did the sulfa chain activate the azo dye, or the other way around? Why did Prontosil cure streptococcal infections in infected animals, but fail to kill strep bacteria in a test tube? Throughout Europe and the United States, biochemists tried to solve the puzzle, including Leonard Colebrook at St. Mary’s Hospital in London, and Ernest Fourneau, head of pharmaceutical research at the Institut Pasteur in Paris, both of whom immediately requested samples of Prontosil for testing. Colebrook’s request was granted. Fourneau’s was not.

Fourneau’s history with Bayer generally, and Hörlein particularly, was less than collegial. Despite Pasteur’s early experience conducting research on behalf of the French wine and cheese industries, France had scrupulously avoided strong links between commerce and chemistry. The premier research facility in the country, the Institut Pasteur, had originally been funded by donations from French families with no expectation of a return on their investment. As a result, the institute, employing a few dozen researchers at most, was unable to produce anything like the number of new chemical compounds that were created by hundreds of Bayer and I. G. Farben scientists and technicians. Nonetheless, Bayer regarded Fourneau as a formidable competitor. For one thing, he was a superb chemist; the author of more than two hundred scientific papers, Fourneau had already developed a synthetic alternative to cocaine for his then-employer, Camille Poulenc of Établissement Poulenc Frères,* before joining the Institut Pasteur as its director of research in 1911. For another, he had the French patent system on his side. Since Bayer drugs and dyes had no legal protection outside the borders of Germany, they were fair game for anyone with the energy to decode even the most opaque patent. Fourneau was nothing if not energetic; he devoured German academic journals and patent filings, attended German scientific meetings and trade fairs, and worked long hours building a file showing every niche where a homegrown drug might replace an import. In 1921, he had permanently alienated Duisberg and Hörlein when he reverse engineered Bayer’s proprietary treatment for sleeping sickness, originally introduced under the brand name Germanin, but sold in France by Poulenc as Stovarsol.

Scarcely surprising, therefore, that when Hörlein received Fourneau’s request for samples of Prontosil, he stalled. Fourneau did not. He assembled a team of French chemists to copy the existing drug using the obscure language of Bayer’s patent, and by the middle of 1935, Rubiazol, the French version of Prontosil, was on the market.

To Bayer, this was an expected, though exasperating, cost of doing business. What followed, however, was startling—and disastrous. On November 6, 1935, Daniel Bovet, a member of Fourneau’s team engaged in testing Rubiazol, infected forty mice with streptococci. The population was then broken into ten groups of four mice each: one an untreated control group, a second treated with the original Prontosil/Rubiazol, seven more given new products just synthesized at the Institut Pasteur, and a tenth left over. Bovet would later write, “I only had seven new products and we had an extra group of four mice. Why, I asked, not just try the product common to all these products?” The common product was the side chain that Klarer and Mietzsch had added to their azo dye: sulfanilamide.

Within days, the results were tallied. The control group of untreated mice had all died, as had six of the seven groups treated with the newly synthesized compounds. The four mice injected with Prontosil/Rubiazol were alive. So were the four given pure sulfanilamide. “From that moment on,” wrote Bovet, “the German chemists’ patents had no more value.”

And they knew it. At virtually the same moment that Bovet was testing his mice in Paris, in Elberfeld, Klarer and Mietzsch tested a compound—KL-821—that was nothing but sulfanilamide alone: no azo dye. It not only worked just as well as Prontosil, it was even more useful, treating staph infections as well as strep. The reason that Prontosil worked only in vivo and not in vitro—in a living animal and not a test tube—was finally clear: Living animals produced enzymes that separated the dye from sulfanilamide.

It was disappointing enough that all those experiments with different azo combinations had turned out to be pointless. The really bad news was this: A Viennese chemist named Paul Gelmo had identified sulfanilamide in his 1908 doctoral thesis and patented it in 1909, which meant that the patent had long since expired, and the substance was now in the public domain.

At Bayer, the news was devastating. No one yet knows how so many skilled researchers could fail so totally—the original documents remain sealed away—but a good guess is that Prontosil was a particularly acute example of the cognitive bias that psychologists call “functional fixedness” and civilians know as “if the only tool you have is a hammer, everything looks like a nail.” The large chemical industries had been built on dyes and wore blinders that shielded them from just about anything else. Which also explains why, even after Bayer knew that adding azo dyes to sulfanilamide did literally nothing to improve the drug’s antibacterial effectiveness, and that the active ingredient in the drug was freely available to anyone with the 1935 equivalent of a home chemistry set, they still persisted with the launch of Prontosil.

They tried to brand their product, so far as they could, selling something they called Prontosil Album (white Prontosil, also known as pure sulfa) alongside the original Prontosil for a year. It helped, a bit. Sales were strong and getting stronger when, in November 1936, the son of the president of the United States, Franklin Delano Roosevelt, Jr., contracted a vicious strep infection and, after weeks at death’s door, was miraculously cured by the still-experimental drug. The front-page headline for the December 17, 1936, edition of the New York Times read:

YOUNG ROOSEVELT SAVED BY NEW DRUG

DOCTOR USED PRONTYLIN ON STREPTOCOCCUS INFECTION OF THE THROAT

CONDITION ONCE SERIOUS

But Youth, in Boston Hospital, Gains Steadily; Fiancée, Reassured, Leaves Bedside

This wasn’t all bad for Bayer. “Prontylin” was the trade name under which sulfanilamide was sold in the United States by the Winthrop Chemical Company, which was half owned by Bayer’s parent, I. G. Farben. It was even manufactured at the old Bayer factory in Rensselaer, New York, which had been seized during the First World War. But the combination of unprecedented American demand and a complete lack of patent protection had the predictable result. By the end of 1937, a hundred different companies were selling sulfanilamide under a hundred different names. And not just in the United States; a Japanese version was named Pratanol; a Dutch one, Streptopan. In Brazil it was being sold as Stopton, in Czechoslovakia as Supron. The French had five different versions; the British, more than thirty. Forty-six years later, the physician Lewis Thomas wrote, “I remember the astonishment when the first cases of pneumococcal and streptococcal septicemia were treated in Boston in 1937. . . . Here were moribund patients, who would surely have died without treatment, improving in their appearance within a matter of hours of being given the medicine and feeling entirely well within the next day or so. . . . Medicine was off and running.”

Off and running, though not always in a helpful direction. For the first time, though far from the last, a truly effective medicine was wildly overprescribed. Physicians used it for head colds. Expectant mothers were regularly given sulfa prophylactically, before they showed any signs of puerperal fever.

The immediate consequences of the sulfa craze were significant enough. The side effects of the drug, including nausea and painful crystals in the urinary tract, affected hundreds of thousands of patients who got no benefit in return. The first high-profile sulfa treatments were just as historically consequential, though, marking the moment when large numbers of patients first demanded a specific therapy from their physicians by name. It would be some time before it was accurate to refer to the sick as “consumers” of health care, but that’s how they—and what were not yet known as the “worried well,” patients with no need for doctoring, but a lot of demand for it—started to behave. Physicians, finally armed with a treatment that actually cured infectious disease, were only too happy to oblige.

Sequelae is the word physicians use to describe the chronic and persistent consequences that follow an acute episode: back pain after an automobile accident, for example. The long-term sequelae of the public embrace of sulfa include the belief that doctors ought to be able to cure just about anything. One result was that everything else they provided—typically, comforting their patients with knowledge of the likely course of a disease or condition—started to be devalued by both doctors and patients. Antibiotics, and the transformation they ignited, professionalized health care like nothing before or since. That professionalism was a huge benefit, but it wasn’t cost free.

Nor was sulfa itself. Among other things, the drug remained difficult to administer, since it wasn’t easily soluble in water, or really much of anything else. This limited its appeal to patients, particularly in the United States, who preferred to take their medicine orally. This problem, all by itself, led to the single largest scandal in the history of antibacterial drugs, as well as the birth of drug regulation in America.

When the sulfa craze began in 1937, laws protecting patients from the dangers of the new drugs weren’t unknown, but they might as well have been. Though the first code of ethics of the American Medical Association barred direct-to-consumer advertising of drugs from “ethical medical practice,” the first federal law to address the administration of powerful and dangerous compounds in service of medicine was the Biologics Control Act of 1902. It was followed by the Pure Food and Drugs Act of 1906, which set out penalties for adulterating or misbranding medicines. Those penalties were determined by the United States Department of Agriculture’s Bureau of Chemistry, headed from 1906 to 1913 by Harvey Washington Wiley, whose feelings about pharmaceutical regulation can be summed up by the title of a book he wrote in 1929: The History of a Crime Against the Food Law: The Amazing Story of the National Food and Drugs Law, Intended to Protect the Health of the People, Perverted to Protect Adulteration of Food and Drugs.

Even when Wiley wrote his “amazing story,” most of the drug business remained in patent medicines, a predictable consequence of a widespread American belief in self-medication (or “autotherapy”). The number of advertised compounds grew from about a thousand in 1858, generating about $3.5 million in annual sales, to twenty-eight thousand by 1905, with revenues of nearly $75 million. By 1912, revenues had increased to more than $110 million. To call them fraudulent is to compliment them. Some of the most popular were targeted at the same market that would eventually turn Viagra into a multibillion-dollar business: “Persenico” promised to combat “low vitality . . . of sexual origin” while “Revivio” simply told men they could “improve your vigor.” Those that weren’t ineffective were frequently poisonous; “Gouraud’s Oriental Face Cream” contained possibly toxic levels of mercury. The most famous of all patent medicines wasn’t quite that dangerous. The recipe that Confederate army veteran John Pemberton concocted to wean himself from an addiction to the morphine he had been given after the Battle of Columbus, a mixture of alcohol, coca leaves, and kola nuts that was first sold as Pemberton’s French Wine Coca, and later, sans alcohol, as Coca-Cola, took the nation by storm beginning in 1886.

The standardization of “real” medicines dates back as far as 1820, when one of Benjamin Rush’s former students, Jacob Bigelow, published the first edition of a catalog of more than two hundred drugs he entitled the United States Pharmacopeia. Nonetheless, it took nearly a century before the 1906 Pure Food and Drugs Act established the USP as a national formulary standard and used it to require that drugs not be adulterated. It also set out punishments for misbranding, such as selling under a false name, or not identifying the presence of narcotics, like morphine, opium, cocaine, and heroin. The word “punishments” should be used advisedly, however, since the law called only for confiscation of adulterated or misbranded drugs, not prosecution of those who sold them.* Moreover, because the original law focused exclusively on labeling, rather than safety, it was virtually useless. Robert N. Harper’s “Cuforhedake Brane-Fude” was marketed as an entirely safe “brain tonic” containing alcohol, caffeine . . . and acetanilide, a highly toxic analgesic that causes cyanosis. When the government, under Harvey Wiley, won the maximum penalty of $700, it represented a tiny fraction of the thousands the compound had earned.

And so it went. In 1911, the 1906 act was amended to allow a misbranding prosecution if the package included statements that were “false and fraudulent,” which created a loophole big enough to drive an entire industry through. Remedies that promised to cure everything from pneumonia to tuberculosis to cancer were protected so long as the manufacturer could claim that it believed the claims, and thus had no fraudulent intent.

Which was the state of play when Franklin Delano Roosevelt, Jr., became the poster boy for the new wonder drug, sulfa. Though his father’s New Dealers (primarily Walter G. Campbell, one of Wiley’s protégés, and the agricultural economist Rexford Tugwell) had been trying to expand the reach of the Pure Food and Drugs Act ever since they took office in 1933, they failed each and every time.

They probably would have gone on failing, if sulfa had been an easily soluble powder.

The S. E. Massengill Company of Bristol, Tennessee, was one of the dozens of American patent drug manufacturers eager to get on board the sulfanilamide gravy train. The sales staff at Massengill, and Samuel Evans Massengill himself, believed that the most successful version of the new drug would resemble cough syrup: a sweetened liquid. Harold Watkins, the company’s chief chemist, tried a number of solvents before he happened on a solution of 58 pounds of sulfa, along with raspberry flavoring and saccharin, dissolved in 60 gallons of diethylene glycol, a component of resins, brake fluid, and coolants whose effects on living animals—dizziness, intoxication, and nausea an hour after ingestion; within days, an elevated heart rate, muscle spasms, and acute kidney failure—were evidently unknown to him. In October 1937, Massengill’s Elixir Sulfanilamide went on sale.

On October 11, 1937, the president of the Tulsa County Medical Society sent a telegram to the American Medical Association’s chemical lab, alerting them to six deaths that occurred shortly after administration of the elixir. Eight days later, the Washington Post’s headline read “VENEREAL DISEASE ‘CURE’ KILLS 8 OUT OF 10 PATIENTS IN OKLAHOMA.” On October 25, the New York Times ran a story about the “nationwide race with death.”

The response from Washington was uncharacteristically decisive. The Food and Drug Administration put its entire field force—239 inspectors—to work tracking down every drop of Massengill’s initial shipment: 240 gallons of the diethylene glycol–laced sulfa syrup. They secured 234 gallons and 1 pint. It is presumed that most of the remaining 5⅞ gallons never made it down the throats of anyone, since fewer than a hundred people were killed.*

Massengill, despite protesting its innocence (and blaming sulfa, rather than diethylene glycol, for the deaths), was brought to trial, though not, as one might expect, for poisoning its customers; in that, it hadn’t actually broken any law. Instead, it was prosecuted under the only applicable statute: mislabeling, since it had marketed the deadly syrup as an “elixir,” and elixirs were required to contain alcohol. In fact, the existing law only allowed FDA agents to seize bottles that had been shipped across state lines, and whose seals had not been broken. Massengill eventually pleaded guilty to 174 counts of mislabeling and was fined $150 on each count, plus costs, for a total of $26,100.

Nonetheless, the scandal had at least one salutary effect: The 1938 Federal Food, Drug, and Cosmetic Act was signed into law by President Roosevelt on June 25, 1938. It was, in many ways, a dramatic improvement over existing law. Section 502, for example, extended the definition of “misbranding” to cover any drug that failed to carry “adequate warnings for use in those pathological conditions or by children where its use may be dangerous to health, or against unsafe dosage or methods or duration of administration . . .” and required listing of all ingredients, not simply the “active” ones (diethylene glycol, after all, hadn’t been the active ingredient in the Massengill syrup). Secret remedies were now prohibited. Interstate shipment was forbidden without approval from the secretary of agriculture. The act’s final clause created new regulations for a new category: “New Drugs.” Henceforth, manufacturers needed governmental permission to market any drug not already available.

It’s hard to know whether in 1938 anyone knew just how many “new drugs” would appear over the next ten, twenty, and fifty years. For the immediate future, however, sulfa remained king. In 1939, a variant known as sulfapyridine, developed in Great Britain by the firm May & Baker, a subsidiary of Rhône-Poulenc, was used to quell a meningitis outbreak in Sudan—significantly, one caused by meningococcus, not streptococci. Sulfapyridine was also far more effective against pneumococcal pneumonia than predecessors like Prontosil, reducing mortality in one test from 27 percent to 8 percent. Popularly known as M&B 693, it was approved for sale by prescription in the United States in March 1939 (and would famously cure Winston Churchill of a strep infection in December 1943).

Also in 1939, Gerhard Domagk was awarded the Nobel Prize in Physiology or Medicine. He was forbidden by his government to travel to Stockholm to accept it; the regime, then plunging Europe into a war even more devastating than the First World War of 1914–1918, was still offended by the Peace Prize given to the German pacifist Carl von Ossietzky in 1936 for exposing German rearmament. Domagk was even briefly jailed for being “too polite to the Swedes” in his response to the prize.*

By then, the British chemists Donald Woods and Paul Fildes had discovered the mechanism by which sulfa drugs do their magic. Sulfa inhibits an enzyme that bacteria need in order to synthesize folate, a B vitamin. Sulfanilamide essentially tricks the bacterial enzyme into latching onto it, rather than onto the correct compound, para-aminobenzoic acid, or PABA (which is why sulfa remained limited in its effectiveness; not all bacteria use PABA).

Despite those limits, sulfa was used throughout the Second World War to treat everything from wound sepsis to gonorrhea. The first wonder drug, however, was no more immune to the resourcefulness of bacteria than any of its successors.

The phenomenon known as acquired antibiotic resistance is no simple thing, and is imperfectly understood even today.* Bacteria are exquisitely sensitive to environmental change; the single-celled organisms are quick studies of evolutionary innovation, able to acquire new genetic blueprints for just about every imaginable function from other bacteria, and even from free-floating viruses. Moreover, they reproduce so quickly—some bacteria have generations only twenty minutes long—that a useful bit of DNA spreads extremely rapidly: a single cell dividing every twenty minutes can, in principle, leave tens of billions of descendants in half a day. Under strong selection pressure, such as the presence of a powerful bactericide like the sulfanilamides, a new gene that codes for a folate-producing enzyme unaffected by sulfa can take over a population seemingly overnight. Which is why, given its widespread wartime use, sulfa resistance appeared quickly. The first soldiers treated for gonorrhea with sulfa experienced a 90 percent cure rate in the late 1930s. By 1942, the rate was 75 percent and falling fast.

By then, however, Domagk’s miracle was about to be preempted by a new weapon against bacterial disease; not just a single magic bullet, but an entire arsenal.


