ONE
On the morning of Saturday, December 14, 1799, George Washington awakened before dawn, and told his wife, Martha, he was so sick that he could barely breathe.
The indicated treatment was straightforward enough. By the time the sun had risen, Washington’s overseer, George Rawlins, “who was used to bleeding the people,” had opened a vein in Washington’s arm from which he drained approximately twelve ounces of his employer’s blood. Over the course of the next ten hours, two other doctors—Dr. James Craik and Dr. Elisha Dick—bled Washington four more times, extracting as much as one hundred additional ounces.
Removing at least 60 percent of their patient’s total blood supply was only one of the curative tactics used by Washington’s doctors. The former president’s neck was coated with a paste composed of wax and beef fat mixed with an irritant made from the secretions of dried beetles, one powerful enough to raise blisters, which were then opened and drained, apparently in the belief that doing so would draw out the disease-causing poisons. He gargled a mixture of molasses, vinegar, and butter; his legs and feet were covered with a poultice made from wheat bran; he was given an enema; and, just to be on the safe side, his doctors gave Washington a dose of calomel—mercurous chloride—as a purgative.
Unsurprisingly, none of these therapeutic efforts worked. By 10:00 P.M., America’s first president knew he was dying. His last words were, “I am just going! Have me decently buried, and do not let my body be put into the vault less than three days after I am dead. Do you understand me? ’Tis well!”*
Less than twenty-two years later, another world-historic figure had his final encounter with early nineteenth-century medicine. Napoleon Bonaparte, exiled to Longwood House on the South Atlantic island of St. Helena after his defeat at Waterloo in 1815, experienced bouts of abdominal pain and vomiting for months, while four different physicians (each of whom wrote a memoir about their famous patient) treated him by administering hundreds of enemas and regularly dosing him with the powerful emetic known chemically as antimony potassium tartrate—not, perhaps, the best treatment for a patient already weak from vomiting. The onetime emperor of France breathed his last on May 5, 1821.
Historians with a morbid bent have produced thousands of pages of speculation on the diseases that killed two of the most famous men who ever lived. Today, the prevailing retrospective diagnosis for Washington is that he was dispatched by an infection of the epiglottis, probably caused by the tiny organism known as Haemophilus influenzae type b, the pathogen that also causes bacterial meningitis. A popular minority opinion is that Washington died from PTA, or peritonsillar abscess, a strep infection that creates an abscess under the tonsil that swells with pus until it actually strangles the patient. (The other name for PTA is “quinsy” or “quinsey,” from the Greek word that means “to strangle a dog.”) One thing that didn’t kill Washington, despite its mention in just about every biography of the man, was his tour of Mount Vernon in cold, wet weather during the days leading up to his death, and his decision to dine with friends wearing still-wet clothing on the night of December 13. Infectious diseases aren’t caused by catching a chill.
The debate about the cause of Napoleon Bonaparte’s death is, likewise, fueled by enough raw material that it seems likely to go on forever. The initial autopsy concluded that l’empereur had died of stomach cancer, the same disease that killed Napoleon’s father in 1785. Hepatitis has its advocates, as does the parasitic disease known as schistosomiasis, which Napoleon is thought to have acquired during his Egyptian campaign of 1798. Neither is as popular among amateur historians as arsenic poisoning, either as a murder weapon or an accident caused by exposure to wallpaper more or less saturated in the stuff.
An equally honest answer for both former generals is that they died of iatrogenesis. Bad medicine. Or, more accurately, heroic medicine.
The term “heroic medicine” is generally used to describe the period, roughly 1780–1850, during which medical education and practice were highly interventional, even when the interventions did at least as much harm as good. The dates are slightly deceptive. Medical practice, from Hippocrates to Obamacare, has constantly oscillated between interventional and conservative approaches; the perfect point of balance is a moving target for physicians, and seems likely always to be.
Consider the persistence of the humoral theory of disease. Bloodletting, for example, was first popularized by the second-century Greek physician Galen of Pergamon as a way of balancing the four humors: blood, phlegm, and black and yellow bile. The doctrine that gave a vocabulary for distinct individual temperaments based on the relative amounts of these bodily fluids—sanguine personalities had a high level of blood; bile led to biliousness—was originally a guide to medical practice: Too much bile caused fevers, while too much phlegm resulted in epilepsy.
Humoral doctrine, in one form or another, dominated Western medicine for nearly two thousand years. It didn’t persist because following its dictates improved the chances that patients would recover from disease, nor even because it was an accurate guide to physiology. Blood, in classic humoralism, was produced by the liver; what the humoral physician believed to be “black bile” was likely blood that had been exposed to oxygen. As much as anything else, the appeal of humoralism seems to have been the belief that, since health was a sign of balance, disease must represent an imbalance of something. It also reinforced a nearly universal belief in elementalism, which suggested that all phenomena could be reduced to interactions among fundamental elements like air, earth, water, and fire.
The real secret to humoralism’s durability was the lack of a superior alternative. And it was nothing if not durable. Humoral balancing was still being recommended in the 1923 edition of The Principles and Practice of Medicine by Sir William Osler, one of the four founders of the Johns Hopkins School of Medicine. The sixth-century Byzantine physician Alexander of Tralles may have treated his patients with powerful alkaloid extracts like atropine and belladonna, purged them with verdigris (copper[II] acetate), and sedated them with opium. But he also cared for them in hospitals whose primary function was not treatment, but support: making patients as comfortable as possible while waiting for either recovery or death. The best-known words of the fifth-century B.C. Father of Medicine, Hippocrates, are “do no harm” and “nature is the best healer.”*
If a further reminder is needed that the practice of heroic medicine began long before the eighteenth century, its greatest icon lived nearly two centuries before his successors turned him into a cult figure. The Swiss German physician, astrologer, and master of all things occult named Philippus Aureolus Theophrastus Bombastus von Hohenheim—as a kindness to modern readers, he is generally remembered by the honorific Paracelsus—was, like Galen, a protoscientist: a careful observer of nature limited by the lack of a mechanism for testing his hypotheses experimentally. Though he recognized the inadequacies of the humoral theories of Galen, he substituted an equally unlikely schema built on the balance of three different elements: mercury, sulfur, and salt. From the sixteenth century on, mercury especially became a remarkably popular remedy for virtually every medical condition. In 1530, Paracelsus recommended mercury with such enthusiasm that he inspired the Austrian physician Gerard van Swieten to prescribe it in the form of mercuric chloride—more soluble in water and, therefore, even more toxic than the mercurous chloride known as calomel prescribed by Washington’s physicians—as a cure for syphilis. One syphilis treatment from 1720 called for four doses of calomel over three days, separated by a modest amount of bleeding—only a pint or so. Concoctions that contained mercury remained part of the materia medica for centuries because of a confusion between potency and effectiveness. Doctors cheered when patients exhibited the ulcerated gums and uncontrollable salivation that are the classic signs of mercury poisoning, taking them as evidence that the medicine was clearly working.
Mercury therapy was only one of a collection of techniques and beliefs that seem, in retrospect, ghoulish in the extreme. True, medical knowledge had increased dramatically in some areas in the sixteen centuries after Galen. The Brussels-born physician Andreas Vesalius revolutionized anatomy with his systematic dissections of human corpses; William Harvey discovered that blood circulated from the heart to the extremities and back again. Even the enthusiasm for mercury, and a number of other toxic substances, wasn’t complete nonsense. As a more scientific group of physicians would soon demonstrate, mercury actually does kill some very nasty disease-causing pathogens. The great weakness of Washington’s and Napoleon’s physicians wasn’t ineptitude—they were probably the most skilled men on the planet when it came to making their patients bleed or vomit—but theory. Eighteenth-century physicians knew as little about the causes of disease as a cat knows about calculus, and certainly no more than their predecessors had known in the second century. A doctor could set a broken bone, perform elaborate-though-useless tests, and comfort the dying, but could do little else. As the eighteenth century turned into the nineteenth, the search for reliable and useful treatments for disease had been under way for millennia, with no end in sight.
Benjamin Rush, the most famous physician of the newly independent United States, dosed hundreds of patients with mercury during Philadelphia’s yellow fever outbreak of 1793. Rush also treated patients exhibiting signs of mental illness by blistering, the same procedure used on Washington in extremis. One 1827 recipe for a “blistering plaster” should suffice:
Take a purified yellow Wax, mutton Suet, of each a pound; yellow Resin, four ounces; Blistering flies in fine powder, a pound. [The active ingredient of powdered “flies” is cantharidin, the highly toxic irritant secreted by many blister beetles, including the species known as Spanish fly.] Melt the wax, the suet, and the resin together, and a little before they concrete in becoming cold, sprinkle in the blistering flies and form the whole into a plaster. . . . Blistering plasters require to remain applied [typically to the patient’s neck, shoulder, or foot] for twelve hours to raise a perfect blister; they are then to be removed, the vesicle is to be cut at the most depending part . . .
Benjamin Rush was especially fond of using such plasters on his patients’ shaven skulls so that “permanent discharge from the neighborhood of the brain” could occur. He also developed the therapy known as “swinging,” strapping his patients to chairs suspended from the ceiling and rotating them for hours at a time. Not for Rush the belief that nature was the most powerful healer of all; he taught his medical students at the University of Pennsylvania to “always treat nature in a sick room as you would a noisy dog or cat.”
When physicians’ only diagnostic tools were eyes, hands, tongue, and nose, it’s scarcely a surprise that they attended carefully to observable phenomena like urination, defecation, and blistering. As late as 1862, Dr. J. D. Spooner could write, “Every physician of experience can recall cases of internal affections [sic] which, after the administration of a great variety of medicines, have been unexpectedly relieved by an eruption on the skin.” To the degree therapeutic substances were classified at all, it wasn’t by the diseases they treated, but by their most obvious functions: promoting emesis, narcosis, or diuresis.
Heroic medicine was very much a creature of an age that experienced astonishing progress in virtually every scientific, political, and technological realm. The first working steam engine had kicked off the Industrial Revolution in the first decades of the eighteenth century. Between 1750 and 1820, Benjamin Franklin put electricity to work for the first time, Antoine Lavoisier and Joseph Priestley discovered oxygen, Alessandro Volta invented the battery, and James Watt the separate condenser. Thousands of miles of railroad track were laid to carry steam locomotives. Nature was no longer a state to be humbly accepted, but an enemy to be vanquished; physicians, never very humble in the first place, were easily persuaded that all this newfound chemical and mechanical knowledge was an arsenal for the conquest of disease.
And heroic efforts “worked.” That is, they reliably did something, even if the something was as decidedly unpleasant as vomiting or diarrhea. Whether in second-century Greece or eighteenth-century Virginia (or, for that matter, twenty-first-century Los Angeles), patients expect action from their doctors, and heroic efforts often succeeded. Most of the time, patients got better.
It’s hard to overstate the importance of this simple fact. Most people who contract any sort of disease improve because of a fundamental characteristic of Darwinian natural selection: The microorganisms responsible for much illness and virtually all infectious disease derive no long-term evolutionary advantage from killing their hosts. Given enough time, disease-causing pathogens almost always achieve a modus vivendi with their hosts: sickening them without killing them.* Thus, whether a doctor gives a patient a violent emetic or a cold compress, the stomachache that prompted the intervention is likely to disappear in time.
Doctors weren’t alone in benefiting from the people-get-better phenomenon, or, as it is known formally, “self-limited disease.” Throughout the eighteenth and early nineteenth centuries, practitioners of what we now call alternative medicine sprouted like mushrooms all over Europe and the Americas: Herbalists, phrenologists, hydropaths, and homeopaths could all promise to cure disease at least as well as regular physicians. The German physician Franz Mesmer promoted his theory of animal magnetism, which maintained that all disease was due to a blockage in the free flow of magnetic energy, so successfully that dozens of European aristocrats sought his healing therapies.*
The United States, in particular, was a medical free market gone mad; by the 1830s, virtually no license was required to practice medicine anywhere in the country. Most practicing physicians were self-educated and self-certified. Few ever attended a specialized school or even served as apprentices to other doctors. Prescriptions, as then understood, weren’t specific therapies intended for a particular patient, but recipes that druggists compounded for self-administration by the sick. Pharmacists frequently posted signs advertising that they supplied some well-known local doctor’s formulations for treating everything from neuralgia to cancer. Doctors didn’t require a license to sell or administer drugs, except for so-called ethical drugs—the term was coined in the middle of the nineteenth century to describe medications whose ingredients were clearly labeled—which were compounds subject to patent and assumed to be used only for the labeled purposes. Everything else, including proprietary and patent medicines (just to confuse matters, like “public schools” that aren’t public, “patent” medicines weren’t patented), was completely unregulated, a free-for-all libertarian dream that supplemented the Hippocratic Oath with caveat patiens: “Let the patient beware.”*
—
The historical record isn’t reliable when it comes to classifying causes of death, even in societies that were otherwise diligent about recording dates, names, and numbers of corpses. As a case in point, the so-called Plague of Athens that afflicted the Greek city in the fifth century B.C. was documented by Thucydides himself, but no one really knows what caused it, and persuasive arguments for everything from a staph infection to Rift Valley fever are easily found. That such a terrifying and historically important disease outbreak remains mysterious to the most sophisticated medical technology of the twenty-first century underlines the problem faced by physicians—to say nothing of their patients—for millennia. Only a very few diseases even had a well-understood path of transmission. From the time the disease first appeared in Egypt more than 3,500 years ago, no one could fail to notice that smallpox scabs were themselves contagious, and contact with them was dangerous.* Similarly, the route of transmission for venereal diseases like gonorrhea and syphilis—which probably originated in a nonvenereal form known as “bejel”—isn’t a particularly daunting puzzle: Symptoms appear where the transmission took place. Those bitten by a rabid dog could have no doubt of what was causing their very rapid death.
On the other hand, the routes of transmission for some of the most deadly diseases, including tuberculosis, cholera, plague, typhoid fever, and pneumonia, were utterly baffling to their sufferers. Bubonic plague, which killed tens of millions of Europeans in two great pandemics, one beginning in the sixth century A.D., the other in the fourteenth, is transmitted by the bites of fleas carried by rats, but no one made the connection until the end of the nineteenth century. The Italian physician Girolamo Fracastoro (who not only named syphilis, in a poem entitled “Syphilis sive morbus Gallicus,” but, in an excess of anti-Gallicism, first called it the “French disease”) postulated, in 1546, that contagion was a “corruption which . . . passes from one thing to another and is originally caused by infection of imperceptible particles” that he called seminaria: the seeds of contagion. Less presciently, he also argued that the particles only did their mischief when the astrological signs were in the appropriate conjunction, and preserved Galen’s humoral theory by suggesting that different seeds have affinities for different humors: the seed of syphilis with phlegm, for example. As a result, Fracastoro’s “cures” still required expelling the seeds via purging and bloodletting, and his treatments were very much part of a medical tradition thirteen centuries old.
However, while seventeenth-century physicians (and “natural philosophers”) failed to find a working theory of disease, they were no slouches when it came to collecting data about the subject. The empiricists of the Age of Reason were all over the map when it came to ideas about politics and religion, but they shared an obsessive devotion to experiment and observation. Their worldview, in practice, demanded the rigorous collection of facts and experiences, well in advance of a theory that might, in due course, explain them.
In the middle of the seventeenth century, the English physician Thomas Sydenham attempted a taxonomy of different diseases afflicting London. The haberdasher turned demographer John Graunt detailed the number and—so far as they were known—the causes of every recorded death in London, publishing the world’s first mortality tables in 1662.* The French physician Pierre Louis examined the efficacy of bloodletting on different populations of patients, thus introducing the practice of medicine to the discipline of statistics. The Swiss mathematician Daniel Bernoulli even analyzed smallpox mortality to estimate the risks and benefits of inoculation (he concluded that the benefit in population survival outweighed the fatality rate among those inoculated). And John Snow famously established the route of transmission for London’s nineteenth-century cholera epidemics, tracing them to a source of contaminated water.
But plotting the disease pathways, and even recording the traffic along them, did nothing to identify the travelers themselves: the causes of disease. More than a century after the Dutch draper and lens grinder Anton van Leeuwenhoek first described the tiny organisms visible in his rudimentary microscope as “animalcules” and the Danish scientist Otto Friedrich Müller used the binomial categories of Carolus Linnaeus to name them, no one had yet made the connection between the tiny creatures and disease.
The search, however, was about to take a different turn. Less than two years after Napoleon’s death in 1821, a boy was born in the France he had ruled for more than a decade. The boy’s family, only four generations removed from serfdom, were the Pasteurs, and the boy was named Louis.
—
The building on rue du Docteur Roux in Paris’s 15th arrondissement is constructed in the architectural style known as Henri IV: a steeply pitched blue slate roof with narrow dormers, walls of pale red brick with stone quoins, square pillars, and a white stone foundation. It was the original site of, and is still a working part of, one of the world’s preeminent research laboratories: the Institut Pasteur, whose eponymous founder opened its doors in 1888. As much as anyone on earth, he could—and did—claim the honor of discovering the germ theory of disease and founding the new science of microbiology.
Louis Pasteur was born to a family of tanners working in the winemaking town of Arbois, surrounded by the sights and smells of two ancient crafts whose processes depended on the chemical interactions between microorganisms and macroorganisms—between microbes, plants, and animals. Tanners and vintners perform their magic with hides and grapes through the processes of putrefaction and fermentation, whose complicity in virtually every aspect of food production, from pickling vegetables to aging cheese, would fascinate Pasteur long before he turned his attention to medicine.
For his first twenty-six years, Pasteur’s education and career followed the conventional steps for a lower-class boy from the provinces heading toward middle-class respectability: He graduated from Paris’s École Normale Supérieure, then undertook a variety of teaching positions in Strasbourg, Ardèche, Paris, and Dijon. In 1848, however, the young teacher’s path took a different turn—as, indeed, did his nation’s. The antimonarchical revolutions that convulsed all of Europe during that remarkable year affected nearly everyone, though not in the way that the revolutionaries had hoped.
Alexis de Tocqueville described the 1848 conflict as occurring in a “society [that] was cut in two: Those who had nothing united in common envy, and those who had anything united in common terror.” It seems not to have occurred to the French revolutionaries who replaced the Bourbon monarchy with France’s Second Republic that electing Napoleon Bonaparte’s nephew as the Republic’s first president might not work out as intended. Within four years, Louis-Napoleon replaced the Second Republic with the Second Empire . . . and promoted himself from president to emperor.
France’s aristocrats had cause for celebration, but so, too, did her scientists. The new emperor, like his uncle, was an avid patron of technology, engineering, and science. Pasteur’s demonstration of a process for transforming “racemic acid,” an equal mixture of right- and left-hand isomers of tartaric acid, into its constituent parts—a difficult but industrially useless feat—both won him the red ribbon of the Légion d’honneur and earned him the attention of France’s new leader. The newly crowned emperor Napoleon III was a generous enough patron that, by 1854, Pasteur was dean of the faculty of sciences at Lille University—significantly, in the city known as the “Manchester of France,” located at the heart of France’s Industrial Revolution. And the emperor did him another, perhaps more important, service by introducing the schoolteacher-turned-researcher to an astronomer and mathematician named Jean-Baptiste Biot. Biot would be an enormously valuable mentor to Pasteur, never more so than when he advised his protégé to investigate what seemed to be one of the secrets of life: fermentation.
At the time Pasteur embarked on his fermentation research, the scientific world was evenly divided over the nature of the process by which, for example, grape juice was transformed into wine. On one side were advocates for a purely chemical mechanism, one that didn’t require the presence or involvement of living things. On the other were champions for the biological position, which maintained that fermentation was a completely organic process. The dispute embraced not just fermentation, in which sugars are transformed into simpler compounds like carboxylic acids or alcohol, but the related process of putrefaction, the rotting and swelling of a dead body as a result of the dismantling of proteins.
Credit: National Institutes of Health/National Library of Medicine
Louis Pasteur, 1822–1895
The processes, although distinct, had always seemed to have something significant in common. Both are, not to put too fine a point on it, aromatic; the smell of rotten milk or cheese is due to the presence of butyric acid (which also gives vomit its distinctive smell), while the smells of rotting flesh come from the chemical process that turns amino acids into the simple organic compounds known as amines, in this case, the aptly named cadaverine and putrescine, which were finally isolated in 1885. But did they share a cause? And if so, what was it? The only candidates were nonlife and life: chemistry or biology.
The first chemical analysis of fermentation—from sugar into alcohol—was performed in 1789 by the French polymath Antoine Lavoisier, who called the process “one of the most extraordinary in chemistry.” Lavoisier described how sugar is converted into “carbonic acid gas”—that is, CO2—and what was then known as “spirit of wine” (though he wrote that the latter should be “more appropriately called by the Arabic word alcohol since it is formed from cider or fermented sugar as well as wine”). In fact, the commercial importance of all the products formed from fermentation—wine, beer, and cheese, to name only a few—was so great that, in 1803, the Institut de France offered a prize of a kilogram of gold for describing the characteristics of things that undergo fermentation. By 1810, in another industrial innovation, French food manufacturers figured out how to preserve their products by putting them into closed vessels they then heated, driving off any oxygen trapped inside (and, in the process, inaugurating the canning industry). Since the oxygen-free environment retarded fermentation, and, therefore, spoilage, it was believed that fermentation was somehow related to the presence of oxygen: simple chemistry.
However, another industrial innovation, this time in the manufacture of optical microscopes, suggested a different theory. In the 1830s, the Italian astronomer Giovanni Amici discovered how to make lenses that magnified objects more than five hundred times, which allowed observers to view objects no wider than a single micron: a thousandth of a millimeter. The first objects examined were the ones most associated with commercially important fermentation: yeasts.* In 1837, the German scientist Theodor Schwann looked through Amici’s lenses and concluded that yeasts were, in fact, living things.
As with many such breakthroughs, Schwann’s findings didn’t convince everyone. To many, including Germany’s preeminent chemist, Justus von Liebig, this smacked of a primitive form of vitalism. It seemed both simpler, and more scientific, to attribute fermentation to the straightforward chemical interaction between sugar and air. The battle would go on for decades,* until Pasteur summed up a series of experiments with what was, for him, a modest conclusion. “I do not think,” he wrote, “there is ever alcoholic fermentation unless there is simultaneous organization, development, and multiplication of” microscopic animals. By 1860, he had demonstrated that microorganisms were responsible both for fermentation—turning grape juice into wine—and for spoilage, such as the souring of milk. And by 1866, Pasteur, by then professor of geology, physics, and chemistry in their application to the fine arts at the École des Beaux Arts, published, in his Studies on Wine, a method for destroying the microorganisms responsible for spoiling wine (and, by extension, milk) by heating to subboiling temperatures—60°C or so—a process still known as pasteurization. He even achieved some success in solving the problem of a disease that was attacking silkworms and thereby putting France’s silk industry at risk.
The significance of these achievements is not merely that they provide evidence for Pasteur’s remarkable productivity. More important, they were, each of them, a reminder of the changing nature of science itself. In an era when national wealth was, more and more, driven by technological prowess rather than the acreage of land under cultivation, the number of laborers available, or even the pursuit of trade, industrial chemistry was a strategic asset. France was Europe’s largest producer of wine and dairy products, and the weaver of a significant amount of the world’s silk, and anything that threatened any of these “industries” had the attention of the national government.
The next twenty years would revolutionize medicine further still, and once again Pasteur would be at the revolution’s red-hot center, establishing the critical connection between fermentation, putrefaction, and disease.
—
As early as 1857, Pasteur had disputed Liebig’s position that putrefaction was the cause of fermentation and contagious disease—that both were, in some sense, a result of rot. The leap was to invert Liebig’s logic. When Pasteur examined diseased silkworms, and the ailing wine grapes he had studied in Arbois, what he saw under the microscope looked exactly like fermentation. Since he knew that fermentation was caused by microorganisms like yeasts and even smaller living things, he reasoned that fermentation and disease must have a common, microbial cause.
Pasteur wasn’t the only one to arrive at this hypothesis.
The Frenchman’s first notable breakthrough, in the revolutionary year of 1848, was the discovery that two molecules can be made up of the same ingredients but structurally be mirror images of one another. One of the by-products of wine fermentation, tartaric acid, which is composed of four carbon atoms, six hydrogen, and six oxygen, is a “right-handed” molecule: Polarized light passing through it is rotated in a right-handed direction. Racemic acid, on the other hand, has the same formula—C4H6O6—but rotates light in neither direction: It is, in formal terms, an equal mixture of dextrorotatory and levorotatory molecules, whose opposite rotations cancel each other out. This discovery was important enough on its own terms, as any high school chemistry student who has struggled with the concept of stereochemistry can testify. Rotating the lens of history on Pasteur reveals that his only serious rival for the title of “Father of the Germ Theory of Disease” (also “Father of Microbiology”) was Robert Koch: his chiral double.
Koch was born in 1843 in the town of Clausthal in the Kingdom of Hanover, one of the principalities that preceded the creation of the modern German state. Like Pasteur, he was a beneficiary of an entire nation’s newfound enthusiasm for technical education, even more profound in the German-speaking world than in France. This was especially true in medical education; every major hospital in the patchwork of German-speaking states had been aligned with a university since the late eighteenth century, when Joseph Andreas von Stifft opened the Vienna Allgemeines Krankenhaus as part of the University of Vienna. In 1844, a year after the birth of Koch, Carl von Rokitansky succeeded Stifft, and the “Vienna School of Medicine” linked examination of living patients with the results of autopsies performed on the same patients. By separating clinical medicine from pathology—before Rokitansky the same doctors had been responsible for both—and documenting more than sixty thousand autopsies, they built a huge diagnostic database that could be validated by postmortem studies.
Credit: National Institutes of Health/National Library of Medicine
Robert Koch, 1843–1910
As a result of the German-speaking world’s embrace of scientific education, especially in medicine, Koch attended at least as rigorous a secondary school and university as Pasteur, and there he acquired a medical degree and, like Pasteur, a mentor who planted the seed for his future researches. For Koch, that mentor was Jacob Henle, professor of anatomy at the University of Göttingen, who had been an advocate for the idea of infection by living organisms since the 1840s.
Both Pasteur and Koch were gifted experimentalists, happiest working in laboratories.
Pasteur came to the experiments that would revolutionize medicine by a relatively roundabout route: first studying the basic chemistry of organic molecules, then the phenomenon of industrial fermentation. Koch did so more directly. As medical officer for the Prussian town of Wöllstein, he began studying a disease then decimating herds of farm animals in his rural district.
Anthrax was and remains a deadly disease, both to all sorts of herbivorous animals that contract it while grazing and (rarely) to the carnivores that get it secondhand from their prey. Every year of the nineteenth century, it killed hundreds of thousands of European cows, goats, and sheep. It was also a feared killer of the humans who acquired it indirectly from infected animals they handled as herders or ranchers; a particularly deadly form is known as “wool-sorter’s disease,” for obvious reasons. For both animal and human victims, anthrax kills in a gruesome fashion: Lethal toxins* cause severe breathing problems, the painful tissue swelling known as edema, and eventually death. Koch was determined to learn the disease’s secrets. What caused it? How was it transmitted from sick to healthy organisms? Most important: Could it be prevented or cured?
Even by the standards of the day, Koch’s lab equipment was notably primitive; he inoculated twenty generations of lab mice with fluids taken from the spleens of dead cows and sheep, using slivers of wood. It was an object lesson in the importance of rigorous technique rather than sophisticated tools. Koch’s wood slivers established that the blood of infected animals remained contagious even after the host’s death. His experiments also showed him that it wasn’t the blood itself, but something within the blood, that carried the disease. In order to find it, he needed a pure sample of the contagious element. Once again working with homemade equipment, he isolated and purified the element and caused it to multiply in a distinctive growth medium: the watery substance taken from inside the eye of an uninfected ox, in which he cultivated pure cultures. When he injected the cultured fluid into healthy animals, they contracted anthrax.
He had his pathogen. The country doctor had forged, for the first time, a link between a distinct microorganism and a single disease. And he had something more: an explanation of how grazing animals contracted a disease whose causative organism he could grow only under very specific conditions. When unable to grow, the anthrax bacterium produces spores that allow it to survive in the absence of food, a host, or even oxygen (in, for example, the well-ploughed soil of Wöllstein). When conditions improve—after the spores enter either the digestive or respiratory system of some unlucky bovine—they germinate and start multiplying again. Soon enough, the toxins that cause the disease reach deadly levels.
This was big enough news that it attracted the attention of Ferdinand Julius Cohn, a professor of botany at the University of Breslau, and, perhaps most notably, the scientist who had, in 1872, named an entirely new class of living thing: bacteria (from baktron, the Greek word for “staff”). By then, the complicity of the tiny organisms that Leeuwenhoek had called animalcules in both fermentation and disease was starting to become doctrine. Awareness of the existence of a connection between disease and bacteria didn’t, however, reveal much about the microorganisms’ pervasiveness, the mechanisms by which they killed, or even how long they had existed.
One reason for the nineteenth century’s ignorance about the age of bacteria was a comparable deficiency in knowledge about the age of the earth itself. Until the beginning of the twentieth century, the oldest estimates for the planet’s origins were those of William Thomson, Lord Kelvin, who had used the equations of thermodynamics to calculate that the earth was approximately twenty million years old. This was a massive problem for advocates of Darwinian evolution, including Darwin himself, who was “greatly troubled” by it, since a mere twenty million years were insufficient for anything like the known diversity of life on earth.
Current estimates of the age of the planet, roughly 4.6 billion years, solve that problem. For most of that unimaginably long time, bacteria were the dominant form of life on earth. By some estimates, they still are. Recognizable bacterial life—single-celled, with a full suite of metabolic tools, but without a nucleus—first appeared about 3 billion years ago and was the only form of life on earth until about 570 million years ago. They remain by far the most numerous. A single gram of topsoil can contain more than forty million bacteria; an ounce of seawater, thirty million. Overall, the mass of the earth’s 5 × 10³⁰ bacteria may well exceed that of all the plants and animals combined.
Until the middle of the twentieth century, bacteria were a puzzle for taxonomists, who had spent centuries assuming that all living things were either plants or animals. It wasn’t until the 1930s that a French marine biologist named Edouard Chatton came up with a different, and more accurate, bifurcation of the living world, dividing it between organisms that possess, and those that lack, a cellular nucleus, the “kernel of life.” The Greek word for “kernel,” karyon, gave Chatton his naming convention: Bacteria are prokaryotes (in French, procariotique); virtually everything else, from Pasteur’s yeasts to a blue whale, is a eukaryote.* Bacteria live everywhere from Arctic glaciers to superheated vents in the ocean floor to mammalian digestive systems. They have been around so long—the number of generations that separate the first bacteria from the ones that probably gave George Washington his sore throat is about 3 × 10¹¹, roughly six orders of magnitude greater than the number of generations separating the general from the first yeasts—that they have become past masters of evolutionary innovation. Bacteria, which reproduce as frequently as three times an hour, can mutate into entirely new versions of themselves in what is, to every other organism on the planet, a figurative blink of an eye. And, perhaps most relevant for a history of disease, they can feed themselves on everything from sunlight, to chemicals so toxic that they are used to clean the undersides of ships, to, well, us. In the words of one twentieth-century biologist, “It is not surprising that microbes now find us so attractive. Because the carbon-hydrogen compounds of all organisms are already in an ordered state, the human body is a desirable food source for these tiny life forms.”
Though its age and extent were unknown to Cohn, he did know that the microorganism that Koch had found was part of this bacterial universe. He published Koch’s work in his journal, Beiträge zur Biologie der Pflanzen—in English, Contributions to the Biology of Plants—in 1876. The discovery immediately turned Koch into one of Europe’s best-known life scientists. Which brought him to the attention of an even more famous one: Louis Pasteur.
In 1877, Pasteur took it upon himself to resolve what remained of the debate about the causes of anthrax. The bacteria isolated by Koch were still thought to be, in the words of at least one biologist, “neither the cause nor necessary effect of splenic fever [i.e., anthrax],” since exposure to oxygen destroyed them, yet material containing the dead organisms still caused anthrax. Pasteur wasn’t convinced. He repeated the same process used by Koch: successive dilution—essentially taking a few drops from a flask in which he grew anthrax, diluting them in a new flask, over and over, until there was no doubt that every other potentially contagious element had disappeared—then injecting the pure culture into host animals, who reliably contracted the disease. The endospores discovered by Koch were the reason that seemingly dead bacteria remained carriers of disease: Anthrax cells weren’t killed by oxygen, but simply became dormant inside the walls of a spore. The groundbreaking understanding of anthrax—a combination of Koch’s spores and Pasteur’s dilutions—was the first achievement to link the German physician with the French chemist. It would not be the last.
The next step for Pasteur was the transformation of his experimental results into a practical therapy. Nearly a hundred years before, the English physician Edward Jenner had demonstrated that exposing healthy subjects to fluid taken from (relatively benign) cowpox lesions conferred immunity to smallpox. Pasteur himself had discovered that injecting hens with cultures containing chicken cholera that had lost virulence offered the same protection against that disease. Why not anthrax? The key, as always in vaccination, was to find a version of the disease-causing agent that was weakened enough that exposure to it was unlikely to cause the disease itself. In 1881, Pasteur and several of his colleagues, including Charles Chamberland and Emile Roux, used a variety of methods, such as exposure to acids or to different levels of heat, to reduce the disease-causing powers of the bacterium, though not without complications. Pasteur’s absolute belief in his irreplaceable gifts had by then taken the form of insufferable arrogance; in 1878, he had attacked the brilliant physiologist Claude Bernard, who had questioned the argument that fermentation required living organisms, as a near-blind, publicity-seeking fraud.* And so, naturally, when the veterinarian Jean-Joseph Henri Toussaint successfully attenuated the anthrax bacterium first, Pasteur was furious. So much so that when his team discovered a procedure that weakened the bacteria sufficiently to make a practical vaccine—treatment with potassium dichromate, an oxidant—he claimed to have done so by using oxygen alone in order to avoid sharing credit.
The vaccine worked, both as a disease preventative and as a public relations coup. On May 5, 1881, at the French village of Pouilly-le-Fort, fifty sheep and ten cows were divided into a control group and an experimental one, with the latter receiving the vaccination, and all subsequently exposed to anthrax bacilli. A month later, all the animals in the control group had died; none of the vaccinated ones had even contracted the disease. Anthrax had been defeated by Louis Pasteur. Already France’s favorite scientific hero, he was now mentioned in the same breath as Lavoisier and Blaise Pascal.
Germany reacted less enthusiastically. After attending Pasteur’s 1882 presentation to the Fourth International Congress for Hygiene, Robert Koch wrote a ten-thousand-word screed, of which the following offers a taste:
Pasteur began with impure material, and it is questionable whether inoculations with such material could cause the disease in question. But Pasteur made the results of his experiment even more dubious by inoculating, instead of an animal known to be susceptible to the disease, the first species that came along—the rabbit. . . . Pasteur follows the tactic of communicating only favorable aspects of his experiments, and of ignoring even decisive unfavorable results. Such behavior may be appropriate for commercial advertising, but in science it must be totally rejected. At the beginning of his Geneva lecture, Pasteur placed the words “Nous avons tous une passion supérieure, la passion de vérité.” [“We all have a higher passion, the passion for truth.”] Pasteur’s tactics cannot be reconciled with these words. His behavior is simply inexplicable. . . .
Koch’s report was nothing less than the declaration of a war that would last until Pasteur’s death in 1895 (some would say, ending not even then). By 1880, Koch had moved to a much-improved lab at the Imperial Health Bureau in Berlin, from which discoveries continued to appear on what seemed to be a monthly basis. Having learned, by hard trial and error, that growing bacteria in nutrient-rich liquids like beef broth was a losing game—colonies of different sorts mixed together far too easily—he discovered that he could grow pure strains on potato slices, and later on what is still the standard growth medium for bacterial cultures, the seaweed-based jelly known as agar. His assistant, Julius Richard Petri, designed and built the eponymous dishes on which agar compounds would host microbial colonies. In 1882, Koch discovered the bacterium that caused tuberculosis, an achievement that his colleague Friedrich Löffler called a “world-shaking event” that transformed Koch “overnight into the most successful researcher of all times.”* In 1884 he identified the bacterium responsible for cholera.
Nor did he limit himself to experimental work in his Berlin lab; Koch formulated criteria for managing cholera epidemics, and created, with Löffler, what became known as the “four postulates” of pathology, a diagnostic tool that would link a single pathogen to a single disease. The postulates themselves were plausible and useful. The first holds that the pathogen must be found in all organisms suffering from the disease, but not in healthy ones. The second, that the microorganism must be isolated from a diseased organism and grown in pure culture on a medium like agar. The third, that the cultured microbe must cause the disease when introduced into a healthy host. And the fourth, that the microbe reisolated from that newly diseased host must be identical to the original one.*
It seemed at the time to scientists throughout Europe that it was Pasteur’s unwillingness to use the postulates as a diagnostic tool that caused so much of Koch’s hostility. No doubt it was a contributing factor. Another was more basic: Koch was German, Pasteur French. And neither had forgotten the year 1870.
In July 1870, the French parlement, in support of Pasteur’s patron, the Emperor Napoleon III, voted to declare war on Prussia. They had been maneuvered into doing so by Prussia’s prime minister, Otto von Bismarck, whose North German Confederation swiftly and decisively destroyed the French armies in the east, captured the emperor, besieged Paris, accepted France’s surrender, and proclaimed a new German Empire under the Prussian king, all in less than a year. This upset the balance of power that had obtained in Europe after the defeat of Bonaparte, but not as much as it upset Louis Pasteur, whose reaction was the opposite of moderate: “Every one of my works to my dying day will bear the epigraph: Hatred to Prussia. Vengeance. Vengeance.” That Koch was German—Hanoverian, not Prussian, not that it mattered—didn’t escape Pasteur’s notice. Nor did the fact that Koch had served as a military surgeon during the war.
So it went, then, for the remainder of the two men’s lives: two brilliant experimentalists revolutionizing the practice of biology and medicine, each with a résumé listing a dozen achievements, any one of which would have bought them scientific immortality, each honored in every way that their nations could honor them. And, as if to underline the mirror-image metaphor, each was retrospectively dogged by accusations of what might kindly be called embellishment.
For Robert Koch, the moment of overreach wouldn’t come until 1890, when, working at Berlin’s Imperial Health Bureau, he announced the discovery of a new therapeutic technique for tuberculosis, an extract of the tubercle bacillus that he named “tuberculin.” By then, Koch’s reputation was so great that his word alone was grounds for widespread acceptance; a cure for one of the most dangerous diseases known was at hand, and it was used as a treatment over the course of the next eleven years. That it took so long before tuberculin’s therapeutic uselessness was discovered is partly because the subjects on whom it was used were already so sick that frequent failure was expected, even forgiven. Koch himself was not so easily forgiven. He had kept his formulation a secret, which was damning enough, but his reason was worse: He had made no secret of his intention of profiting by it, and was therefore unwilling to share potentially valuable trade intelligence with other scientists. Moreover, when he was finally forced to disclose its composition, by reports that the substance was actually harmful (as it turned out, the bacteria, even in glycerin, produced the equivalent of an allergic reaction),* it became clear that Koch had only the sketchiest idea of its ingredients, nor could he provide the guinea pigs that he had supposedly cured with it. It took until the end of Koch’s life for his reputation to recover from the scandal, a testimony both to the limitations of the vaccine approach itself—as we shall see, attacking and defeating infectious disease is a very different process than defending against it—and to the pressures of scientific discovery.
Pasteur’s reputation managed to remain largely unsullied by any similar scandal, at least during his lifetime. It was only a century later that one of his most widely publicized achievements, his rabies vaccine of 1885, was revealed, in a biography entitled The Private Science of Louis Pasteur, to be considerably less significant than it had appeared.
By 1885, the search for a bacterium that caused rabies had failed—inevitably, since the disease is caused by a virus: a free-floating bit of genetic material wrapped in a protein coat, far smaller than any bacterium, able to reproduce only inside the cells of a living host, and not even identified as a distinct class of pathogen until 1892. However, Pasteur’s thinking went, since rabies had a very slow incubation period—anywhere from a month to a year—perhaps a vaccine could actually “cure” the disease, by immunizing the victim against it after infection, but before symptoms appeared. That is, a vaccination given after exposure to the bite of a rabid animal could serve to inoculate the victim against the disease, which was inevitably fatal once symptoms appeared.
So, when nine-year-old Joseph Meister was bitten by a rabid dog in July 1885, and survived after Pasteur inoculated him with a weakened rabies virus that had been cultivated in live rabbits, the public acclaim was enormous. The French public practically deified Pasteur, and raised more than 2.5 million francs—at least 12 million dollars today—that enabled the Institut Pasteur to open its doors three years later.
Meister made Pasteur a national hero, though at least partly through a misunderstanding of just what “curing” a disease means. A bite from a rabid dog will cause rabies in a human only about one time in seven (though, in 1885, that one would invariably die). Had Pasteur not inoculated him, Meister would have had a good chance of surviving anyway. Another, rather larger problem with the story of Pasteur’s heroic achievement is that Meister wasn’t the first victim to receive Pasteur’s vaccine; two weeks before, in June 1885, a girl named Julie-Antoinette Poughon had been given the vaccine but died shortly thereafter. Nor had Pasteur tested the method on dogs prior to giving it to Meister, though he claimed to have done so. Pasteur, perhaps understandably, neglected to mention either fact to journalists or other scientists.*
For Koch and Pasteur, however, the real achievements remain so outsized that embarrassments that would destroy the reputation of garden-variety scientists seem barely to rise to the level of peccadillo. Pasteur, especially, was a hero from the time of his original fermentation discoveries, and not just in France. In 1867, the Englishman Joseph Lister was so taken with Pasteur’s researches that he wrote:
Turning now to the question how the atmosphere produces decomposition of organic substances, we find that a flood of light has been thrown upon this most important subject by the philosophic researches of M. Pasteur, who has demonstrated by thoroughly convincing evidence that it is not to its oxygen or to any of its gaseous constituents that the air owes this property, but to the minute particles suspended in it, which are the germs of various low forms of life, long since revealed by the microscope, and regarded as merely accidental concomitants to putrescence, but now shown by Pasteur to be its essential cause, resolving the complex organic compounds into substances of simpler chemical constitution, just as the yeast plant converts sugar into alcohol and carbonic acid.
When he wrote this, Lister was a working physician and the Regius Professor of Surgery at the University of Glasgow. He had been born in Essex, forty years before, to a prosperous and accomplished Quaker family. His father, Joseph Jackson Lister, was a respected physicist and pioneer of microscopy; a classic scientific amateur, he was elected a Fellow of the Royal Society, the world’s oldest and most respected scientific organization, for inventing the achromatic microscope.
In 1847, Joseph graduated from University College London—even in the middle of the nineteenth century, both Oxford and Cambridge were still barred to Quakers—and entered the Royal College of Surgeons. In 1853, he became a house surgeon at University College Hospital, and three years later was appointed surgeon to the Edinburgh Royal Infirmary.
In 1859, the newly married Lister moved to the University of Glasgow, and his story really began.
The Glasgow Royal Infirmary had been built in the hope that it would prevent “hospitalism” (the name coined by the Edinburgh obstetrician James Young Simpson in 1869 for the phenomenon known today as “surgical sepsis” or “postoperative sepsis”). In this, it was a notable failure. In Lister’s own records of amputations performed at the Glasgow Royal Infirmary, “hospitalism” killed between 45 and 50 percent of his patients. Lister wrote, “Applying [Pasteur’s] principles to the treatment of compound fracture, bearing in mind that it is from the vitality of the atmospheric particles that all mischief arises, it appears that all that is requisite is to dress the wound with some material capable of killing those septic germs, provided that any substance can be found reliable for this purpose, yet not too potent as a caustic.”
Credit: National Institutes of Health/National Library of Medicine
Joseph Lister, 1827–1912
Lister’s earlier work—on the coagulation of the blood, and especially the different microscopically visible stages of inflammation in the sick—convinced him that Pasteur had it right. The “pollen” idea (the notion that germs drifted through the air like so much pollen), however, also convinced him that the microorganisms traveled exclusively through the air. This was wrong, but usefully so, since it argued for erecting the most impassable barrier possible between the “infected” air and the patient.
Not a physical barrier, a chemical one. In 1834, the German chemist Friedlieb Runge had discovered that what he called Karbolsäure could be distilled from the tarry substance left behind when wood or coal are burned in furnaces or chimneys: creosote, the stuff that gives smoked meat its flavor. Sometime in the 1860s, Lister read an article about how a German town used creosote to eliminate the smell of sewage. Since he knew, thanks to Pasteur, that the smell of sewage was caused by the same chemical process that caused wounds to mortify, he reasoned that a compound that prevented one might, mutatis mutandis, halt the other. In the spring of 1865, he started testing other coal tar extracts on patients, and, on August 12, he hit the jackpot: The substance known as phenol, or carbolic acid, stopped infections cold.* Two years later, he published his results: Surgical mortality at the Glasgow Infirmary had fallen from 45 percent to 15 percent. “As there appears to be no doubt regarding the cause of this change, the importance of the fact can hardly be exaggerated.”*
It took years before Lister was able to persuade the medical establishment of the importance of what has come to be known as antisepsis, helped along more by practical and highly publicized results—in 1871, he safely drained an abscess under the arm of Queen Victoria—than by experimental validation. Dependence on antisepsis, however, had its own risks. Patients were frequently required to inhale the fumes of burning creosote, which was dangerous enough. Even worse, some were given injections of carbolic acid, which kills not just dangerous pathogens but, often enough, the patients themselves. As the German physiologist (and winner of the very first Nobel Prize in Medicine) Emil Behring pointed out in the 1880s, “It can be regarded almost as a law that the tissue cells of man and animal are many times more susceptible to the poisonous effects of disinfectants than any bacteria known at present. Before the antiseptic has a chance either to kill or inhibit the growth of the bacteria in the blood or in the organs of the body, the infected animal itself will be killed.”
Lister’s reputation, and the importance of both antiseptic and aseptic surgical practice—not merely disinfecting wounds, which Lister pioneered, but maintaining a fully sanitary environment around patients, a technique he adopted far later—continued to grow over the rest of his life. He would become one of the heroes of nineteenth-century Britain, president of the Royal Society, founder of the British Institute of Preventive Medicine (renamed the Lister Institute of Preventive Medicine in 1903), and would be made Baron Lister of Lyme Regis. In 1899, the Chinese minister to the Court of St. James’s, commanded by the emperor to produce biographies of the hundred greatest men in the world, announced that the three Englishmen to make the cut were William Shakespeare, William Harvey, and Lister himself. In retrospect, this seems modest enough. The germ theory of disease that had been developed and tested by Pasteur, Koch, and Lister produced an astonishing number of discoveries about the causes of disease; not merely anthrax, tuberculosis, and cholera—respectively the bacteria known as Bacillus anthracis, Mycobacterium tuberculosis, and Vibrio cholerae—but gonorrhea (Neisseria gonorrhoeae, discovered 1879), diphtheria (Corynebacterium diphtheriae, discovered 1883), bacterial pneumonia (Streptococcus pneumoniae, discovered 1886), gas gangrene (Clostridium perfringens, discovered 1892), bubonic plague (Yersinia pestis, discovered 1894), dysentery (Shigella dysenteriae, discovered 1898), syphilis (Treponema pallidum, discovered 1905), and whooping cough (Bordetella pertussis, discovered 1906). Moreover, the discovery of the nature of these infectious agents led directly to a powerful suite of defensive weapons: not merely antisepsis and vaccination, but, even more usefully, improved sanitation and hygiene. Since, by their very nature, preventive measures succeed when disease doesn’t even appear, it is impossible to know with certainty how many lives were saved by these expedients, but they are the most important reason that European life expectancy at birth increased from less than forty years in 1850 to more than fifty by 1900.
Nonetheless, as valuable as these practices were in defending human life from pathogens, millions continued to fall ill from infectious disease every day. And when they did, medicine could do virtually nothing about it. Perversely, the greatest triumph in medical history—the germ theory of disease—destroyed the ideal of heroic medicine, replacing it with a kind of therapeutic fatalism.* As physicians were taught the bacterial causes of diseases, they also learned that there was little or nothing to be done once a patient acquired one.
In one of Aesop’s best-known fables, a group of frogs living in a pond prayed to the gods to send them a king; an amused Zeus dropped a log in the pond, and announced that this was, henceforth, the frogs’ king. The frogs, disappointed with their new king’s inactivity, prayed again for a king . . . this time one that would do something, upon which Zeus sent them a stork, who promptly ate the frogs. The Aesopian moral—always choose King Log over King Stork—is one that the Western world’s physicians took to heart, and from the 1860s to at least the 1920s, humility reigned. Only a few drugs had any utility at all (mostly for relieving pain), which made for skepticism about virtually all treatment. On May 30, 1860, Dr. Oliver Wendell Holmes, Sr., famously announced in an address before the Massachusetts Medical Society:
Throw out opium, which the Creator himself seems to prescribe, for we often see the scarlet poppy growing in the cornfields, as if it were foreseen that wherever there is hunger to be fed there must also be a pain to be soothed; throw out a few specifics which our art did not discover, and it is hardly needed to apply; throw out wine, which is a food, and the vapors which produce the miracle of anaesthesia, and I firmly believe that if the whole materia medica [medical drugs], as now used, could be sunk to the bottom of the sea, it would be all the better for mankind,—and all the worse for the fishes.
Holmes overstates, but not by much. The achievements of the nineteenth century in revolutionizing medicine are nothing to sneeze at, including the recognition that sneezing itself was a powerful source of dozens of infectious diseases. The great biologists of the era established a robust theory about disease, along with powerful tools for defending against it, and left behind a model for research, experimentation, and validation.
It takes nothing away from the extraordinary discoveries of Pasteur, Koch, Lister, and others to wonder whether their most enduring contribution to the revolution in medicine wasn’t informational but institutional: the modern biological research laboratory. The Institut Pasteur was founded in 1888; the Lister Institute of Preventive Medicine was established in 1891, the same year that the Robert Koch Institute was founded, originally as the Royal Prussian Institute for Infectious Diseases. In 1890, the Royal Colleges of Surgeons and Physicians opened their first joint research laboratory in London. These establishments weren’t only fertile training grounds for the next generations of researchers, or structures in which the best biologists and physiologists in the world could cooperate—and, truth be told, compete—one with the other. They were also magnets for the resources that research demanded: the philanthropy of wealthy families and subsidies from national governments. As the nineteenth century turned into the twentieth, the life sciences had not yet become the enormously expensive proposition they would be decades hence. Nonetheless, even frugal research still cost money, and institutional laboratories were, for a time, the most productive place to spend it.
But for the next chapter in the story that leads from George Washington’s sickbed to the maternity wing of New Haven Hospital in 1942, there was an even more important development: the marriage between industrial chemistry and medicine.