
CHAPTER THIRTEEN

Epigenetics

Facts are facts—however, truths change as new facts appear.

—Robert Leaky

Washington University Speech in 1967

We talk about DNA as if it’s a template (or a blueprint), like a mould for a car part in a factory. In the factory, molten metal or plastic gets poured into the mould thousands of times and, unless something goes wrong in the process, out pop thousands of identical car parts.

But DNA isn’t really like that. It’s more like a script. Think of Romeo and Juliet, for example. In 1936 George Cukor directed Leslie Howard and Norma Shearer in a film version. Sixty years later Baz Luhrmann directed Leonardo DiCaprio and Claire Danes in another movie version of this play. Both productions used Shakespeare’s script, yet the two movies are entirely different. Identical starting points, different outcomes.

That’s what happens when cells read the genetic code that’s in DNA (that is nutritionally deficient). The same script can result in different productions. The implications of this for human health are very wide-ranging. As we will see from the case studies, it’s really important to remember that nothing happened to the DNA blueprint of the people in these case studies. Their DNA didn’t change (or mutate), and yet their life histories altered irrevocably in response to their environment (i.e., diet and nutritional deficiencies).

—Nessa Carey

The Epigenetics Revolution (2012)

Just as a pianist interprets the notes of a musical score, controlling the volume and tempo, epigenetics affects the interpretation of DNA genetic sequences in cells. Epigenetics usually refers to the study of ‘heritable traits’ (caused by nutritional deficiencies) that do not involve changes to the underlying DNA sequence of the cells.

—Clifford A. Pickover

The Medical Book (2012)

Geologists believe that some “two hundred and fifty million years ago” the world’s land mass was a single “continent” known as Pangaea and that unknown forces divided Pangaea into two land masses: Eurasia/Africa and the Americas. While there was some overlap, the two separate hemispheres (eastern and western) developed vastly different populations of animal and plant species.

Food sources—quantity and nutritional quality—were and continue to be the limiting factors in human health, longevity, and population growth and density. Historically, when one excludes cataclysmic ecological disasters and microbial plagues and epidemics, simple starvation inflicted the most damage to human populations.

In 1968 Wallach, while at the Center for the Biology of Natural Systems, designed a simple experiment to demonstrate how “genetic potential” can be affected by the environment and nutrition. This, in essence, is the field of epigenetics. Wallach randomly selected 100 ducklings with the same mother and father (one keeps taking eggs from the nest and refrigerating them until 100 have been collected, then incubates all 100 at the same time). The ducklings were divided into four groups of twenty-five, and each group was fed a different diet:

1. Lettuce only

2. Hydroponically grown barley grass only

3. Purina duck grower pellets only

4. Purina duck grower pellets and barley grass

After one month, the results were dramatic. Groups 1 and 2 showed identical growth and development rates (almost zero); group 3 ducks were three times larger in height and weight than groups 1 and 2; and group 4 ducks were twice as large as group 3 and six times larger than groups 1 and 2.

Only groups 3 and 4 came close to fulfilling their genetic potential for growth and development at one month of age; only groups 3 and 4 were fed supplemented Purina duck grower pellets that contained all of the known nutrients required by ducklings.

A human parallel to the duck experiment is the Japanese immigrants, who originally came to the United States as small, wiry people about four feet eleven inches tall and weighing 100 pounds soaking wet. Their genetic potential for growth and development was never achieved by eating the low-calorie, low-nutrient Japanese rice, vegetable, and fish diet of their native Japan. The second-generation Japanese, conceived and born in the United States, were a different story. In the next generation, a number-one son was six feet four inches tall, weighed 240 pounds, and played tight end for the USC football team. Their genetic background was the same; however, their potential for height and physical development was more completely fulfilled by having access to unlimited calories, meat, protein, milk, eggs, vegetables, and vitamin and mineral supplements.

After 1492 and Columbus’s transoceanic visit to Hispaniola, which according to historian Alfred W. Crosby “reknit the seams of Pangaea,” literally thousands of insect, animal, and plant species were transported east and west across the world.

Crosby dubbed these mass ecological migrations the “Columbian Exchange.” Upon Columbus’s second voyage west to Hispaniola, his fleet of seventeen ships and a combined crew of fifteen hundred men brought with them stowaways of many species. According to Charles Mann, author of 1493:

They were accompanied by a menagerie of insects, plants, mammals and micro-organisms. Beginning with La Isabela, European expeditions brought cattle, sheep, and horses along with crops like sugar cane (originally from New Guinea), wheat (from the Middle East), bananas (from Africa), and coffee (also from Africa). Equally important, creatures that the colonists knew nothing about hitchhiked along for the ride. Earthworms, mosquitoes, and cockroaches; honey bees, dandelions, and African grasses; rats of every description—all of them poured from the hulls of Colon’s vessels and those that followed, rushing like eager tourists into lands they had never seen before.

John Rolfe (often confused with John Smith) was the man who married Pocahontas, the “Indian princess.” He was also the man behind the survival and success of Jamestown, Virginia, and the man who systematically brought earthworms to the Americas as hitchhikers in the root balls of tobacco and in the English soil used as ballast on the ships’ return voyages.

Charles Darwin was apparently the first to recognize the great event that had taken place when earthworms came to the Americas. Darwin wrote a three-hundred-page book on the ecological and agricultural value of the earthworm. Darwin wrote, “It may be doubted whether there are many other animals which have played so important a part in the history of the world, as have these lowly organized creatures.”

The Columbian Exchange produced such dynamic ecological effects that many biologists believe that Columbus’s voyages denoted the beginning of a new biological era: the “Homogenocene,” an era in which places that had been isolated and ecologically unique have become the same.

One group of Ecuadorean people can be traced back to the late 15th century, when Europeans fled from the Iberian Peninsula to the New World. These people, the Sephardic Jews, were desperate to leave Spain and Portugal because of the horrors of the Inquisition. They fled to North Africa, the Middle East, southern Europe, and the New World; however, the Inquisition followed them. It was in their interest to stay away from the larger cities such as Lima and Quito where the Catholic Church had its strongest influence. They settled in small villages and towns, where even into the 1980s there were few roads, no phones, and no electricity, and where wood was the common fuel.

When Columbus first arrived in Hispaniola, the local Indian inhabitants (the Taino) were estimated to number from 60,000 to more than eight million. By 1514 their numbers had been reduced to 26,000, and by 1548 there were fewer than 500 alive. European microbial plagues and epidemics had wiped out seventy-five percent of the entire native population of the western hemisphere. This is an example of Darwin’s survival of the fittest!

Survivors starved and entire cultures disappeared or fell into ruin—a repeat of what had happened to Europe during the Dark Ages. Only small enclaves of peoples flourished, and their survival was the result of food—lots of high quality food. In the Americas the main indigenous food source was the potato: the tuber that had originated in the Andes was now contributing to what would be called the Agro-Industrial Complex worldwide. The potato became the fifth most important food crop worldwide, only behind sugar cane, wheat, corn, and rice.

William H. McNeill, a highly respected historian, makes the point that it is universally believed by scholars that “the introduction of the potato to Europe was a key moment in history.” The widespread use of the potato as a calorie source in northern Europe is credited with ending centuries of famine among the peasants. Additionally, McNeill also stated that the potato (S. tuberosum) led to empire: “Potatoes, by feeding rapidly growing populations, permitted a handful of European nations to take advantage of American silver—the potato fueled the rise of the West.”

The American and European adoption of the potato set the stage for modern agriculture and the Agro-Industrial Complex. This food empire is supported by three legs: (1) improved crops including the potato, (2) high intensity fertilizer, and (3) pesticides. All three came into play with the Columbian Exchange and the potato.

The potato came from the Andes, and the fertilizer to grow it came from the Chincha Islands, a group of three very small islands made mostly of granite, positioned thirteen miles off the Peruvian shore, about five hundred miles south of Lima. The Chincha Islands receive less than an inch of rainfall per year and are therefore the most productive of the 147 Peruvian “guano islands” in terms of accumulated deposits of bird guano, the excrement valued and sold as fertilizer.

The Chincha Islands’ only other claim to fame is that they are the home of three species of large sea birds: the Peruvian booby, the Peruvian cormorant, and the Peruvian pelican. The birds are drawn to the islands by the strong, cold coastal currents. Phytoplankton bloom because of the heavy nutrient levels in the coastal water; zooplankton consume the phytoplankton and are then eaten by the small anchoveta fish.

Anchoveta travel in large schools that are magnets to larger predatory fish, which are in turn hunted down, scooped up, and eaten by the boobies, cormorants, and pelicans. These large predatory birds have lived and reared their young on the Chincha Islands for thousands of years.

Avian urine is excreted as a paste, with a consistency not unlike toothpaste, and as a result guano can build up rapidly in an arid environment. Over the centuries the birds have covered the islands with layer after layer of foul-smelling guano that adds up to one hundred and fifty feet of thick, dried urine!

According to The Biochemistry of Vertebrate Excretion, a classic reference book by G. Evelyn Hutchinson, an adult cormorant’s annual production of the paste-like urine is approximately thirty-five pounds; thus the combined annual urine production of the cormorant colony would be on the order of thousands of tons per year!
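
A rough back-of-the-envelope check shows how Hutchinson’s thirty-five-pound-per-bird figure scales up to colony totals. The sketch below is illustrative only: the colony sizes are hypothetical round numbers chosen for the example, not figures given in the text.

```python
# Rough guano arithmetic based on Hutchinson's per-bird estimate quoted above.
# The colony sizes below are hypothetical assumptions for illustration only;
# the text does not state how many cormorants nest on the Chincha Islands.

POUNDS_PER_BIRD_PER_YEAR = 35      # Hutchinson's figure for one adult cormorant
POUNDS_PER_SHORT_TON = 2000

def annual_guano_tons(colony_size: int) -> float:
    """Return the colony's estimated annual guano output in short tons."""
    return colony_size * POUNDS_PER_BIRD_PER_YEAR / POUNDS_PER_SHORT_TON

if __name__ == "__main__":
    for birds in (100_000, 500_000, 1_000_000):   # assumed colony sizes
        print(f"{birds:>9,} birds -> {annual_guano_tons(birds):>8,.0f} tons/year")
```

Under these assumptions, a colony of a few hundred thousand birds is already enough to reach the “thousands of tons per year” scale described above.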

Guano is an ideal fertilizer. A good fertilizer does several things: it provides organic material which feeds soil organisms, it provides nitrogen that drives plant growth, and it provides minerals necessary for plant metabolism and nutrition for the consumer of the harvest.

Plants specifically need nitrogen and magnesium to produce chlorophyll, the green pigment in plant leaves that converts sunlight and CO2 into energy (photosynthesis), and to make the carbon chains and amino acids necessary for the production of DNA and protein.

The first European to grasp the potential value of guano for the Agro-Industrial Complex was Friedrich Wilhelm Alexander von Humboldt, a German pioneer in botany, geography, astronomy, geology, and anthropology—a true polymath.

Included in the thousands of samples Humboldt took back to Germany in 1804 were several small bags of Peruvian guano, which he shared with two French chemists. Their analysis showed that the guano’s nitrogen levels ranged from eleven to seventeen percent, meaning it could effectively be used as a fertilizer.

During that period in history, the most commonly applied fertilizer included bone meal and wood ashes that contained plant minerals. Wood ashes were taken from the wood stoves and applied to gardens and small fields. Shipping wood ash to Europe from the eastern American shore became an enormous business. Entire forests had been burned in Europe for fuel and now entire forests along the eastern seaboard of North America were being burned commercially to supply mineral fertilizer to Europe. Wood ash was such an important export product that efficient methods of burning wood and collecting the resultant by-product of plant minerals (contained in wood ash) became U.S. Patent Number 1.

Bone meal was manufactured by pulverizing or grinding slaughterhouse bones. Driven by concern over the looming probability of soil depletion that could be caused by intensive potato-growing operations, bone suppliers brought in bones from the battlefields of Waterloo and Austerlitz. “It is now ascertained beyond a doubt, by actual experiment upon an extensive scale, that a dead soldier is a most valuable article of commerce,” remarked the London Observer in 1822. The newspaper further noted that there was reason to believe that grave robbers were not limiting their sources of human bones for fertilizer to battlefields: “For aught known to the contrary, the good farmers of Yorkshire are, in a great measure, indebted to the bones of their children for their daily bread.”

The story of epigenetics is the story of nutrition and nutritional deficiency at the enzyme, chromosomal, and gene level, and how they affect the duplication and transmission of DNA.

The ancient Egyptians cured night blindness in 1,000 BC by applying extracts of beef liver juice to the eyes and faces of the afflicted individual. We know today that night blindness is caused by a deficiency of vitamin A and that vitamin A is primarily stored in the liver of all species.

Pellagra is considered a “new disease” in the civilized world, as it was unknown in Europe until Christopher Columbus delivered the first corn seeds from the “New World” to Europe. The European peasant looked upon corn as an easy-to-produce miracle crop; it spread quickly as a food crop, and the peasants became dependent on corn.

Corn was the beginning of the end for millions of people. Entire villages and towns came down with a plague of skin disease (dermatitis), dementia, diarrhea, and death (the four Ds). Pellagra had arrived.

Productivity faltered and a dullness settled over entire communities as a result of the people’s affliction with a vitamin B3 (niacin) deficiency.

Over the following centuries many good observers connected either a deficient or toxic diet with pellagra; however, physicians and community leaders rejected the idea. Theophile Roussel, a French physician, demonstrated that pellagra was related to the poor peasants’ regular consumption of corn, and in 1848 he finally convinced the French government to stop encouraging the planting and consumption of corn. Pellagra just about disappeared from France, but the dependence on corn and the “epidemic” of pellagra continued in Italy and Spain, especially when the economy was troubled and peasants were driven to cultivate and live almost entirely on corn.

Corn in and of itself is not poisonous. However, the little niacin that occurs naturally in corn is not easily used by humans, and the resulting deficiency in a high-corn diet produces the skin disease (dermatitis), dementia, and diarrhea of classic pellagra.

The Indians of the New World subsisted on a corn-based diet for centuries without ill effect because their diets also included beans, squash, chilies, and coffee—common foods that are rich in niacin. In Mexico the Indian women soaked the corn in lime water after they ground the grain for tortillas, which increased the availability of the small amounts of niacin naturally found in the corn.

Pellagra sufferers, including many poor white farmers and slaves in the old American South, consumed bowl after bowl of cornmeal mush. During good years other foods were often consumed, but they were not available in adequate amounts to prevent pellagra. The universal diet for slaves and poor whites was called the three-M diet: meat (fatback), corn meal, and molasses, and, as predictable as gravity, it produced dermatitis, dementia, diarrhea, and death. Pellagra had come to America.

The pellagra plagues were rarely diagnosed properly. Pellagra was referred to as a “Negro disease” or “black tongue,” and it occurred most frequently during droughts and economic downturns. In 1928, 7,000 people in the American South died of pellagra. Physicians originally thought it was an infectious disease, and they created a network of sanitariums known as “retardation centers” to isolate those afflicted with pellagra, in the same manner as the system of isolation that was common in tuberculosis sanitariums. Doctors were so convinced that pellagra was “caused by a germ” that, despite overwhelming evidence that pellagra was in fact a nutritional-deficiency disease, they insisted on continuing to ask, “Where is the germ?”

The initial clue that pellagra could be caused by diet was revealed when Casimir Funk found that beriberi, another disabling and fatal disease characterized by dementia and congestive heart failure, could be cured by the supplementation of “vital amines,” substances that could be identified in food. The anti-beriberi vitamin that Funk had identified was thiamine (vitamin B1). At the same time, Funk identified other food compounds, including niacin (vitamin B3) from rice polishings (bran). But since niacin did not have any positive effect on beriberi, he put it on the back burner.

Another investigator, Carl Voegtlin, vigorously championed the theory that pellagra was a deficiency disease and chastised those who dismissed the theory that a deficient diet was the cause of pellagra.

Eventually, Joseph Goldberger, an objective and determined individual, was sent to the old American South to identify the cause of pellagra once and for all. Goldberger visited small southern communities and towns where grown men lay dull and depressed against buildings from the scourge of pellagra. Wide-eyed, crying kids clawed and scratched at the painful and pruritic skin lesions that typified pellagra.

The wives and mothers of the pellagra sufferers, also debilitated by pellagra, suffered from hallucinations, dullness, and low energy, and they spent their days preparing corn pone, corn muffins, corn coffee, cornmeal mush, corn bread, and grits. Since they couldn’t work, corn was all they could afford.

After visiting a publicly-funded orphanage, Goldberger was sure that pellagra had a nutritional cause. The younger children wracked with pellagra were bedridden, covered with skin lesions, and depressed. The older children tended to be normal and free of pellagra. They were able enough to work odd jobs and earn small amounts of money that they used to buy fruit, vegetables, and eggs to improve their diets. By then Goldberger recognized that Voegtlin was correct: the cure for pellagra was simply a proper diet!

To prove the diet connection, Goldberger provided milk, meat, and eggs for all of the children. Within days, children who just days earlier could not participate in any activities were able to smile and get out of bed. The plague of pellagra in the orphanage had ended.

Goldberger had ended pellagra in the orphanage by providing a niacin-rich diet to the afflicted children. However, another government team released information claiming that a species of fly was spreading the pellagra “microbe” with its sting. Physicians and the general public, familiar with the “germ theory” of disease transmission, quickly accepted the microbe theory as the cause of pellagra.

Now highly motivated and energized to refute the microbe misinformation and get the truth out to the 170,000 Americans suffering from pellagra, Goldberger brushed the red scales from the weakened legs of pellagra patients, blended them with the foul-smelling secretions and mucus, and injected the slime into himself and his family. He ate the awful concoction right in front of startled colleagues. Goldberger and his family didn’t get sick, and the microbe theory for the cause of pellagra was dead.

Goldberger then produced pellagra in prison volunteers who participated in exchange for an early release. After this he reversed their disease with a diet rich in yeast, meat, and milk. Goldberger didn’t discover what was in the food that was necessary to prevent or cure pellagra, but he was able to prove that pellagra was not infectious and that corn itself was not a poison.

It wasn’t until 1937 that Conrad Elvehjem purified the anti-pellagra vital amine, niacin (vitamin B3), one of the fractions previously isolated by Funk years earlier.

Between the years of 1880 and 1883, more than 6,000 Japanese sailors died as a result of the paralysis, dementia (Korsakoff’s syndrome), and cardiac arrest (enlarged heart/congestive heart failure) that typify beriberi. In 1886 only three Japanese sailors died, and in 1887 none died.

There were no vaccines then, and no microbe was isolated. But because beriberi was caused by a dietary deficiency, changing the diet to include sources of thiamine (vitamin B1) could prevent and cure the disease.

Kanehiro Takaki, a Japanese naval surgeon, was able to prove that a diet made up primarily of polished rice was the cause of the beriberi of the Japanese sailors. Those afflicted with beriberi typically walked with a swaying, sheep-like gait and clinically exhibited a general paralysis and dementia, while the well-fed officers on the same ships were less likely to be affected by beriberi.

The Riuyo, a Japanese naval ship with a complement of 276 men, returned from an extended voyage in 1883 with 25 dead sailors and 144 other crew members stricken with beriberi. In 1884 Takaki was able to convince the Japanese naval command to allow him to join the crew of a second ship going on an identical voyage. Takaki believed that beriberi was caused by an “imbalance in the carbon and nitrogen in the diet.” He was incorrect about this explanation for the cause of beriberi, although he was on the correct path: it was caused by their diet!

On the second ship, the standard diet of polished rice was replaced with the British naval diet consisting of oatmeal, vegetables, and condensed milk. This dietary change was met with skepticism and resistance by the Japanese crew, who did not want to eat British style: fourteen sailors scoffed at the “barbarous food” and lived on polished rice that they had smuggled aboard. When the ship returned to port, only the fourteen who had smuggled the polished rice aboard had developed beriberi.

The news of Takaki’s success in preventing beriberi with a change in diet fell on deaf ears in Europe. Takaki’s report was thought of as shallow and childish because his theory didn’t include germs.

Had they listened, the Dutch colonists in the Dutch East Indies would have saved enormous amounts of money by not having to bury entire work gangs and retrain new ones, who in turn had to labor with congestive heart failure and paralysis.

Dutch troops fell ill from beriberi so rapidly after arrival in Indonesia that they couldn’t suppress native revolts in northern Sumatra. Beriberi continued to plague the Dutch troops until Christian Eijkman was sent by the Dutch government to determine what bacterium was causing beriberi.

Eijkman had no success in transmitting the disease to healthy chickens by injecting them with “infectious blood.” He was close to throwing up his hands and going home. Just before he was ready to leave, he looked out the window and saw his entire flock of chickens weaving and reeling around the courtyard like “drunken sailors”—they had developed beriberi!

The question was: Why were they all sick? Six months later the entire flock recovered. It turned out that six months earlier the chickens had been fed the expensive polished rice by a lazy kitchen employee. When a supervisor corrected the mistake by feeding the flock the less expensive brown rice, the beriberi disappeared. The waste, the rice bran, was the cure!

Eijkman added to Takaki’s theory by stating that there was something in the polished rice that caused beriberi. Eijkman’s theory was summarily dismissed by the authorities who believed that polished rice was a kind gift to the hungry natives, and his theory was also rejected by physicians who declared that “beriberi was caused by an infection from a bacterial germ.”

The steam-driven rice mills that Westerners believed were civilizing the Far East were actually causing the beriberi plague, but no one wanted to believe it could be.

After the Spanish-American War, in a kindly effort to engender good will, the United States government introduced polished rice to the Philippine prison system. Immediately, the number of beriberi cases skyrocketed. In 1900 there were no prisoners afflicted with beriberi, but by January of 1902 there were 169 cases, and by October of the same year there were 579 more cases. It seemed as though nothing could stop the “outbreak” of beriberi: within ten months of the first cases, 5,000 prisoners were affected and many had died.

Whole rice with the bran was reintroduced, and by February of the next year the beriberi scourge had ended. Eijkman’s successor, Gerrit Grijns, a Dutch physician, had come to the realization that there was some “protective substance” in the rice polishings (rice bran) that was essential to health.

Another physician, W. L. Braddon, investigated different tribal groups in Malaya and found that those peoples who ate polished rice suffered from a high rate of beriberi, while those that ate whole grain rice did not. Braddon blamed “toxins” in the rice for beriberi, so polished, white rice was accepted as the cause of beriberi, but the toxin theory was incorrect.

Ten years later, Jansen and Donath finally identified the beriberi-protective vital amine: thiamine (vitamin B1).

Scurvy was another common, debilitating, and fatal scourge of sailors for thousands of years. It was accepted that a high percentage of sailors would die of scurvy on each voyage. Within two to three months at sea their teeth would fall out, their gums would bleed, and they would fall overboard from general weakness or die from an internal hemorrhage.

Over history, millions of townspeople, soldiers, sailors, and prisoners have died from scurvy. Most of these deaths could have been prevented or cured. Unfortunately, as in the case of beriberi, the correct observations of the few on the front lines of the disease were ignored, and in some cases were vigorously rejected by the “educated and powerful.”

“Scurvy was responsible for more deaths at sea than storms, shipwreck, combat, and all other disease combined,” writes historian Stephen Brown. “Historians have conservatively estimated that more than two million sailors perished from scurvy during the Age of Sail—a time period that began with Columbus’s voyages . . . and ended with the development of steam power . . . on ships in the mid-nineteenth century.”

The story goes that a young British sailor, Zoe, had signed on for a sea voyage in London, and after three months at sea eating salted meat, beans, and biscuits his gums began to bleed. He had developed a classic case of scurvy. The ship’s captain set Zoe ashore on a small island and left him to his own devices.

When he became very hungry, Zoe began to gorge himself on grass and seaweed. Within a few days he had the strength to get up and walk—and found he was cured! Another ship picked him up a few weeks later, and his tale of how the green grass and seaweed had reversed his scurvy spread like wildfire.

Most plants and animals (except for the guinea pig) can store vitamin C. However, humans are unable to store vitamin C and require a daily source of it.

In 1564 a Dutch physician recommended a daily supply of oranges for sailors to prevent and cure scurvy. In 1639 John Woodall, an English physician, prescribed citrus, including lemons and limes, for British sailors to prevent scurvy.

In 1601 the English navigator Sir James Lancaster wrote that lemons helped to prevent scurvy.

James Lind, a Scottish naval surgeon, finally eliminated scurvy in the British navy after researching the legend of Zoe and his miraculous self-cure. In 1753, as he recorded in his A Treatise on Scurvy, Lind divided sailors with scurvy into six separate groups. Each group was provided with different foods, and only the sailors who consumed lemons and oranges recovered rapidly and completely. Lind did not prescribe a specific treatment as a result of his study, so it took another forty-two years before the British navy adopted his recommendations.

The British navy delayed another fifty years before officially supporting the concept of providing citrus to sailors to prevent and cure scurvy.

Finally, after two hundred years of controversy, citrus was recognized as the prevention and cure of scurvy at the end of the eighteenth century, far too late for Vasco da Gama’s voyage around the Cape of Good Hope in 1497, on which he ended up having to bury at sea 100 of his 160-man crew, dead of scurvy.

Jacques Cartier’s crew suffered even more: ninety percent of them had scurvy. Many were disabled by edematous legs blackened with subcutaneous hemorrhage. Local Indians rescued them by providing a vitamin C-rich tea brewed from the bark of the white cedar.

Even with the general success of fruit and vegetables in preventing and treating scurvy, including the efforts of the great Captain Cook, whose crewmen were legendary for their good health (he required them, under threat of the lash, to eat up to twenty pounds of onions each week), progress in the prevention of scurvy lagged for lack of official support. Physicians failed to agree universally that scurvy was a nutritional deficiency, and there was confusion resulting from the misdiagnosis of sailors who had scurvy, beriberi, and pellagra all at the same time.

In 1795, a year after Lind’s death, two surgeons were successful in getting the British naval command to routinely provide lemons and limes to the ship’s crew to prevent and cure scurvy—thus the nickname “limey” for British sailors.

Pure vitamin C was finally identified, through a laboratory accident, by Albert Szent-Gyorgyi, who said, “Vitamins . . . will help us reduce human suffering to an extent which the most fantastic mind will fail to imagine.”

Rickets was recognized as a disabling malady from medieval times through the smog-choked skies of the Industrial Revolution and deep into the twentieth century. As rickets usually didn’t produce the death of a sufferer, few had the will or interest to find the cause and cure. The general masses actually thought that people with twisted spines, bowed legs, and enlarged joints were normal!

In 2013 rickets reappeared in children in the industrialized world, with a reported increase of 400 percent between 1995 and 2011. This is a result of fears of an increased risk of skin cancer from exposing the skin to the sun, the widespread use of sunscreens, fear of cholesterol in egg yolks (a good source of vitamin D), and instructions from pediatricians to avoid supplementation of vitamins and minerals to children because of overdose concerns. Again, rickets has become the scourge of our children as a physician-caused disease.

Children with rickets show the classic bowed legs, swollen wrists, and swollen ribs. In October of 2013, Dr. Sally Davies, Britain’s chief medical officer, described the return of rickets as “appalling.”

To Wallach, the most interesting primate and human disease cases were the ones that involved nutritional deficiencies. Despite his interest, the golden age of critical and detailed research into the essentiality of various minerals in animal nutrition occurred between 1920 and 1978. Any remaining curiosity about the role of mineral deficiencies in human pathology was dealt a crippling blow by the discovery of penicillin in 1938, and a second body punch was the discovery of cortisone in 1942.

The coup de grace for interest in dietary mineral deficiencies came to the ranks of the medical community in the 1980s during the heady drive to find patentable genetic-engineering techniques to treat everything from bowel gas to cancer. While the basic study of nutrition has become the stepchild of 21st-century science, Wallach observed that “the unquestionable basic mineral needs of our human flesh cry out for attention from the waiting rooms of physicians’ offices, hospital wards, and morgues.”

Obesity and overweight are ubiquitous problems in America; as of 2010, Americans were the most obese nation in the world. The American eating habit has changed from three square meals per day to nibbling 24/7—nibble, nibble, nibble all the way home! This behavior in horses is called “cribbing.” The same behavior in humans is referred to as pica or the “munchies”: a seeking, a craving, with a licking and chewing behavior that has its genesis in mineral deficiencies. No known vitamin, protein, or calorie deficiency initiates this pica behavior. Nor will supplementing the diet with vitamins or eating sugar, carbohydrate, fat, or protein quench it!

Cribbing is the name given to a particular form of pica in domestic animals. An example of cribbing in animals is when they chew or gnaw on a wooden feed box, fence, hitching post, stall rail, gate, or barn door. A good farmer knows that when a horse cribs, the animal really has an obsessional craving for minerals. The farmer or rancher supplements an animal’s diet with minerals to: (1) preserve the animal’s health, (2) save the animal’s life, (3) save on veterinary bills, and (4) save having to rebuild the fence or barn, since a mineral-deficient horse will literally eat the fence or barn looking for minerals!

Essential minerals never occurred in a uniform blanket in an individual field or over the crust of the earth; they have always occurred in veins, much like the chocolate in chocolate ripple ice cream. Whatever essential minerals may have existed in a particular area of the earth’s crust have been severely depleted through intensive modern agriculture. It should be no surprise, then, that the mineral-deficient animal behavior of cribbing also appears in mineral-deficient humans as pica, the “munchies,” binge-eating, and cravings. As a result of being an essentially mineral-deficient nation, America has become the number-one obese nation in the world—“We’re number one!”

“Salt appetite” or the “munchies” dominate the American scene, and these deficiency symptoms are universally observed at dramatic levels in pregnant animals and pregnant humans. All vertebrates will exhibit these symptoms when they become mineral deficient. This desperate state is a physician-caused plague. Dieters, athletes, vegetarians, vegans, meat eaters, embryos, fetuses, children, teenagers, young adults, adults, baby boomers, seniors, and centenarians are all mineral deficient and are universally exhibiting the symptoms of pica, cribbing, and the “munchies” that used to be common only in expectant mothers!

From antiquity, the cribbing and pica behaviors in humans have been recorded in the written records of all societies. The universal presentation of these behaviors was in relation to pregnant women. The Hawaiian King Kamehameha’s mother, Queen Kekuiapoiwa, had cravings for eyeballs. Although she specifically demanded a chief’s eyes, she was given the salty eyes of sharks, which she ate with ravenous abandon.

The snack-food and fast-food industries are aware of this relationship between the behaviors of pica, cribbing, cravings, the munchies, binge-eating, salt-hunger, and other behaviors demonstrating mineral deficiencies, and they formulate and engineer their products so that “You just can’t eat one!” Unfortunately for humans, our bodies will temporarily interpret sugar (and sugar substitutes) and salt intake as a fulfillment of the cravings for essential minerals. Historically, the consumption of salt to satisfy a pica behavior may have had some value because raw sea salt did oftentimes contain small amounts of trace minerals and rare earths.

Today, contrary to what doctors would have you believe, salt is not intrinsically harmful. It does, however, present a problem: it allows our bodies to falsely perceive that we are getting sufficient minerals when we eat salt. This is the mineral equivalent of the “empty calorie diet” concept. Just as processed white flour and sugar calories satisfy hunger while providing no protein, essential fatty acids, or vitamins, our salt intake confuses the body into believing it is consuming adequate amounts of all essential minerals.

Farmers and husbandrymen use the salt-hunger behavior to ensure the consumption of trace minerals in livestock by incorporating trace minerals into the formulation of salt blocks containing a minimum of eighty-five percent sodium chloride. Anything less than eighty-five percent sodium chloride would be ignored by the animals, even if they were to have major mineral deficiencies. The salt-consuming animals never get high blood pressure or stroke, even though by design they will consume 0.5 percent of their diet as salt. For them, adequate salt intake obtained by licking “trace-mineral salt blocks” guarantees their proper mineral intake.

Most physicians (even “alternative physicians” of all types) would have you believe that you need little or no salt. They must think that humans are dumber than cows, for the first food item the successful husbandryman provides for his animals in a pasture is a trace-mineral salt block! At the same time, the multi-billion-dollar-a-year snack food industry is well aware of the human requirement for salt (NaCl) and other elements and minerals. Even the USDA reports that ninety-five percent of Americans of all age groups are mineral deficient.

In July of 1993 thousands of Americans, particularly people in the upper Midwest and on the East Coast, were swooning and fainting during a sweltering heat wave that soared above 110 degrees F. Seven hundred thirty-three Americans, in fact, died during the heat wave. The effects of the heat wave on the American population were so dramatic that the daily body count was published every morning in the local and national newspapers as though they were fallen American soldiers who had been killed in a far-off place defending American interests. Seemingly, no one knew what to do!

The state medical examiner of Pennsylvania said, “We don’t know why so many have been affected by the heat—half of the dead and hospitalized were people who had air conditioners.” That statement was especially odd when one thinks of the millions of people who live in terrible deserts with temperatures above 120 degrees F in the shade. They don’t have air conditioners, and they don’t die from the heat. Could it be some genetic shield that protects them?

The cause of this disaster was screaming at the medical profession, but no one heard, voiced, or printed the appropriate public warning. Wallach had lived in the Kalahari Desert while in Africa, had taken basic medical physiology, and was already a well-respected pathologist of animals and humans. He knew the horrible toll of the heat wave was the result of a simple salt (NaCl) deficiency!

Hence the disaster was a physician-caused mass heat stroke! It was your kitchen-variety heat stroke that any Boy Scout could diagnose, recognize, and remedy instantly with a glass of water spiked with a teaspoon of salt! Yes, the cause of these thousands of American casualties was a simple salt deficiency, and almost all of those who fell ill and died had been placed on a salt-restricted diet by “their medical doctors.” This was a physician-caused disaster.

The human tragedy of the heat wave of ’93 was the direct result of the medical profession’s paranoia about salt. They put their human charges on a low-sodium or low-salt diet for prevention of hypertension and heart disease. There is not a single double-blind study that will show any benefit of a salt-restricted or low-sodium diet.

About a week after the carnage, the state medical examiners again marveled from their pulpit, “The only common denominator of those who had died or were hospitalized during the heat wave was that they had been diagnosed with heart disease or high blood pressure.” Yes, as predicted, their physicians had placed each one on a salt-restricted or no-salt diet, and none were contacted to warn them that during the heat wave it would be in their best interest to drink eight glasses of water each day, and, oh yeah, by the way, “don’t forget to add a teaspoon of salt to each glass.” Those who quickly made it to hospitals and were successfully treated were given IV saline solution—salt water!

Yes, the medical profession must think the general public is dumber than a cow. Unfortunately for the general public, it is the medical profession itself that is dumber than history.

Aristotle noted in Historia Animalium VIII that “sheep are healthier when they are given salt.” Sheep never get hypertension (high blood pressure).

Where salt was rare, it was traded ounce for ounce for gold, brides, or slaves. Salt, salt-rich clays, and wood ashes were the first mineral food supplements used by man in the dawn of time. The Roman statesman Cassiodorus was quite observant when he said, “Some seek not gold, but there lives not a man who does not need salt.”

Rome’s major highway was called the Via Salaria, or the Salt Road. Soldiers used the Salt Road to carry salt up from the Tiber River, where barges brought salt from the salt pans of Ostia. Soldiers “worth their salt” were paid a “salary.” The word salary is derived from salarium, a soldier’s “salt ration.”

Marco Polo reported salt coins and discs in Cathay. Salt discs in Ethiopia were “salted away” in the king’s treasury. The production of salt as a food supplement for man and beast is as old as civilization itself. Salt was produced in shallow ponds of seawater through evaporation and by mining rock salt from large land-locked deposits.

The rock salt mines of the Alps (Salzberg, Hallstatt, and Durrenberg) played an essential role in the development of cultures in ancient prehistoric Europe. The Hallstatt salt mine is one of the oldest commercial salt businesses on Earth. It is located fifty miles from Salzburg (“Salt Town”), and salt has been mined there since the early Iron Age. Salzberg (“Salt Mountain”) contains a salt deposit 2,000 feet wide and 2,500 feet deep.

Tools found in the salt mines date back to the Bronze Age (1400 BC). Early communities sprang up around salt springs as humans followed and hunted wild herbivores that were drawn to the springs by their craving for salt—the behavior of pica and cribbing. Wallach saw this same pattern in Africa: the muddy and cloudy water supplies that contained salt and minerals were heavily used by the game animals, while water that was sparkling clear and was mineral and salt-free was rarely consumed.

Between the 9th and 14th centuries, peat was soaked in sea water, then dried and burnt, and the resultant ash was extracted with sea water. Many millions of tons of peat, a plant source of minerals, were harvested for this mineral/salt dietary supplement process for commercial trade.

Mesopotamian towns specialized in the salt-production industry and transported salt up the Tigris and Euphrates Rivers. Jericho (8000 BC), near the Dead Sea and the salt mountain of Mo, was one of the oldest known agricultural communities that participated in the salt trade.

The merchants of Venice developed an elaborate salt trade, and by the 6th century the salt trade from the villages surrounding the city was its main business. By 1184 Venice controlled the export of Chioggia salt, and by the 14th century it was supplying salt to Alexandria, Cyprus, and the Balearic Islands.

The salted-herring business developed in the 1300s. The Dutch perfected the process of salting fish as a method of preservation. At its peak, this industry alone produced three billion salted herring annually and used 123 million kg of salt per year. At the beginning of the 20th century, salt pork and salt herring provided the main source of animal protein for most of Scandinavia, with a daily per-person consumption of 100 grams, or about a quarter of a pound. (Modern physicians want their patients to consume less than three grams of salt per day.)

Perhaps the most famous and most romantic modern-day part of the salt industry occurred in Africa. Twice each year great camel caravans carried salt slabs from the Taoudeni Swamp in the Sahara to Timbuktu in Mali. Two thousand to twenty-five thousand camels (only 25 percent of the animals survived the round-trip journey) were used to carry over 300 tons of salt slabs to their destination 720 km away.

In other parts of salt-poor Africa, humans developed the practice of drinking cattle blood and urine to obtain salt. The residents of the Sierra Leone coast gave all they possessed, including their wives and children, in exchange for salt—after all, salt was a necessity of life. Salt, like other elements and minerals, is not distributed equally around the earth and is therefore coveted by the have-nots. It is said by African tribesmen that “he who has salt has war.”

The British imposed a despised salt tax on India. Gandhi (1924) published a monograph (Common Salt) in protest of the government monopoly. Gandhi pointed out that the grains and the green foods of India were very low in salt. Because of the vegetarian habits of the majority of tropical Indians, they required a significant salt supplement to their diet.

In 1930 Gandhi led 78 “rabid” supporters on a 300 km “Salt March” protest from Ahmedabad to the sea. He swam in the sea, picked up a crystal of salt on the beach, and then walked back to Ahmedabad, where he was promptly arrested and thrown into prison. Angered, 100,000 Indians revolted against the salt tax and were arrested after they too picked up untaxed salt. The British salt tax was eventually repealed in 1946, at the end of World War II.

The death of soldiers from sodium loss was historically common during military operations in tropical countries. Soldiers in the desert could lose up to twenty-four pints of water per day as sweat. And that much sweat was not just water; it was a soup that contained all the nutrients floating around in their blood, including up to seventy to one hundred grams of salt per day.

Salt is known as the most universal and most widely used food supplement and condiment. So great is the human craving for salt that it is obvious salt itself is in fact necessary to the health, and even to the life and survival, of man. Yet it is claimed by modern medicine to be dangerous. As we have seen, the only danger from salt arises when one consumes it in response to a craving produced by a mineral deficiency, in which case the intake of salt will mask the absence of other minerals. The average salt requirement for a human is approximately 0.5 percent of the dry weight of the daily diet, or six to ten grams per day. If you are very good at following a doctor’s advice and restrict salt to less than one gram per day, you can increase your risk of a heart attack by as much as six hundred percent.
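
As a quick consistency check, the sketch below applies the 0.5 percent rule to an assumed dry weight of the daily diet and compares the result with the stated six to ten grams per day. The 1,200 to 2,000 gram dry-weight range is a hypothetical illustration, not a figure given in the text.

```python
# Check: 0.5% of the dry weight of the daily diet vs. the stated 6-10 g/day.
# The dry-weight values below are assumptions for illustration; the text gives
# only the percentage and the resulting grams-per-day range.

SALT_FRACTION = 0.005   # 0.5 percent of dry diet weight, per the text

def daily_salt_grams(dry_diet_grams: float) -> float:
    """Salt requirement implied by the 0.5 percent rule, in grams per day."""
    return dry_diet_grams * SALT_FRACTION

if __name__ == "__main__":
    for dry_weight in (1200, 1600, 2000):   # assumed dry diet weight, grams/day
        print(f"{dry_weight} g dry diet -> {daily_salt_grams(dry_weight):.1f} g salt/day")
```

Under these assumed dry-diet weights, the 0.5 percent rule does indeed land in the six to ten grams per day range quoted above.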

In addition to legitimate salt cravings, the human craving for other minerals is well documented:

A catalogue of bizarre instances of human pica is found in the doctoral thesis of Augustus Fredericus Mergiletus (1701). In men, he recorded one individual who ate leather, wood, nestlings, and live mice. A second consumed woolen garments, leather, a live cat, and small mice. A third ate cat’s tails and decomposed human flesh infested with maggots.

In women, Mergiletus recorded cannibals who ate human flesh, including one horrible lady who “lured children to her house with the promise of sweets, killed them, and pickled them for storage and for consumption at a later date”—a female version of Jeffrey Dahmer! The murders of the children were only discovered when the woman’s cat stole the pickled hand of a child and carried it over to the neighbor’s house.

Girls who ate their own hair, cotton thread from their own clothes, handfuls of raw grain, and lizards have been documented.

Cooper (1957), in her classic report on pica, refers to several ancient and medieval writers who emphasized the occurrence of pica in pregnant women. Aetios noted that pregnant women crave various and odd foods, some salty, some acid, saying that “some crave for sand, oyster shells and wood ashes.” He recommended a diet that included “fruits, green vegetables, pig’s feet, fresh fish, and old tawny fragrant wine.”

Boezo (1638) noted that pica occurred most often in pregnant women. Boezo saw pica as a physiological problem and was the first to mention iron preparations as a treatment for pica. He suggested “one and one half scruples of iron dross taken for many days as wonderfully beneficial for men and women.”

Boezo also noted the case of “a virgin who was accustomed to devour salt in great quantities from which chronic behavior she developed diarrhea and wasting.” She probably had Addison’s disease.

Christiani of Frankfurt (1691) reported that a woman had eaten 1,400 salt herring during her pregnancy.

LeConte (1846) suggested that animals eating earth do so because of a “want of inorganic elements.”

The most common descriptions of pica and cribbing by Mergiletus were of women’s desire for clay, mud, and mortar scraped from walls, just like modern children who eat caulking and lead paint. Wallach has often said that children who eat lead paint are screaming for minerals for their mineral-starved bodies. Give them minerals and they won’t eat lead paint.

Pica is no stranger in modern times. The substances most frequently reported to be consumed as the result of pica in humans include paper, metallic gum wrappers, chewing gum, ice, dirt, coal, clay, chalk, corn starch, baking powder, pebbles, wood, plaster, paint, chimney soot, hair, human and animal feces, and cloth.

In March 2012 it was reported on the national news that a three-year-old girl, Natale Hayhurst of Terre Haute, Indiana, loved to eat light bulbs, paper, cardboard, sticks, dirt, aluminum diet soda cans, magnets from shower curtains, and plastic bottles and toys. There was a complete story in most local newspapers about her “rare condition.” However, none of the reports mentioned that her pica was due to mineral deficiencies, and when contacted, neither her parents nor any of the media showed any interest in interviewing Wallach about the obvious mineral deficiencies the poor child was suffering from.

On August 19, 2013, the British Daily Mail reported that Kelly-Marie Pearce, a pregnant 28-year-old woman, ate 5,000 bath sponges (twenty sponges per day) and bowls of foul, feces-contaminated sand from the bottom of a parrot cage during her two pregnancies.

The same Daily Mail article stated that “some develop pica because of stress, obsessive-compulsive disorders, a liking for fragrances or behavioral problems that would require cognitive behavioural therapy with an experienced psychologist.” How can professional health care providers so completely miss the diagnosis of mineral deficiencies?

Because of social constraints on our public behavior, most people under public scrutiny who suffer from mineral deficiencies and pica (aka “the munchies”) will chew gum, eat sugar (4.5 calories per gram), milk chocolate, or snack food, or drink soft drinks when their bodies are really screaming for minerals. Other socially popular ways to publicly respond to pica and the cravings of mineral deficiencies include smoking, alcohol, and drug use. Ice eating, known as “pagophagia,” is also common, particularly in iron-deficient children and adults.

The research confirms these observations:

Henrock (1831) attributed pica to “a paucity of good blood and a lack of proper nutrition.”

Waller (1874) reported that David Livingstone observed many cases of clay and earth eating (geophagia), a common form of pica frequently observed in pregnant women in central Africa.

Orr and Gilka (1931) and de Castro (1952) recognized that “edible Earths might be rich in sodium, iron and calcium.”

Gilford (1945) reported that pica was common in Kenya amongst African tribes (Kikuyu) living mainly on a vegetarian diet. Pica was absent in those peoples eating diets rich in animal flesh, blood, and bones (Maasai).

Nicolas Monardes (1493–1588), a Spanish physician, published Historia Medicinal. In the second volume is a scientific dialogue on the “virtues” of iron. However, iron was not generally accepted for medical use as a supplement until the 1600s, when Nicolas Lémery found iron in an analysis of animal tissue ash.

Dr. Thomas Sydenham (1682) recommended for all diseases involving anemia, treatment by “bleeding” (if the patient was strong enough) followed by a course of Dr. Willis’ “Preparation of Steel.”

In 1745 V. Menghini demonstrated that the iron in tissue was found primarily in red blood cells. In the same year, Dr. Willis’ “Preparation of Steel” was marketed as a patent medicine. It consisted of iron filings and tartar, roasted and given in wine as a syrup, or rolled into pills.

In 1850 it was reported in a medical journal that a woman who had lost three premature pregnancies was given “iron scales from the smith’s anvil” steeped in hard cider during her entire pregnancy. The woman’s appetite increased, and her digestion, health, and spirits improved. She delivered a full-term boy who was so strong that he could walk by nine months of age. At age five he was so strong and tall that he became known as the “iron baby.”

Dickens and Ford reported that twenty-five percent of all children ate earth.

Cooper (1957) reported a twenty-one percent rate of pica in American children referred to the Mother’s Advisory Service in Baltimore.

Lanzkowsky (1959) reported that twelve children with pica had hemoglobin values that ranged from 3 gm percent to 10.9 gm percent, with a mean of 7.89 ± 2.64 gm percent. The institution of iron dextran “resulted in a cure for pica in two weeks.” Again, if children are consuming or supplemented with adequate amounts of minerals they will not eat lead paint.

McDonald and Marshall (1964) reported on twenty-five children who ate sand. They divided the group in half, giving one group iron and the other group saline. After three to four months, eleven of the thirteen children given iron were cured of their pica behavior compared to three of the twelve given saline.
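
For readers who want a sense of how unlikely that split would be by chance, the sketch below runs a standard Fisher’s exact test on the counts reported above (11 of 13 cured with iron versus 3 of 12 with saline). Using SciPy here is simply a convenient way to do the arithmetic; it is not part of the original report.

```python
# Fisher's exact test on the McDonald and Marshall counts reported above:
# iron group: 11 of 13 cured; saline group: 3 of 12 cured.
from scipy.stats import fisher_exact

table = [[11, 2],   # iron:   cured, not cured
         [3, 9]]    # saline: cured, not cured

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3f}")
# A small p-value indicates the difference between the two groups is unlikely
# to be due to chance alone.
```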

Reynolds et al. (1968) reported on thirty-eight people with anemia who exhibited “pagophagia,” or ice eating, as the most common form of pica. Twenty-two of the thirty-eight had their pica symptoms disappear after correcting an iron depletion anemia.

Woods and Weisinger (1970) reproduced pagophagia experimentally in rats by withdrawing blood. The pagophagia in the anemic rats was cured when the anemia was cured. They also noted that pica and cribbing behaviors were not produced by vitamin deficiencies or cured with vitamin supplementation.

Two-thirds of the 153 pregnant women studied by Taggart (1961) developed cravings. The most common cravings were for fruit, pickles, blood pudding, licorice, potato chips, cheese, and kippers. Cravings for sweets, vegetables, nuts, and sweet pickles came in second place.

Phosphate appetite was described by LeVaillant (1796) as the anxious search by cattle in phosphate-deficient South African pastures for discarded animal bones (osteophagia). Phosphorus-deficient cattle also chewed wood (termed cribbing or pica behavior) and each other’s horns.

Osteophagia has been reported in many phosphorus-deficient wild species of herbivores, including the reindeer, caribou, red deer, camel, giraffe, elephant, and wildebeest.

It has been demonstrated that calcium-deficient weanling rats, compared to calcium-fed controls, will consume large amounts of a lead-acetate solution even though it tastes bad.

Lithium was discovered in 1817 by Johan August Arfwedson, working in the Stockholm laboratory of the Swedish chemist Jöns Jacob Berzelius. Its use in mental illness dates back to 400 AD, when Caelius Aurelianus prescribed waters containing lithium for mental illness.

In the 1840s, it was reported that lithium salts combined with uric acid dissolved urate deposits. Lithium was then used to treat kidney stones ("gravel"), gout, and rheumatism, as well as a plethora of physical and mental illnesses.

Health spas picked up on the notoriety that lithium was receiving and oftentimes marketed themselves with exaggerated claims about the lithium in their hot mineral springs, even going so far as to add the word "lithia" to their names to woo the general public. Because of the general public's easy access to lithium salts at hot springs and spas, physicians looked elsewhere for arthritis therapies.

There are seventy-five metals listed in the periodic table, all of which have been detected in human blood and other body fluids. We know that at least sixty of these metals (minerals) have a direct or indirect physiological value for animals and man. Not a single function in the animal or human body can take place without at least one mineral or metal cofactor.

“On an inorganic chemical basis little distinction can be made between metals. Both metals and non-metals enter actively into chemical reactions. The difference reveals itself in the physical properties. By common agreement, those elements that possess a high electrical conductivity and a lustrous appearance in the solid state are considered to be metals,” according to Bruce A. Rogers, metallurgist and physicist.

Minerals truly govern our lives whether we recognize it or not. Sadly, the current medical "wisdom" on salt, and the medical profession's inability to recognize the various forms of pica exhibited in America, show that we have failed to understand the true effects and importance of the ubiquitous mineral-deficiency diseases.

Most Americans suffer from diseases that the medical community believes to be genetic when they are in reality preventable and/or curable with simple nutritional supplementation of minerals and dietary changes. These diseases are examples of conditions that can be explained and addressed through our knowledge of epigenetics.

Several racial and religious groups of Americans suffer more than others from "the lack of knowledge and laziness and greed of the medical community," according to the CDC.

African-Americans

“Black Americans have shed their chains made of iron and the white-only ballot, and yet, have again been enslaved by new slave masters—doctors in white coats with fraudulent chains of DNA and a conjured up black gene.”

—Dr. Joel D. Wallach, BS, DVM, ND and

Dr. Ma Lan, MD, MS, LAC

Black Gene Lies: Slave Quarter Cures

The African-American community (aka Blacks) has long been abused by the medical industry, which falsely teaches them that their common diseases (such as high blood pressure, sickle-cell anemia, type 2 diabetes, obesity, heart disease, arthritis, and cancer), as well as their high rate of birth defects and shorter-than-average life spans, are generated by a terrible "black gene."

Since the days of slavery, Blacks have been taken advantage of by many, but the most insidious injustices have come from those in the medical community. The poor health outcomes crushing Blacks and the poor were "scientifically" rationalized as being genetically and hereditarily predetermined. Segregation, involuntary sterilization, nihilistic public health and educational policies, and incarceration were considered "solutions" for the nation's poor, disabled, mentally challenged, or racially "inferior" populations and the problems they created.

Sympathy for the poor during the Great Depression, and the public's reaction to the rise of Hitler and Nazi Germany's eugenic and dysgenic ideologies and policies, helped to slow and neutralize the spread of these hateful practices in America. By the end of World War II many of these activities had gone underground or been almost completely discredited.

The healthcare environment perpetuated the U.S. tradition of misusing Black, poor, socially disadvantaged (prisoners), and disabled (mentally challenged children) patients for medical experimentation, eugenics, or medical school class demonstrations. These practices were defended on the grounds of "scientific research," the health professions' need for subjects for "teaching purposes," or studies to limit defective genes (and thereby limit the welfare rolls).

In the mid-1920s, highly respected African-American social scientists, such as E. Franklin Frazier, argued that white physicians regarded their black patients as "simply experimental material." This gap between medical ethics and U.S. health system practices played out on several levels, including, but not limited to, the use of uninformed or poorly informed patient populations for research that resulted in injury or, at the very least, compromised their health outcomes.

This includes performing excessive or unnecessary surgery on patients, often under coercion or without their consent, especially for eugenic purposes. It includes tailoring patients’ care and treatment according to the professional training or research requirements of the institution’s surgical demonstrations and “teaching and training materials,” instead of attending to actual medical needs driven by the requirements of the individual patient’s case. This also includes denying, inadequately treating, or abusing patients in need of simple basic care or services because they fail to have rare diseases, are “uninteresting” or “routine” cases, or do not meet the standards for research protocols.

Between 1930 and 1945, the picture of this misuse and abuse changed as the research excuse for these practices became more accepted. This was largely the result of the Flexner/Johns Hopkins model for medical education and biomedical research that had been adopted at a network of major American medical facilities. The infamous Tuskegee syphilis experiment, initiated in the 1930s, is an example:

Initially implemented as part of a U.S. Public Health Service/Rosenwald Fund rural syphilis public health and treatment program in the late 1920s, the non-treatment phase of approximately 400 syphilitic Black men with 200 uninfected controls began in 1932 in Macon County, Alabama (Tuskegee, the county seat, is home of the famous Tuskegee Institute). The purpose was to study the effects of syphilis on untreated African-American men. None of the patients were specifically informed that they had syphilis. They were told they were being treated for “bad blood,” but no treatment was given during the study.

Fueled by patient deception and professional paternalism that viewed the patients as laboratory animals, the decision was made to block patients from being informed of or receiving the effective and standard treatment of penicillin available after World War II, and the 40-year experiment continued until 1972. It resulted in 100 deaths from untreated syphilis, scores of blind and demented participants from the ravages of the disease, numerous wives who contracted syphilis, and their children born with congenital syphilis.

The study produced numerous presentations at medical meetings and more than 13 scientific papers. But it was scientifically flawed from the start, and most of the subjects received some treatment in order to render them noninfectious early in the study.

The ethical conflicts of using patients for inhumane and unethical purposes and the practice of overusing Black American patients for medical studies continued. Dr. John A. Kenney, one of the most influential and highly respected physicians in the United States, exposed the enormity of the problem and gave his perspective during this era. In his 1941 plea "that a monument be raised and dedicated to the nameless Negroes who have contributed so much to surgery by the 'guinea pig' route," he said:

In our discussion of the Negro's contribution to surgery, there is one phase not to be overlooked. That is what I may vulgarly, but at the same time seriously, term the "guinea pig" phase . . . one of that practically endless list of "guinea pigs." . . . (Un)told thousands of . . . Negroes have been used to promote the cause of science. Many a heroic operation performed for the first time on a nameless Negro is today classical. Even Negro physicians, surgeons, and nurses at times wince at the scenes . . . of Negroes used for experimental and teaching surgery.

One of the dark secrets of the American biomedical and health care systems between 1960 and 1980 was the "epidemic" of forced sterilizations and unethical surgery. Why the Tuskegee syphilis experiment raised such an outcry while the forced sterilizations "rated hardly a whimper" is a story of medical racism. Dr. Kenney went on to say that most of the victims of this sterilization and the preponderance of unethical surgeries "were the traditional targets of the scientific racists, Galton eugenics-oriented, hereditarian, social Darwinist, biological-determinist influenced U.S. scientific and health system—Black Americans, Hispanic/Latino Americans, lower middle-class and working class white families unable to afford the cost of proper medical care, the mentally challenged, the disabled, the incarcerated, the indigent, institutionalized children, and the unemployed."

By 1980 the United States’ sterilization laws had been in place for more than 70 years; the first such law had been enacted in Indiana in 1907. Driven by Sir Francis Galton’s International Eugenics movement, 30 states and Puerto Rico ultimately passed similar forced sterilization laws. Most of these sterilization laws were based on the Model Eugenical Sterilization Law drafted before 1922 by Harry H. Laughlin, superintendent of the Eugenics Record Office (ERO) and coeditor of the Eugenical News; he also authored a book entitled, Eugenical Sterilization in the United States.

The Model Law required each state to appoint a state eugenicist responsible for enforcing compulsory sterilization laws. These laws were directed at the "feebleminded"; the insane; the criminalistic (including the delinquent and the wayward); the epileptic; the inebriate (including alcoholics and drug addicts); the diseased (including patients with tuberculosis, the syphilitic, the leprous, and others with chronic infectious and legally segregatable diseases); the blind, deaf, deformed, and physically disabled; dependent orphans; and "ne'er-do-wells," the homeless, tramps, and paupers. The state eugenicist's jurisdiction included all of the above whom he judged to be members of "socially inadequate classes."

According to Laughlin, of the 63,678 Americans sterilized under the eugenic laws between 1907 and 1964, 33,374 (52.4%) “were sterilized against their will for being adjudged feebleminded or mentally retarded, which in most of these states was defined as having an IQ test score of 70 or lower (this included the illiterate).”

Beginning with the Great Depression and World War II, “involuntary sterilization in the American South had increasingly been performed on institutionalized Blacks.” As Dorothy Roberts reported in her 1997 book Killing the Black Body: Race, Reproduction, and the Meaning of Liberty: “The demise of Jim Crow (laws) had ironically opened the doors of state institutions to Blacks, who (then) took the place of the poor Whites as the main target of the eugenicist’s (doctor’s) scalpel.”

In 1955 South Carolina's State Hospital reported that all 23 persons sterilized over the previous year were Black women. Of the nearly 8,000 "mentally deficient persons" sterilized by the North Carolina Eugenics Commission beginning in the 1930s, 5,000 were Black. The State Hospital for Negroes in Goldsboro, where all of the doctors and most of the staff were white, routinely operated on Black patients confined there for being criminally insane, feebleminded, or epileptic.

Before World War II, black men there were castrated or given vasectomies for being convicted of attempted rape, for being considered “unruly” by hospital authorities, or to make them “easier to handle.” None were asked for their consent.

According to Chase: "These victims of Galton's obsessive fantasies represented . . . the smallest part of the actual number of Americans who have in the (20th) century been subjected to forced eugenic sterilization operations by (doctors employed by) state and federal agencies." Ironically, by the 1960s, when the first generation of mandatory sterilization laws was repealed, a wave of new laws assaulting reproductive rights and a massive, unprecedented wave of forced sterilizations swept the country. Many of these sterilizations were paid for by the government, facilitated by new health financing mechanisms, hidden under the cloak of "expanded health services for the poor," and carried out by (doctors employed by) health delivery system institutions already in place. These programs disproportionately targeted Black American women.

In 1974 it was argued before Federal District Judge Gerhard Gesell, in a case brought on behalf of poor victims of involuntary sterilizations performed in hospitals and clinics participating in federally funded family-planning programs, that: “over the last few years, an estimated 100,000 to 150,000 low-income persons have been sterilized annually under federally funded programs.” A study discovered that nearly half of the women sterilized were black.

Dorothy Roberts revealed that in the 1970s, “Most sterilizations of black women were not performed under the auspices of eugenic laws. The violence was committed by doctors paid by the government to provide health care for these women.”

These operations were occurring at the same time that sterilization became the fastest growing form of birth control in the United States, reaching a peak of 1,102,000 in 1972 before dropping off to 936,000 in 1974.

Government officials estimated that an additional 250,000 sterilizations annually, hidden in hospital records as hysterectomies, could be added to the previous total. Blacks were disproportionately represented in these populations, and a new dimension compounded the system's potential for abuse: teaching hospitals performed unnecessary hysterectomies on poor Black women as practice for their medical residents. This type of abuse was so pervasive in the American South that these operations came to be known as "Mississippi appendectomies."

The last decennial American census (2010) showed that the American population is the third largest in the world at just over 300 million, surpassed in sheer numbers only by China, listed as number one at about 1.3 billion, and India, second at about 1.2 billion.

Also, and even more importantly, the data reported with the 2010 census showed that the average Black man lives to be 62, the average white man 75, and the average Hispanic man 80.5. Why is there an eighteen-and-a-half-year difference between the life span of a Black man and a Hispanic man? Doctors will say genetics and lifestyle. In reality, the Black man has been taught to overuse the medical system because of his medically created fears of his "terrible black gene," whereas the Hispanic man still uses grandma's home remedies, and as a result Hispanics simply don't get killed by doctors as often as the Blacks who overuse the medical system.

Two-thirds of the men in American prisons are Black, seventy percent of Black kids under the age of twelve are overweight, and almost forty percent are obese. The rates of dementia, obesity, and type 2 diabetes are higher in the Black community than in the white and Hispanic communities; the rates of ADD, ADHD, and autism are higher in Black kids; and the rates of gluten intolerance, skin problems, and asthma are higher in Black kids as well.

Amish, Mennonites, and Hutterites

The Amish can trace their heritage back to the Swiss Anabaptists of 16th century Europe. Unhappy with the faith and practice of the Catholic Church in Europe, Martin Luther lodged a protest in 1517. His revolt ushered in the Protestant Reformation, resulting in Protestantism becoming a permanent branch or sect of Christendom.

After a few years, restless students of the Protestant pastor Ulrich Zwingli of Zurich became frustrated with the agonizingly slow pace of the Protestant Reformation. The young revolutionaries chastised Pastor Zwingli and the Zurich City Council for continuing the baptism of infants and conducting the Catholic Mass.

Shortly after a confrontation with the city council, the young revolutionaries illegally baptized each other in a secret gathering on January 21, 1525. The simple religious service in a private home birthed a movement that would become a permanent branch of the Protestant Reformation.

The young rebels were given the name Anabaptists (“rebaptizers”), as they had already been baptized as babies in the Catholic Church.

The civil authorities and the local leaders of both the Protestant and Catholic Churches continued to insist that they alone held authority over the citizens, while the Anabaptists believed that Scripture was the final authority on how they conducted their lives.

Persecution followed. Hans Landis, a leader of the Zurich Anabaptists, was "killed for sedition"; he was beheaded on September 30, 1614, on St. Michael's Day. The followers of Anabaptism fled for their lives, and when gatherings were held, they took place in the dark of remote caves. The Anabaptist movement spread to Germany and then into the Netherlands.

In the following two centuries, literally thousands of Anabaptists were executed by both civil and religious authorities. Anabaptist hunters were commissioned to track down, torture, brand, burn at the stake, drown, imprison, dismember, and generally harass "the religious heretics."

Describing the persecution of the Anabaptists between 1635 and 1645, an observer reported, “It is awful to read and speak about it, how they treated pregnant mothers, women nursing infants, the old, the young, husbands, wives, virgins, and children, and how they took their homes and houses, farms and goods. Yes, and much more, how they made widows and orphans, and without mercy drove them from their homes and scattered them among strangers . . . with some the father died in jail for lack of food and drink.”

The persecutions ebbed and flowed until the early 18th century, and the Anabaptists were able to find refuge in Moravia, Alsace, the Palatinate, the Netherlands, and later in North America. The Martyrs Mirror, a thousand-page book, documents the decades of persecution.

In 1527 the persecution drove the Anabaptist leaders to set down their beliefs as a universal guide for their daily lives:

1.             Literal obedience to the teachings of Christ

2.             The church as a covenant community

3.             Adult, or "believers," baptism after the age of 18

4.             Social separation from the evil world

5.             The exclusion of errant members from communion

6.             The rejection of violence

7.             The refusal to swear oaths

One scholar rendered the core of Anabaptist belief down to three features:

1.             A devoted obedience to the teachings and example of Christ

2.             A new concept of the church as a voluntary body of believers accountable to one another and separate from the larger world

3.             An ethic of love which rejects violence in all spheres of human life

In the Netherlands, Menno Simons became an influential supporter of Anabaptism. Ordained as a Catholic priest in 1524, Simons soon found himself caught between the authority of the Catholic Church and the teachings of the Anabaptists. By 1531 Simons chose the Anabaptist’s interpretation of Scripture; however, he did not officially forsake the Catholic Church until 1536.

Simons rose to the status of a leader, writer, advocate, and preacher for the Anabaptist believers. He rose to such a level of influence that many of his supporters and followers were referred to as Mennonists (Mennonites).

By the late 17th century a group of Anabaptists had moved northward from Switzerland to the Alsace region, in what is now eastern France between the Rhine River and the Vosges Mountains. A theological feud broke out between the Alsatian immigrants and those who stayed behind in Switzerland. The disagreement resulted in the formation of the Amish church in 1693.

The Amish take their name from their founder, Jacob Ammann, a young Anabaptist leader in Alsace. For the most part, the Amish, Mennonites, and Hutterites own their own subsistence farms and small manufacturing businesses, and take care of the majority of their own health issues.

Several factors have left the Amish, Mennonites, and Hutterites open to being victimized by predatory medical doctors. First, the Amish are extremely trusting; their word is their bond, and they would like to believe the same is true of others. Second, they self-insure, and their colony pays their bills in cash. Third, they get the majority of their food supply from the fruits of their own farms and labor.

In 1995 in Lancaster County, PA, an obstetrician/gynecologist brought civil and criminal charges against an Amish midwife for practicing medicine without a license. His purpose was to legally force the Amish community to use medical doctors for the delivery of their babies.

On the appointed day, the little county courthouse was surrounded by hundreds of Amish buggies whose owners had come to support the Amish midwife. They knew that if a precedent were established in Lancaster County, eventually all Amish births in America would require a medical doctor in attendance.

Just before the trial officially opened, a white-haired bishop from the local colony approached the judge and asked if he could make a statement. The wise judge agreed. The bishop said that the midwife in question was his wife, and that if she were found guilty, his daughters would deliver the community's babies; if they were arrested, the neighbors' wives would deliver the community's babies, and so on. He further stated that they had been delivering their own babies for hundreds of years, that they wanted to continue delivering their own babies, and that they would move out of the state rather than have medical doctors deliver their babies.

The judge summarily dismissed all charges, and the Amish, whose buggies had come from outside the county and even outside the state, pulled out and left to spread the good news.

The medical doctors who do get to see an Amish patient enjoy the cash payment for their services, and they enter the Amish communities several times each year to "raise money for research to look for the cure for the genetic diseases" that plague those communities.

Ninety-nine percent or more of all of the "genetic diseases" that plague the Amish, Mennonite, and Hutterite communities are in fact simple congenital nutritional deficiencies of the embryo (for example: congenital deafness, cleft palate, cleft lip, spina bifida, Down's syndrome, cerebral palsy, limb defects, heart defects, hernias, cystic fibrosis, muscular dystrophy, celiac disease, asthma, skin problems, etc.) or acquired nutritional deficiencies later in life (for example: MS, ALS, Parkinson's disease, all four dementias, heart disease, high blood pressure, type 2 diabetes, cataracts, macular degeneration, cancer, arthritis, lupus, IBS, Crohn's disease, etc.).

All of the birth defects are totally preventable with proper preconception nutrition and many of these, including cystic fibrosis and muscular dystrophy, can be reversed later in life.

Certainly all of the diseases acquired as an adult can be prevented and most can be reversed.

The medical community claims that “the reason why there are so many birth defects amongst the Amish, Mennonites and Hutterites is that they commonly marry their relatives; they inbreed, and as a result they have accumulated terrible genes over the years”—all false beliefs.

An example of misinformation for profit comes from the Windows of Hope Foundation in Holmes County, Ohio. Each year the Amish alone raise millions of dollars for the foundation to "do research to find the gene" or treatments for what they are told are genetic diseases. If the Mafia were to do such a thing to a community of people, the Untouchables would arrest them for racketeering under the RICO laws.

Windows of Hope Foundation's list of "Genetic Diseases" (Disease: Actual Cause)

Amyotrophic Lateral Sclerosis 2 (ALS): Free-radical damage to the brain

Hypertrophic Cardiomyopathy: Selenium deficiency

Cerebellar Hypoplasia (cerebral palsy): Congenital zinc deficiency

Congenital Hypothyroidism: Nitrate toxicity

Cystic Fibrosis: Selenium deficiency

Deafness (cochlear): Congenital manganese deficiency

Hutterite Malformation Syndrome: Multiple deficiencies

Limb-Girdle Muscular Dystrophy, Type 2A: Selenium deficiency

Spinal Muscular Atrophy, Type 1: Selenium deficiency

Sudden Infant Death Syndrome: Selenium deficiency

Troyer Syndrome (lower limb stiffness): Multiple deficiencies

One extremely common contributing factor to the high rate of birth defects in the Amish, Mennonite, and Hutterite communities is soil deficiency: analyses of the soil on their farms by state university agriculture departments have revealed deficiencies of certain nutrients, which result in deficiencies of those nutrients in the local crops. Many Amish know they need to supplement the feed of their livestock to avoid losing animals to nutritional-deficiency diseases or having to call a veterinarian out to their farm for a high-priced visit.

But as a rule the Amish, Mennonite, and Hutterite farmers believe that they can get everything they need from their food and that many of their degenerative diseases are genetic, because that is what doctors have taught them.

Another extremely common contributing factor to familial clusters, increased local birth defects, and increased rates of adult-onset disease is gluten intolerance. The Amish, Mennonites, and Hutterites suffer from this at a higher rate than the average American population as a result of a high-grain diet.

According to a 2009 study published by the Mayo Clinic, thirty-one percent of Americans (nearly a full third, or roughly 100 million people) suffer from gluten intolerance. The rate is higher in the Amish, Mennonite, Hutterite, Mormon, and Seventh-day Adventist communities because they consume large quantities of wheat, barley, rye, and oats each day; by some estimates, fifty to seventy-five percent, or even eighty percent, of their populations may suffer from gluten intolerance.

Gluten intolerance, over time, produces a gradual loss of villi from the small intestine, resulting in a significant reduction in absorptive capacity. Absorbing only fifty percent of the nutrients from food that might supply as little as zero to ten percent of one's selenium requirement to begin with will significantly increase the risk of cardiomyopathy heart disease, liver cirrhosis, cancer, cataracts, macular degeneration, MS, dementia, infertility, muscular dystrophy, cystic fibrosis, fibromyalgia, lupus, thyroid disease, and other illnesses, as the worked arithmetic below illustrates.
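
As a rough worked example of that arithmetic (using the fifty percent absorption and the zero-to-ten-percent selenium figures stated above, which are the chapter's illustrative numbers rather than measured values), the effective supply is simply the product of the two fractions:

\[
\text{effective selenium intake} \approx \underbrace{0.50}_{\text{absorption}} \times \underbrace{(0\%\text{ to }10\%)}_{\text{share of requirement in the food}} = 0\% \text{ to } 5\% \text{ of the daily requirement.}
\]

Even in the best case under these assumptions, the person takes in only one-twentieth of the selenium they need, which is the gap the author argues supplementation must close.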

Putting an afflicted individual on a gluten-free diet and supplementing the person correctly with the basic platform of the 90 essential nutrients, plus therapeutic levels of the deficient nutrient, will solve many health problems, spare the community an enormous amount of unnecessary misery, save an enormous amount of unnecessary spending, and add many healthful years to the individual's life.


