Sunday, January 3, 2021

Part 4: The Invisible Rainbow...Porphyrins and the Basis of Life...Irritable Heart

The INVISIBLE RAINBOW 

A History of Electricity and Life 

Arthur Firstenberg

10. 

Porphyrins and the Basis of Life

I see little hope to be able to explain the subtle difference between a normal and a sick cell as long as we do not understand the basic difference between a cat and a stone..... ALBERT SZENT-GYÖRGYI 

STRANGELY ENOUGH, “porphyrin” is not a household word. It is not a sugar, fat, or protein, nor is it a vitamin, mineral, or hormone. But it is more basic to life than any other of life’s components, because without it we would not be able to breathe. Plants could not grow. There would not be any oxygen in the atmosphere. Wherever energy is transformed, wherever electrons flow, there look for porphyrins. When electricity alters nerve conduction, or interferes with the metabolism of our cells, porphyrins are centrally involved. 

As I write this chapter, a dear friend has just died. For the last seven years she had had to live without electricity, hardly ever seeing the sun. She seldom ventured out in the daytime; when she did, she covered herself from head to foot in thick leather clothing, a broad-brimmed leather hat hiding her face, and glasses bearing two layers of dark lenses concealing her eyes. A former dancer who loved music, nature, and the outdoors, Bethany was virtually abandoned by a world in which she no longer belonged. 

Her condition, probably caused by her years of work for a computer company, was a classic example of an illness that has been known to medicine only since 1891, its emergence at that time being one of the side effects of the sudden worldwide expansion of electrical technology. Its connection with electricity was discovered a century later. Although it is now considered an extremely rare genetic disease, affecting as few as one person in fifty thousand, porphyria was originally thought to affect as many as ten percent of the population. Its supposed rarity is due in large part to the ostrich-like behavior of the medical profession after World War II. 

In the late 1940s, medical practitioners were staring at an impossible contradiction. Most synthetic chemicals were known poisons. But one of the legacies of the war was the ability to manufacture products from petroleum, easily and cheaply, to substitute for almost every consumer product imaginable. Now, thanks to the fledgling petrochemical industry, bringing us “Better Living Through Chemistry,” synthetic chemicals were going to be literally everywhere. We were going to be wearing them, sleeping on them, washing our clothes, our hair, our dishes, and our homes with them, bathing in them, insulating our houses with them, carpeting our floors with them, spraying our crops, our lawns, and our pets with them, preserving our food with them, coating our cookware with them, packing our groceries in them, moisturizing our skin with them, and perfuming our bodies with them. 

The medical profession had two choices. It could have attempted to study the health effects, singly and in combination, of the hundreds of thousands of new chemicals that were kaleidoscoping over our world, a virtually impossible task. The attempt itself would have put the profession on a collision course with the mushrooming petrochemical industry, threatening the banning of most new chemicals and the strangling of the economic boom of the next two decades. 

The other alternative was for the profession to bury its collective head in the sand and pretend that the world’s population was not actually going to become poisoned. 

Environmental medicine was born as a medical specialty in 1951, founded by Dr. Theron Randolph.1 It had to be created: the scale of the poisoning was too great to go completely ignored. The sheer numbers of sickened patients, abandoned by mainstream medicine, produced an urgent need for practitioners trained to recognize at least some of the effects of the new chemicals and to treat the resulting diseases. But the specialty was ignored by the mainstream as though it didn’t exist, its practitioners ostracized by the American Medical Association. When I attended medical school from 1978 to 1982, environmental medicine wasn’t even on the curriculum. Chemical sensitivity, the unfortunate name that has been given to the millions of poisoned patients, was never mentioned in school. Neither was porphyria, arguably a more appropriate name. It still isn’t mentioned, not in any medical school in the United States. 

Heightened sensitivity to chemicals, we recall, was first described by New York physician George Miller Beard, who considered it a symptom of a new disease. The initial electrification of society through telegraph wires brought with it the constellation of health complaints known as neurasthenia, two of which were a tendency to develop allergies and a drastically reduced tolerance for alcohol and drugs. 

By the late 1880s, insomnia, another prominent symptom of neurasthenia, had become so rampant in western civilization that the sale of sleeping pills and potions became big business, with new formulations coming on the market almost every year. Bromides, paraldehyde, chloral, amyl hydrate, urethane, hypnol, somnal, cannabinon, and other hypnotics flew off pharmacists’ shelves to satisfy the frustrated urge to sleep—and the addiction that so often followed the long term use of these drugs. 

In 1888, one more drug was added to the list. Sulfonal was a sleeping medication that had a reputation for its prompt effect, its non-addictive nature, and its relative lack of side effects. There was just one problem, which only became widely known after three years of its popularity: it killed people. 

But its effects were quirky, unexpected. Nine people could take sulfonal, even in large doses and for a long time, with no untoward effects, but the tenth person, sometimes after only a few or even one small dose, would become critically ill. He or she would typically be confused, so weak as to be unable to walk, constipated, with pain in the abdomen, sometimes with a skin rash, and reddish urine often described as the color of port wine. The reactions were idiosyncratic, liable to affect almost any organ, and the patients were apt to die of heart failure without warning. Between four and twenty percent of the general population were reported to be subject to such side effects from taking sulfonal.2 

During the ensuing decades the chemistry of this surprising disease was worked out. 

Porphyrins are light-sensitive pigments that play pivotal roles in the economy of both plants and animals, and in the ecology of planet Earth. In plants a porphyrin bound to magnesium is the pigment called chlorophyll, which makes plants green and is responsible for photosynthesis. In animals an almost identical molecule bound to iron is the pigment called heme, the essential part of hemoglobin that makes blood red and enables it to carry oxygen. It is also the essential part of myoglobin, the protein that makes muscles red and delivers oxygen from our blood to our muscle cells. Heme is also the central component of cytochrome c and cytochrome oxidase, enzymes contained in every cell of every plant, animal, and bacterium, which transport electrons from nutrients to oxygen so that our cells can extract energy. And heme is the main component of the cytochrome P-450 enzymes in our liver that detoxify environmental chemicals for us by oxidizing them. 

In other words, porphyrins are the very special molecules that interface between oxygen and life. They are responsible for the creation, maintenance, and recycling of all of the oxygen in our atmosphere: they make possible the release of oxygen from carbon dioxide by plants, the extraction of oxygen back out of the air by both plants and animals, and the use of that oxygen by living things to burn carbohydrates, fats, and proteins for energy. The high reactivity of these molecules, which makes them transformers of energy, and their affinity for heavy metals, also makes them toxic when they accumulate in excess in the body, as happens in the disease called porphyria—a disease that is not really a disease at all, but a genetic trait, an inborn sensitivity to environmental pollution. 

Our cells manufacture heme from other porphyrins and porphyrin precursors in a series of eight steps, each catalyzed by a different enzyme. Like workers on an assembly line, each enzyme has to work at the same rate as all the others in order to keep up with the demand for the final product, heme. A slowdown by any one enzyme creates a bottleneck, and the porphyrins and precursors that accumulate behind the bottleneck get deposited all over the body, causing disease. Or if the first enzyme works harder than the rest, it produces precursors faster than the enzymes down the line can handle, with the same result. Their accumulation in the skin can cause mild to disfiguring skin lesions and mild to severe light sensitivity. Their accumulation in the nervous system causes neurological illness, and their accumulation in other organs causes corresponding illness. And when excess porphyrins spill into the urine, it takes on the color of port wine. 
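
The bottleneck idea can be made concrete with a toy model. The sketch below, in Python, is a minimal, hypothetical simulation (not from the book, and not a model of real enzyme kinetics): eight "enzymes" in a row each pass material along at a fixed capacity, and crippling any single one makes the intermediate just upstream of it pile up while the rest of the line sits idle.

```python
# A toy eight-step "assembly line," loosely analogous to the heme pathway.
# All names, rates, and units are hypothetical and for illustration only.

def simulate(rates, inflow=1.0, steps=1000, dt=0.01):
    """rates: conversion capacity of each enzyme (amount per unit time).
    Returns the pool of intermediate waiting in front of each enzyme."""
    pools = [0.0] * len(rates)
    for _ in range(steps):
        feed = inflow                        # precursor entering step 1
        for i, rate in enumerate(rates):
            pools[i] += feed * dt            # material arriving at this step
            converted = min(pools[i], rate * dt)
            pools[i] -= converted            # what this enzyme passes on
            feed = converted / dt            # flow handed to the next enzyme
    return pools

healthy = simulate([2.0] * 8)                                   # every enzyme keeps up
blocked = simulate([2.0, 2.0, 2.0, 0.2, 2.0, 2.0, 2.0, 2.0])    # enzyme 4 poisoned

print("healthy:", [round(p, 2) for p in healthy])   # all pools stay near zero
print("blocked:", [round(p, 2) for p in blocked])   # large pile-up before enzyme 4
```

In the "blocked" run the intermediate ahead of the slowed enzyme keeps growing while everything downstream is starved, which is the picture the chapter paints of porphyrins and precursors accumulating in skin, nerves, and other organs.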

Because porphyria is assumed to be so rare, it is almost always misdiagnosed as some other disease. It is fairly called “the little imitator” because it can affect so many organs and mimic so many other conditions. Since patients usually feel so much sicker than they look, they are sometimes wrongly thought to have psychiatric disorders and too often wind up on mental wards. And since most people don’t carefully examine their own urine, they usually fail to notice its reddish hue, particularly since the color may be evident only during severe disabling attacks. 

The enzymes of the heme pathway are among the most sensitive elements of the body to environmental toxins. Porphyria, therefore, is a response to environmental pollution and was indeed extremely rare in an unpolluted world. Except for one severe, disfiguring congenital form, of which only a few hundred cases are known in the world, porphyrin enzyme deficiencies do not normally cause disease at all. Human beings are genetically diverse, and in times past most people with relatively lower levels of one or more porphyrin enzymes were simply more sensitive to their environment. In an unpolluted world this was a survival advantage, allowing the possessors of this trait to easily avoid places and things that might do them harm. But in a world in which toxic chemicals are inescapable, the porphyrin pathway is to some degree always stressed, and only those with high enough enzyme levels tolerate the pollution well. Sensitivity has become a curse. 

Because of the way it was discovered, and the lack of synthetic chemicals in the environment at that time, porphyria became known as a rare disease that was triggered in genetically susceptible people by certain drugs, such as sulfonal and barbiturates, which these patients had to avoid. It was not until another century had passed, in the early 1990s, that Dr. William E. Morton, professor of occupational and environmental medicine at Oregon Health Sciences University, realized that because ordinary synthetic chemicals were far more widespread in the modern environment than pharmaceuticals, they had to be the most common triggers of porphyric attacks. Morton proposed that the controversial disease called multiple chemical sensitivity (MCS) was in most cases identical with one or more forms of porphyria. And when he began testing his MCS patients he found that, indeed, 90 percent of them were deficient in one or more porphyrin enzymes. He then investigated a number of their family trees, looking for the same trait, and succeeded in demonstrating a genetic basis for MCS—something no one had attempted before because MCS had never before been connected to a testable biological marker.3 Morton also found that most people with electrical sensitivity had porphyrin enzyme deficiencies, and that electrical and chemical sensitivities appeared to be manifestations of the same disease. Porphyria, Morton showed, is not the extremely rare illness it is currently thought to be, but has to affect at least five to ten percent of the world’s population.

Morton was courageous, because the rare-disease world of porphyria had come to be dominated by a handful of clinicians who controlled virtually all research and scholarship in their small, inbred field. They tended to diagnose porphyria only during acute attacks with severe neurological symptoms and to exclude cases of milder, smoldering illness. They generally would not make the diagnosis unless porphyrin excretion in urine or stool was at least five to ten times normal. “This makes no sense,” wrote Morton in 1995, “and would be analogous to restricting the diagnosis of diabetes mellitus to those who have ketoacidosis or restricting the diagnosis of coronary artery disease to those who have myocardial infarction.”

The higher numbers reported by Morton agree with the numbers reported over a century ago—the proportion of the population that became ill when they took the sleeping medication sulfonal. They are consistent with the finding, in the 1960s, of “mauve factor,” a lavender-staining chemical, not only in the urine of patients diagnosed with porphyria, but in the urine of five to ten percent of the general population.6 Mauve factor was eventually identified as a breakdown product of porphobilinogen, one of the porphyrin precursors.7 Morton also found, in agreement with recent reports from England, the Netherlands, Germany, and Russia, that persistent neurological problems occur during the chronic, smoldering phase of every type of porphyria—even those types which were previously supposed to cause only skin lesions.8 

Hans Günther, the German doctor who, in 1911, gave porphyria its name, stated that “such individuals are neuropathic and suffer from insomnia and nervous irritability.”9 Morton has brought us back to the original view of porphyria: it is not only a fairly common disease but exists most often in a chronic form with comparatively mild symptoms. And its principal cause is the synthetic chemicals and electromagnetic fields that pollute our modern environment. 

Porphyrins are central to our story not only because of a disease named porphyria, which affects a few percent of the population, but because of the part porphyrins play in the modern epidemics of heart disease, cancer, and diabetes, which affect half the world, and because their very existence is a reminder of the role of electricity in life itself, a role which a few courageous scientists have slowly elucidated. 

As a child, Albert Szent-Györgyi (pronounced approximately like “Saint Georgie”) hated books and needed a tutor’s help to pass his exams. But later, having graduated from Budapest Medical School in 1917, he went on to become one of the world’s greatest geniuses in the field of biochemistry. In 1929 he discovered Vitamin C, and during the next few years he worked out most of the steps in cellular respiration, a system now known as the Krebs cycle. For these two discoveries he was awarded the Nobel Prize in Physiology or Medicine in 1937. He then spent the next two decades figuring out how muscles function. After emigrating to the United States and settling at Woods Hole, Massachusetts, he received the Albert Lasker Award of the American Heart Association in 1954 for his work on muscles.

 

Albert Szent-Györgyi, M.D., Ph.D. (1893-1986) 

But perhaps his greatest insight is one for which he is least known, although he devoted almost half his life to the subject. For on March 12, 1941, in a lecture delivered in Budapest, he boldly stood up before his peers and suggested to them that the discipline of biochemistry was obsolete and should be brought into the twentieth century. Living organisms, he told them, were not simply bags of water in which molecules floated like tiny billiard balls, forming chemical bonds with other billiard balls with which they happened to collide. Quantum theory, he said, had made such old ideas invalid; biologists needed to study solid state physics. 

In his own specialty, although he had worked out the structures of the molecules involved in muscular contraction, he could not begin to fathom why they had those particular structures, nor how the molecules communicated with one another to coordinate their activities. He saw such unsolved problems everywhere he looked in biology. “One of my difficulties within protein chemistry,” he bluntly told his colleagues, “was that I could not imagine how such a protein molecule can ‘live.’ Even the most involved protein structural formula looks ‘stupid,’ if I may say so.” 

The phenomena that had forced Szent-Györgyi to face these questions were the porphyrin-based systems of life. He pointed out that in plants, 2,500 chlorophyll molecules form a single functional unit, and that in dim light at least 1,000 chlorophyll molecules have to cooperate simultaneously in order to split one molecule of carbon dioxide and create one molecule of oxygen. 

He spoke about the “enzymes of oxidation”—the cytochromes in our cells—and wondered, again, how the prevailing model could be correct. How could a whole series of large protein molecules be arranged geometrically so that electrons could wander directly from one to the other in a precise sequence? “Even if we could devise such an arrangement,” he said, “it would still be incomprehensible how the energy liberated by the passing of an electron from one substance to the other, viz., from one iron atom to the other, could do anything useful.” 

Szent-Györgyi proposed that organisms are alive because thousands of molecules form single systems with shared energy levels, such as physicists were describing in crystals. Electrons don’t have to pass directly from one molecule to another, he said; instead of being attached to only one or two atoms, electrons are mobile, belong to the whole system, and transmit energy and information over large distances. In other words, the stuff of life is not billiard balls but liquid crystals and semiconductors. 

Szent-Györgyi’s sin was not that he was incorrect. He wasn’t. It was his failure to respect the old animosity. Electricity and life were long divorced; the industrial revolution had been running full bore for a century and a half. Millions of miles of electric wires clothed the earth, exhaling electric fields that permeated all living things. Thousands of radio stations blanketed the very air with electromagnetic oscillations that one could not avoid. Skin and bones, nerves and muscles were not allowed to be influenced by them. Proteins were not permitted to be semiconductors. The threat to industry, economics, and modern culture would be too great. 

So biochemists continued to think of proteins, lipids, and DNA as though they were little marbles drifting in a watery solution and colliding with one another at random. They even thought of the nervous system this way. When forced to, they admitted parts of quantum theory, but only on a limited basis. Biological molecules were still only permitted to interact with their immediate neighbors, not to act at a distance. It was okay to acknowledge modern physics only that much, like opening a small hole in a dam for knowledge to leak through one drop at a time, while the main structure is reinforced lest a flood demolish it. 

Old knowledge about chemical bonds and enzymes in a water solution now had to coexist with new models of electron transport chains, which had to be invented to explain the phenomena most central to life: photosynthesis and respiration. Large porphyrin-containing protein molecules no longer had to move and physically interact with one another in order for anything useful to happen. These molecules could stay put and electrons could shuttle between them instead. Biochemistry was becoming that much more alive. But it still had a long way to go. For even in the new models, electrons were constrained to move only, like little messenger boys, between one protein molecule and its immediate neighbor. They could cross the street, so to speak, but they couldn’t travel down a highway to a distant town. Organisms were still pictured essentially as bags of water containing very complex solutions of chemicals. 

The laws of chemistry had explained a lot about metabolic processes, and electron transport now explained even more, but there was not yet an organizing principle. Elephants grow from tiny embryos, which grow from single brainless cells. Salamanders regenerate perfect limbs. When we are cut, or break a bone, cells and organs throughout our body mobilize and coordinate their activities to repair the damage. How does the information travel? How, borrowing Szent-Györgyi’s words, do protein molecules “live”? 

Despite Szent-Györgyi’s sin, his predictions have proven correct. Molecules in cells do not drift at random to collide with one another. Most are firmly anchored to membranes. The water inside cells is highly structured and does not resemble the free-flowing liquid that sloshes around in a glass before you drink it. Piezoelectricity, the property of crystals that transforms mechanical stress into electrical voltages and vice versa and makes them useful in electronic products, has been found in cellulose, collagen, horn, bone, wool, wood, tendon, blood vessel walls, muscle, nerve, fibrin, DNA, and every type of protein examined.10 In other words—something most biologists have been denying for two centuries—electricity is essential to biology. 

Szent-Györgyi was not the first to challenge conventional thinking. As early as 1908, Otto Lehmann, noticing the close resemblance between the shapes of known liquid crystals and many biological structures, had proposed that the very basis of life was the liquid crystalline state. Liquid crystals, like organisms, had the ability to grow from seeds; to heal wounds; to consume other substances, or other crystals; to be poisoned; to form membranes, spheres, rods, filaments, and helical structures; to divide; to “mate” with other forms, resulting in offspring that had characteristics of both parents; and to transform chemical energy into mechanical motion.

After Szent-Györgyi’s daring Budapest lecture, others pursued his ideas. In 1949, Dutch researcher E. Katz explained how electrons could move through a semiconducting chlorophyll crystal during photosynthesis. In 1955, James Bassham and Melvin Calvin, working for the U.S. Atomic Energy Commission, elaborated on this theory. In 1956, William Arnold, at Oak Ridge National Laboratory, confirmed experimentally that dried chloroplasts—the particles in green plants that contain chlorophyll—have many of the properties of semiconductors. In 1959, Daniel Eley, at Nottingham University, proved that dried proteins, amino acids, and porphyrins are indeed semiconductors. In 1962, Roderick Clayton, also at Oak Ridge, found that photosynthetic tissues in living plants behave like semiconductors. In 1970, Alan Adler, at the New England Institute, showed that thin films of porphyrins do also. In the 1970s, biochemist Freeman Cope, at the United States Naval Air Development Center in Warminster, Pennsylvania, emphasized the importance of solid state physics for a true understanding of biology, as did biologist Allan Frey, the most active American researcher into the effects of microwave radiation on the nervous system at that time. Ling Wei, professor of electrical engineering at the University of Waterloo in Ontario, stated baldly that a nerve axon is an electrical transmission line and that its membrane is an ionic transistor. He said that the equivalent circuitry “can be found in any electronics book today,” and that “one can easily derive the nerve behavior from semiconductor physics.” When he did so, his equations predicted some of the properties of nerves that were, and still are, puzzling to physiologists. 

In 1979, a young professor of bioelectronics at the University of Edinburgh published a book titled Dielectric and Electronic Properties of Biological Materials. The earlier work of Eley and Arnold had been criticized because the activation energies they had measured—the amount of energy necessary to make proteins conduct electricity—seemed to be too large. Supposedly there was not enough energy available in living organisms to lift electrons into the conduction band. Proteins might be made to conduct electricity in the laboratory, said the critics, but this could not happen in the real world. Eley and Arnold, however, had done all their work on dried proteins, not living ones. The young professor, Ronald Pethig, pointed out the obvious: water is essential to life, and proteins become more conductive when water is added to them. In fact, studies had shown that adding only 7.5 percent water increased the conductivity of many proteins ten thousandfold or more! Water, he proposed, is an electron donor that “dopes” proteins and turns them into good semiconductors. 

The electronic role of living water had already been noted by others. Physiologist Gilbert Ling, realizing that cell water is a gel and not a liquid, developed his theory of the electronic nature of cells in 1962. More recently, Gerald Pollack, professor of bioengineering at the University of Washington, has taken up this line of investigation. He was inspired by Ling when they met at a conference in the mid-1980s. Pollack’s most recent book, The Fourth Phase of Water: Beyond Solid, Liquid, and Vapor, was published in 2011. 

The late geneticist Mae-Wan Ho, in London, has clothed Szent-Györgyi’s ideas in garments that all can see. She developed a technique using a polarizing microscope that displayed, in vivid color, the interference patterns generated by the liquid crystalline structures that make up living creatures. The first animal she put under her microscope was a tiny worm—a fruit fly larva. “As it crawls along, it weaves its head from side to side flashing jaw muscles in blue and orange stripes on a magenta background,” she wrote in 1993 in her book, The Rainbow and the Worm: The Physics of Organisms. She and many others have urged that the liquid crystalline properties of our cells and tissues not only teach us about our chemistry, but have something special to tell us about life itself. 

Włodzimierz Sedlak, pursuing Szent-Györgyi’s ideas in Poland, developed the discipline of bioelectronics within the Catholic University of Lublin during the 1960s. Life, he said, is not only a collection of organic compounds undergoing chemical reactions, but those chemical reactions are coordinated with electronic processes that take place in an environment of protein semiconductors. Other scientists working at the same university are continuing to develop this discipline theoretically and experimentally today. Marian Wnuk has focused on porphyrins as key to the evolution of life. He states that the principal function of porphyrin systems is an electronic one. Józef Zon, head of the Department of Theoretical Biology at the University, has focused on the electronic properties of biological membranes. 

Oddly enough, the use of porphyrins in electronic products instructs us about biology. Adding thin films of porphyrins to commercially available photovoltaic cells increases the voltage, current, and total power output.11 Prototype solar cells based on porphyrins have been produced,12 as have organic transistors based on porphyrins.13 

The properties that make porphyrins suitable for electronics are the same properties that make us alive. As everyone knows, playing with fire is dangerous; oxidation releases tremendous energy quickly and violently. How, then, do living organisms make use of oxygen? How do we manage to breathe and metabolize our food without being destroyed in a conflagration? The secret lies in the highly pigmented, fluorescent molecule called porphyrin. Strong pigments are always efficient energy absorbers, and if they are also fluorescent, they are good energy transmitters as well. As Szent-Györgyi taught us in his 1957 book, Bioenergetics, “fluorescence thus tells us that the molecule is capable of accepting energy and does not dissipate it. These are two qualities any molecule must have to be able to act as an energy transmitter.”14 

Porphyrins are more efficient energy transmitters than any other of life’s components. In technical terms, their ionization potential is low, and their electron affinity high. They are therefore capable of transmitting large amounts of energy rapidly in small steps, one low-energy electron at a time. They can even transmit energy electronically from oxygen to other molecules, instead of dissipating that energy as heat and burning up. That’s why breathing is possible. On the other side of the great cycle of life, porphyrins in plants absorb the energy of sunlight and transport electrons that change carbon dioxide and water into carbohydrates and oxygen. 

Porphyrins, the Nervous System, and the Environment 

There is one more place these surprising molecules are found: in the nervous system, the organ where electrons flow. In fact, in mammals, the central nervous system is the only organ that shines with the red fluorescent glow of porphyrins when examined under ultraviolet light. These porphyrins, too, perform a function that is basic to life. They occur, however, in a location where one might least expect to find them—not in the neurons themselves, the cells that carry messages from our five senses to our brain, but in the myelin sheaths that envelop them—the sheaths whose role has been almost totally neglected by researchers and whose breakdown causes one of the most common and least understood neurological diseases of our time: multiple sclerosis. It was orthopedic surgeon Robert O. Becker who, in the 1970s, discovered that myelin sheaths are really electrical transmission lines. 

In a state of health the myelin sheaths contain primarily two types of porphyrins—coproporphyrin III and protoporphyrin—in a ratio of two to one, complexed with zinc. The exact composition is crucial. When environmental chemicals poison the porphyrin pathway, excess porphyrins, bound to heavy metals, build up in the nervous system as in the rest of the body. This disrupts the myelin sheaths and changes their conductivity, which, in turn, alters the excitability of the nerves they surround. The entire nervous system becomes hyperreactive to stimuli of all kinds, including electromagnetic fields. 

The cells surrounding our nerves were hardly even studied until recently. In the nineteenth century, anatomists, finding no apparent function for them, supposed that they must have only a “nutritive” and “supportive” role, protecting the “real” nerves that they surrounded. They named them glial cells after the Greek word for “glue.” The discovery of the action potential, which transmits signals along each neuron, and of neurotransmitters, the chemicals that carry signals from one neuron to the next, had ended the discussion. From then on, glial cells were thought to be little more than packing material. Most biologists ignored the fact, discovered by German physician Rudolf Virchow in 1854, that myelin is a liquid crystal. They did not think it was relevant. 

However, Becker, working from the 1960s to the early 1980s and writing The Body Electric in 1985, found quite another function for the myelin-containing cells and took another step toward restoring electricity to its proper role in the functioning of living things. 

When he began his research in 1958, Becker was simply looking for a solution to orthopedists’ greatest unsolved problem: nonunion of fractures. Occasionally, despite the best medical care, a bone would refuse to heal. Surgeons, believing that only chemical processes were at work, simply scraped the fracture surfaces, devised complicated plates and screws to hold the bone ends rigidly together, and hoped for the best. Where this did not work, limbs had to be amputated. “These approaches seemed superficial to me,” Becker recalled. “I doubted that we would ever understand the failure to heal unless we truly understood healing itself.”15 

Becker began to pursue the ideas of Albert Szent-Györgyi, thinking that if proteins were semiconductors, maybe bones were too, and maybe electron flow was the secret to the healing of fractures. Ultimately he proved that this was correct. Bones were not just made of collagen and apatite, as he was taught in medical school; they were also doped with tiny amounts of copper, much as silicon wafers in computers are doped with tiny amounts of boron or aluminum. The presence of greater or lesser amounts of metal atoms regulates the electrical conductivity of the circuitry—in bones as in computers. With this understanding, Becker designed machines that delivered minuscule electric currents—as small as 100 trillionths of an ampere—to fractured bones to stimulate the healing process, with great success: his devices were the forerunners of machines that are used today by orthopedic surgeons in hospitals throughout the world. 

Becker’s work on the nervous system is less well known. As already mentioned, the functioning of neurons had been worked out, up to a point, in the nineteenth century. They transmit enormous amounts of information to and from the brain at high speed, including data about one’s environment, and instructions to one’s muscles. They do this via the familiar action potential and neurotransmitters. And since the action potential is an all-or-nothing event, neuron signaling is an on-off digital system like today’s computers. But Becker thought that this could not explain the most important properties of life; there had to be a slower, more primitive, and more sensitive analog system that regulates growth and healing, that we inherited from lower forms of life—a system that might be related to the acupuncture meridians of Chinese medicine, which western medicine also made no attempt to understand. 

A number of researchers before Becker, among them Harold Saxton Burr at Yale, Lester Barth at Columbia, Elmer Lund at the University of Texas, Ralph Gerard and Benjamin Libet at the University of Chicago, Theodore Bullock at U.C.L.A., and William Burge at the University of Illinois, had measured DC voltages on the surfaces of living organisms, both plants and animals, and embryos. Most biologists paid no attention. After all, certain DC currents, called “currents of injury,” were well known, and were thought to be well understood. They had been discovered by Carlo Matteucci as long ago as the 1830s. Biologists had assumed, for a century, that these currents were meaningless artifacts, caused simply by ions leaking out of wounds. But when, in the 1930s and 1940s, a growing number of scientists, using better techniques, began to find DC voltages on all surfaces of all living things, and not just on the surfaces of wounds, a few began to wonder whether those “currents of injury” just might be a bit more important than they had learned in school. 

The accumulated work of these scientists showed that trees,16 and probably all plants, are polarized electrically, positive to negative, from leaves to roots, and that animals are similarly polarized from head to feet. In humans potential differences of up to 150 millivolts or more could sometimes be measured between one part of the body and another.17 

Becker was the first to map the charge distribution in an animal in some detail, accomplishing this with salamanders in 1960. The places of greatest positive voltage, he found, as measured from the back of the animal, were the center of the head, the upper spine over the heart, and the lumbosacral plexus at the lower end of the spine, while the places of greatest negative voltage were the four feet and the end of the tail. In addition, the head of an alert animal was polarized from back to front, as though an electric current were always flowing in one direction through the middle of its brain. However, when an animal was anesthetized the voltage diminished as the anesthetic took effect, and then the head reversed polarity when the animal lost consciousness. This suggested to him a novel method of inducing anesthesia, and when Becker tried it, it worked like a charm. In the salamander, at least, passing an electric current of only 30 millionths of an ampere from front to back through the center of its head caused the animal to become immediately unconscious and unresponsive to pain. When the current was turned off, the animal promptly woke up. He observed the same back-to-front polarity in alert humans, and the same reversal during sleep and anesthesia.18 

While Becker did not try it himself, even tinier electric currents have been used in psychiatry to put humans to sleep since about 1950 in Russia, Eastern Europe, and Asian countries that were once part of the Soviet Union. In these treatments, current is sent from front to back through the midline of the head, reversing the normal polarity of the brain, just as Becker did with his salamanders. The first publications describing this procedure specified short pulses of 10 to 15 microamperes each, 5 to 25 times per second, which gave an average current of only about 30 billionths of an ampere. Although larger currents will cause immediate unconsciousness in a human, just like in a salamander, those tiny currents are all that is necessary to put a person to sleep. This technique, called “electrosleep,” has been used for over half a century to treat mental disorders, including manic-depressive illness and schizophrenia, in that part of the world.19 
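
As a rough cross-check of these figures, the time-averaged value of a pulsed current is the pulse amplitude multiplied by the fraction of time the current is actually flowing. The pulse width is not stated in the text, so the short sketch below (a back-of-envelope calculation, not from the book) simply solves for the pulse width that would make the quoted amplitudes, repetition rates, and 30-billionths-of-an-ampere average mutually consistent.

```python
# Back-of-envelope check of the electrosleep figures quoted above.
# average current = pulse amplitude * pulse width * pulses per second
# The pulse width is not given in the text, so we solve for it here.

i_avg = 30e-9                            # ~30 billionths of an ampere, average
for i_pulse in (10e-6, 15e-6):           # 10-15 microampere pulses
    for rate in (5, 25):                 # 5-25 pulses per second
        width = i_avg / (i_pulse * rate)
        print(f"{i_pulse * 1e6:.0f} uA at {rate}/s -> pulse width ~{width * 1e3:.2f} ms")

# The implied pulses are sub-millisecond (roughly 0.08-0.60 ms), which is
# why the time-averaged current works out to only billionths of an ampere.
```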

The normal electrical potentials of the body are also necessary for the perception of pain. The abolition of pain in a person’s arm, for example, whether caused by a chemical anesthetic, hypnosis, or acupuncture, is accompanied by a reversal of electrical polarity in that arm.20 

By the 1970s it had become clear to the researchers who were looking into such things that the DC potentials they were measuring played a key role in organizing living structures. They were necessary for growth and development.21 They were also needed for regeneration and healing. 

Tweedy John Todd demonstrated as long ago as 1823 that a salamander cannot regenerate a severed leg if you destroy that leg’s nerve supply. So for a century and a half, scientists searched for the chemical signal that must be transmitted by nerves to trigger growth. No one ever found one. Finally, embryologist Sylvan Meryl Rose, in the mid-1970s at Tulane University, proposed that maybe there was no such chemical, and that the long-sought signal was purely electrical. Could the currents of injury, he asked, that had previously been considered mere artifacts, themselves play a central role in healing? 

Rose found that they did. He recorded the patterns of the currents in the wound stumps of salamanders as they regenerated their severed limbs. The end of the stump, he found, was always strongly positive during the first few days after injury, then reversed polarity to become strongly negative for the next couple of weeks, finally reestablishing the weakly negative voltage found on all healthy salamander legs. Rose then found that salamanders would regenerate their legs normally, even without a nerve supply, provided he carefully duplicated, with an artificial source of current, the electrical patterns of healing that he had observed. Regeneration would not take place if the polarity, magnitude, or sequence of currents were not correct. 

Having established that the signals that trigger regeneration are electrical and not chemical in nature, these scientists were in for yet another surprise. For the DC potentials of the body that, as we have seen, are necessary not just for regeneration but for growth, healing, pain perception, and even consciousness, seemed to be generated not in the “real” nerves but in the myelin-containing cells that surround them—the cells that also contain porphyrins. Proof came by accident while Becker was again working on the problem of why some bone fractures fail to mend. Since he had already learned that nerves were essential to healing, he tried, in the early 1970s, to create an animal model for fractures that do not heal by severing the nerve supply to a series of rats’ legs before breaking them. 

To his surprise, the leg bones still healed normally—with a six-day delay. Yet six days was not nearly enough time for a rat to regenerate a severed nerve. Could bones be an exception, he wondered, to the rule that nerves are needed for healing? “Then we took a more detailed look at the specimens,” wrote Becker. “We found that the Schwann cell sheaths were growing across the gap during the six-day delay. As soon as the perineural sleeve was mended, the bones began to heal normally, indicating that at least the healing, or output, signal, was being carried by the sheath rather than the nerve itself. The cells that biologists had considered merely insulation turned out to be the real wires.”22 It was the Schwann cells, Becker concluded—the myelin-containing glial cells—and not the neurons they surrounded, that carried the currents that determined growth and healing. And in a much earlier study Becker had already shown that the DC currents that flow along salamander legs, and presumably along the limbs and bodies of all higher animals, are of semiconducting type.23 

Which brings us full circle. The myelin sheaths—the liquid crystalline sleeves surrounding our nerves—contain semiconducting porphyrins,24 doped with heavy metal atoms, probably zinc.25 It was Harvey Solomon and Frank Figge who, in 1958, first proposed that these porphyrins must play an important role in nerve conduction. The implications of this are especially important for people with chemical and electromagnetic sensitivities. Those of us who, genetically, have relatively less of one or more porphyrin enzymes, may have a “nervous temperament” because our myelin is doped with slightly more zinc than our neighbors’ and is more easily disturbed by the electromagnetic fields (EMFs) around us. Toxic chemicals and EMFs are therefore synergistic: exposure to toxins further disrupts the porphyrin pathway, causing the accumulation of more porphyrins and their precursors, rendering the myelin and the nerves they surround still more sensitive to EMFs. According to more recent research, a large excess of porphyrin precursors can prevent the synthesis of myelin and break apart the myelin sheaths, leaving the neurons they surround naked and exposed.26 

The true situation is undoubtedly more complex than this, but to put all the pieces correctly together will require researchers who are willing to step outside our cultural blinders and acknowledge the existence of electrical transmission lines in the nervous systems of animals. Already, mainstream science has taken the first step by finally acknowledging that glial cells are much more than packing material.27 In fact, a discovery by a team of researchers at the University of Genoa is currently revolutionizing neurology. Their discovery is related to breathing.28 

Everyone knows that the brain consumes more oxygen than any other organ, and that if a person stops breathing, the brain is the first organ to die. What the Italian team confirmed in 2009 is that as much as ninety percent of that oxygen is consumed not by the brain’s nerve cells, but by the myelin sheaths that surround them. Traditional wisdom has it that the consumption of oxygen for energy takes place only in tiny bodies inside cells called mitochondria. That wisdom has now been turned on its head. In the nervous system, at least, most of the oxygen appears to be consumed in the multiple layers of fatty substance called myelin, which contain no mitochondria at all, but which forty-year-old research showed contains non-heme porphyrins and is semiconducting. Some scientists are even beginning to say that the myelin sheath is, in effect, itself a giant mitochondrion, without which the huge oxygen needs of our brain and nervous system could never be met. But to truly make sense of this collection of facts will also require the recognition that both the neurons, as Ling Wei proposed, and the myelin sheaths that envelop them, as Robert Becker proposed, work together to form a complex and elegant electrical transmission line system, subject to electrical interference just like transmission lines built by human engineers. 

The exquisite sensitivity of even the normal nervous system to electromagnetic fields was proven in 1956 by zoologists Carlo Terzuolo and Theodore Bullock—and then ignored by everyone since. In fact, even Terzuolo and Bullock were astonished by the results. Experimenting on crayfish, they found that although a substantial amount of electric current was needed to cause a previously silent nerve to fire, incredibly tiny currents could cause an already firing nerve to alter its firing rate tremendously. A current of only 36 billionths of an ampere was enough to increase or decrease a nerve’s rate of firing by five to ten percent. And a current of 150 billionths of an ampere—thousands of times less than is widely assumed, still today, by developers of modern safety codes, to have any biological effect whatever—would actually double the rate of firing, or silence the nerve altogether. Whether it increased or decreased the activity of the nerve depended only on the direction in which the current was applied to the nerve. 

The Zinc Connection 

The role of zinc was discovered in the 1950s by Henry Peters, a porphyrinologist at the University of Wisconsin Medical School. Like Morton after him, Peters was impressed by the number of people who seemed to have mild or latent porphyria, and thought the trait was far more prevalent than was commonly believed.29 

Peters discovered that his porphyria patients who had neurological symptoms were excreting very large amounts of zinc in their urine—up to 36 times normal. In fact, their symptoms correlated better with the levels of zinc in their urine than with the levels of porphyrins they were excreting. With this information, Peters did the most logical thing: in scores of patients, he tried chelation to reduce the body’s load of zinc, and it worked! In patient after patient, when courses of treatment with BAL or EDTA had reduced the level of zinc in their urine to normal, their illness resolved, and the patient remained symptom-free for up to several years.30 Contrary to conventional wisdom, which assumes that zinc deficiency is common and should be supplemented, Peters’ patients, because of their genetics and their polluted environment, were actually zinc-poisoned—as at least five to ten percent of the population, with hidden porphyria, may also be. 

For the next forty years Peters found tremendous resistance to his idea that zinc toxicity was at all common, but evidence is now accumulating that this is so. Large amounts of zinc are in fact entering our environment, our homes, and our bodies from industrial processes, galvanized metals, and even the fillings in our teeth. Zinc is in denture cream and in motor oil. There is so much zinc in automobile tires that their constant erosion makes zinc one of the main components of road dust—which washes into our streams, rivers, and reservoirs, eventually getting into our drinking water.31 Wondering whether this was perhaps poisoning us all, a group of scientists from Brookhaven National Laboratory, the United States Geological Survey, and several universities raised rats on water supplemented with a low level of zinc. By three months of age, the rats already had memory deficits. By nine months of age, they had elevated levels of zinc in their brains.32 In a human experiment, pregnant women in a slum area of Bangladesh were given 30 milligrams of zinc daily, in the expectation that this would benefit the mental development and motor skills of their babies. The researchers found just the opposite.33 In a companion experiment, a group of Bangladeshi infants were given 5 milligrams of zinc daily for five months, with the same surprising result: the supplemented infants scored more poorly on standard tests of mental development.34 And a growing body of literature shows that zinc supplements worsen Alzheimer’s disease,35 and that chelation therapy to reduce zinc improves cognitive functioning in Alzheimer’s patients.36 An Australian team who examined autopsy specimens found that Alzheimer’s patients had twice as much zinc in their brains as people without Alzheimer’s, and that the more severe the dementia, the higher the zinc levels.37 

Nutritionists have long been misled by using blood tests to judge the body’s stores of zinc; scientists are finding out that blood levels are not reliable, and that unless you are severely malnourished there is no relation between the amount of zinc in your diet and the level of zinc in your blood.38 In some neurological diseases, including Alzheimer’s disease, it is common to have high levels of zinc in the brain while having normal or low levels of zinc in the blood.39 In a number of diseases including diabetes and cancer, urinary zinc is high while blood zinc is low.40 It appears that the kidneys respond to the body’s total load of zinc, and not to the levels in the blood, so that blood levels can become low, not because of a zinc deficiency but because the body is overloaded with zinc and the kidneys are removing it from the blood as fast as they can. It also appears to be much more difficult than we used to think for people to become deficient by eating a zinc-poor diet; the body is amazingly capable of compensating for even extremely low levels of dietary zinc by increasing intestinal absorption and decreasing excretion through urine, stool, and skin.41 While the recommended dietary allowance for adult males is 11 milligrams per day, a man can take in as little as 1.4 milligrams of zinc a day and still maintain homeostasis and normal levels of zinc in the blood and tissues.42 But a person who increases his or her daily intake beyond 20 milligrams may risk toxic effects in the long term. 

Canaries in the Mine 

In our cells, the manufacture of heme from porphyrins can be inhibited by a large variety of toxic chemicals, and not—so far as we know—by electricity. But we will see in the coming chapters that electromagnetic fields interfere with the most important job that this heme is supposed to do for us: enabling the combustion of our food by oxygen so that we can live and breathe. Like rain on a campfire, electromagnetic fields douse the flames of metabolism. They reduce the activity of the cytochromes, and there is evidence that they do so in the simplest of all possible ways: by exerting a force that alters the speed of the electrons being transported along the chain of cytochromes to oxygen. 

Every person on the planet is affected by this invisible rain that penetrates into the fabric of our cells. Everyone has a slower metabolism, is less alive, than if those fields were not there. We will see how this slow asphyxiation causes the major diseases of civilization: cancer, diabetes, and heart disease. There is no escape. Regardless of diet, exercise, lifestyle, and genetics, the risk of developing these diseases is greater for every human being and every animal than it was a century and a half ago. People with a genetic predisposition simply have a greater risk than everyone else, because they have a bit less heme in their mitochondria to start with. 

In France, liver cancer was found to be 36 times as frequent in people carrying a gene for porphyria as in the general population.43 In Sweden and Denmark the rate was 39 times as high, and the lung cancer rate triple the general rate.44 Chest pain, heart failure, high blood pressure, and EKGs suggestive of oxygen starvation are well known in porphyria.45 Porphyria patients with normal coronary arteries often die of heart arrhythmias 46 or heart attacks.47 Glucose tolerance tests and insulin levels are usually abnormal.48 In one study, 15 of 36 porphyria patients had diabetes.49 The protean manifestations of this disease, capable of affecting almost any organ, are widely blamed on impaired cellular respiration due to a deficiency of heme.50 Indeed, no porphyrin expert has offered a better explanation. 

The five to ten percent of the population who have lower porphyrin enzyme levels are the so-called canaries in the coal mine, whose songs of warning, however, have been tragically ignored. They are the people who came down with neurasthenia in the last half of the nineteenth century when telegraph wires swept the world; the victims of sleeping pills in the late 1880s, of barbiturates in the 1920s, and of sulfa drugs in the 1930s; the men, women, and children with multiple chemical sensitivity, poisoned by the soup of chemicals that have rained on us since World War II; the abandoned souls with electrical sensitivity left behind by the computer age, forced into lonely exile by the inescapable radiation of the wireless revolution. 

In Part Two of this book we will see just how extensively the general population of the world has been affected as a result of the failure to heed their warnings.

Part 2

11

Irritable Heart

ON THE FIRST DAY OF AUTUMN, 1998, Florence Griffith Joyner, former Olympic track gold medalist, died in her sleep at the age of thirty-eight when her heart stopped beating. That same fall, Canadian ice hockey player Stéphane Morin, age twenty-nine, died of sudden heart failure during a hockey game in Germany, leaving behind a wife and newborn son. Chad Silver, who had played on the Swiss national ice hockey team, also age twenty-nine, died of a heart attack. Former Tampa Bay Buccaneers nose tackle Dave Logan collapsed and died from the same cause. He was forty-two. None of these athletes had any history of heart disease. 

A decade later, responding to mounting alarm in the sports community, the Minneapolis Heart Institute Foundation created a National Registry of Sudden Deaths in Athletes. After combing through public records, news reports, hospital archives, and autopsy records, the Foundation identified 1,049 American athletes in thirty-eight competitive sports who had suffered sudden cardiac arrest between 1980 and 2006. The data confirmed what the sports community already knew. In 1980, heart attacks in young athletes were rare: only nine cases occurred in the United States. The number rose gradually but steadily, increasing about ten percent per year, until 1996, when the number of cases of fatal cardiac arrest among athletes suddenly doubled. There were 64 that year, and 66 the following year. In the last year of the study, 76 competitive athletes died when their hearts gave out, most of them under eighteen years of age.1 

The American medical community was at a loss to explain it. But in Europe, some physicians thought they knew the answer, not only to the question of why so many young athletes’ hearts could no longer stand the strain of exertion, but to the more general question of why so many young people were succumbing to diseases from which only old people used to die. On October 9, 2002, an association of German doctors specializing in environmental medicine began circulating a document calling for a moratorium on antennas and towers used for mobile phone communications. Electromagnetic radiation, they said, was causing a drastic rise in both acute and chronic diseases, prominent among which were “extreme fluctuations in blood pressure,” “heart rhythm disorders,” and “heart attacks and strokes among an increasingly younger population.” 

Three thousand physicians signed this document, named the Freiburger Appeal after the German city in which it was drafted. Their analysis, if correct, could explain the sudden doubling of heart attacks among American athletes in 1996: that was the year digital cell phones first went on sale in the United States, and the year cell phone companies began building tens of thousands of cell towers to make them work. 

Although I knew about the Freiburger Appeal and the profound effects electricity could have on the heart, when I first conceived this book I did not intend to include a chapter on heart disease, for I was still in denial despite the abundant evidence. 

We recall from chapter 8 that Marconi, the father of radio, had ten heart attacks after he began his world-changing work, including the one that killed him at the young age of 63. 

“Anxiety disorder,” which is rampant today, is most often diagnosed from its cardiac symptoms. Many suffering from an acute “anxiety attack” have heart palpitations, shortness of breath, and pain or pressure in the chest, which so often resemble an actual heart attack that hospital emergency rooms are visited by more patients who turn out to have nothing more than “anxiety” than by patients who prove to have something wrong with their hearts. And yet we recall from chapter 6 that “anxiety neurosis” was an invention of Sigmund Freud, a renaming of a disease formerly called neurasthenia, that became prevalent only in the late nineteenth century following the building of the first electrical communication systems. 

Radio wave sickness, described by Russian doctors in the 1950s, includes cardiac disturbances as a prominent feature. 

Not only did I know all this, but I myself have suffered for thirty-five years from palpitations, abnormal heart rhythm, shortness of breath, and chest pain, related to exposure to electricity. 

Yet when my friend and colleague Jolie Andritzakis suggested to me that heart disease itself had appeared in the medical literature for the first time at the beginning of the twentieth century and that I should write a chapter about it, I was taken by surprise. In medical school I had had it so thoroughly drilled into me that cholesterol is the main cause of heart disease that I had never before questioned the wisdom that bad diet and lack of exercise are the most important factors contributing to the modern epidemic. I had no doubt that electromagnetic radiation could cause heart attacks. But I did not yet suspect that it was responsible for heart disease. 

Then another colleague, Dr. Samuel Milham, muddied the waters some more. Milham is an M.D. and an epidemiologist, retired from the Washington State Department of Health. He wrote an article in 2010, followed by a short book, suggesting that the modern epidemics of heart disease, diabetes, and cancer are largely if not entirely caused by electricity. He included solid statistics to back up these assertions. 

I decided to dive in. 

I first became aware of Milham’s work in 1996, when I was asked to help with a national lawsuit against the Federal Communications Commission. I was still living in Brooklyn, and knew only that the telecommunications industry was promising a “wireless revolution.” The industry wanted to place a cell phone in the hands of every American, and in order to make those devices work in the urban canyons of my home town they were applying for permission to erect thousands of microwave antennas close to street level throughout New York. Advertisements for the newfangled phones were beginning to appear on radio and television, telling the public why they needed such things and that they would make ideal Christmas gifts. I did not have any idea how radically the world was about to change. 

Then came a phone call from David Fichtenberg, a statistician in Washington State, who told me the FCC had just released human exposure guidelines for microwave radiation, and asked if I wanted to join a nationwide legal challenge against them. The new guidelines, I came to find out, had been written by the cell phone industry itself and did not protect people from any of the effects of microwave radiation except one: being cooked like a roast in a microwave oven. None of the known effects of such radiation, apart from heat—effects on the heart, nervous system, thyroid gland, and other organs—were taken into consideration. 

Worse, Congress had passed a law that January that actually made it illegal for cities and states to regulate this new technology on the basis of health. President Clinton had signed it on February 8. The industry, the FCC, Congress, and the President were conspiring to tell us that we should all feel comfortable holding devices that emit microwave radiation directly against our brains, and that we should all get used to living in close quarters with microwave towers, because they were coming to a street near you whether you liked it or not. A giant biological experiment had been launched, and we were all going to be unwitting guinea pigs. 

Except that the outcome was already known. The research had been done, and the scientists who had done it were trying to tell us what the new technology was going to do to the brains of cell phone users, and to the hearts and nervous systems of people living in the vicinity of cell towers—which one day soon was going to be everybody. 

Samuel Milham, Jr. was one of those researchers. He had not done any of the clinical or experimental research on individual humans or animals; such work had been done by others in previous decades. Milham is an epidemiologist, a scientist who proves that the results obtained by others in the laboratory actually happen to masses of people living in the real world. In his early studies he had shown that electricians, power line workers, telephone linesmen, aluminum workers, radio and TV repairmen, welders, and amateur radio operators—those whose work exposed them to electricity or electromagnetic radiation—died far more often than the general public from leukemia, lymphoma, and brain tumors. He knew that the new FCC standards were inadequate, and he made himself available as a consultant to those who were challenging them in court. 

Samuel Milham, M.D., M.P.H. 

In recent years, Milham turned his skills to the examination of vital statistics from the 1930s and 1940s, when the Roosevelt administration made it a national priority to electrify every farm and rural community in America. What Milham discovered surprised even him. Not only cancer, he found, but also diabetes and heart disease seemed to be directly related to residential electrification. Rural communities that had no electricity had little heart disease—until electric service began. In fact, in 1940, country folk in electrified regions of the country were suddenly dying of heart disease four to five times as frequently as those who still lived out of electricity’s reach. “It seems unbelievable that mortality differences of this magnitude could go unexplained for over 70 years after they were first reported,” wrote Milham.2 He speculated that early in the twentieth century nobody was looking for answers. 

But when I began reading the early literature I found that everyone was looking for answers. Paul Dudley White, for example, a well-known cardiologist associated with Harvard Medical School, puzzled over the problem in 1938. In the second edition of his textbook, Heart Disease, he wrote in amazement that Austin Flint, a prominent physician practicing internal medicine in New York City during the last half of the nineteenth century, had not encountered a single case of angina pectoris (chest pain due to heart disease) for one period of five years. White was provoked by the tripling of heart disease rates in his home state of Massachusetts since he had begun practicing in 1911. “As a cause of death,” he wrote, “heart disease has assumed greater and greater proportions in this part of the world until now it leads all other causes, having far outstripped tuberculosis, pneumonia, and malignant disease.” In 1970, at the end of his career, White was still unable to say why this was so. All he could do was wonder at the fact that coronary heart disease—disease due to clogged coronary arteries, which is the most common type of heart disease today—had once been so rare that he had seen almost no cases in his first few years of practice. “Of the first 100 papers I published,” he wrote, “only two, at the end of the 100, were concerned with coronary heart disease.”

Heart disease had not, however, sprung full-blown from nothing at the turn of the twentieth century. It had been relatively uncommon but not unheard of. The vital statistics of the United States show that rates of heart disease had begun to rise long before White graduated from medical school. The modern epidemic actually began, quite suddenly, in the 1870s, at the same time as the first great proliferation of telegraph wires. But that is to jump ahead of myself. For the evidence that heart disease is caused primarily by electricity is even more extensive than Milham suspected, and the mechanism by which electricity damages the heart is known. 

To begin with, we need not rely only on historical data for evidence supporting Milham’s proposal, for electrification is still going on in a few parts of the world. 

From 1984 to 1987, scientists at the Sitaram Bhartia Institute of Science and Research decided to compare rates of coronary heart disease in Delhi, India, which were disturbingly high, with rates in rural areas of Gurgaon district in Haryana state 50 to 70 kilometers away. Twenty-seven thousand people were interviewed, and as expected, the researchers found more heart disease in the city than in the country. But they were surprised by the fact that virtually all of the supposed risk factors were actually greater in the rural districts. 

City dwellers smoked much less. They consumed fewer calories, less cholesterol, and much less saturated fat than their rural counterparts. Yet they had five times as much heart disease. “It is clear from the present study,” wrote the researchers, “that the prevalence of coronary heart disease and its urban-rural differences are not related to any particular risk factor, and it is therefore necessary to look for other factors beyond the conventional explanations.”4 The most obvious factor that these researchers did not look at was electricity. For in the mid-1980s the Gurgaon district had not yet been electrified.5 

In order to make sense of these kinds of data it is necessary to review what is known—and what is still not known—about heart disease, electricity, and the relationship between the two. 

My Hungarian grandmother, who was the main cook in my family while I was growing up, had arteriosclerosis (hardening of the arteries). She fed us the same meals she cooked for herself and, on the advice of her doctor, they were low in fat. She happened to be a marvelous cook, so after I left home I continued eating in a similar style because I was hooked on the taste. For the past thirty-eight years I have also been a vegetarian. I feel healthiest eating this way, and I believe that it is good for my heart. 

However, soon after I began to do research for this chapter, a friend gave me a book to read titled The Cholesterol Myths. It was published in 2000 by Danish physician Uffe Ravnskov, a specialist in internal medicine and kidney disease and a retired family practice doctor living in Lund, Sweden. I resisted reading it, because Ravnskov is not unbiased: he thinks vegetarians are pleasure-avoiding stoics who heroically deny themselves the taste of proper food in the mistaken belief that this will make them live longer. 

Ignoring his prejudices, I eventually read Ravnskov’s book and found it well-researched and thoroughly referenced. It demolishes the idea that people are having more heart attacks today because they are stuffing themselves with more animal fat than their ancestors did. On its surface, his thesis is contrary to what I was taught as well as to my own experience. So I obtained copies of many of the studies he quoted, and read them over and over until they finally made sense in light of what I knew about electricity. The most important thing to keep in mind is that the early studies did not have the same outcome as research being done today, and that there is a reason for this difference. Even recent studies from different parts of the world do not always agree with each other, for the same reason. 

Ravnskov, however, has become something of an icon among portions of the alternative health community, including many environmental physicians who are now prescribing high-fat diets—emphasizing animal fats—to their severely ill patients. They are misreading the medical literature. The studies that Ravnskov relied on show unequivocally that some factor other than diet is responsible for the modern scourge of heart disease, but they also show that cutting down on dietary fat in today’s world helps to prevent the damage caused by that other factor. Virtually every large study done since the 1950s in the industrialized world—agreeing with what I was taught in medical school—has shown a direct correlation between cholesterol and heart disease.6 And every study comparing vegetarians to meat eaters has found that vegetarians today have both lower cholesterol levels and a reduced risk of dying from a heart attack.7 

Ravnskov speculated that this is because people who eat no meat are also more health-conscious in other ways. But the same results have been found in people who are vegetarians only for religious reasons. Seventh-day Adventists all abstain from tobacco and alcohol, but only about half abstain from meat. A number of large long-term studies have shown that Adventists who are also vegetarians are two to three times less likely to die from heart disease.8 

Perplexingly, the very early studies—those done in the first half of the twentieth century—did not give these kinds of results and did not show that cholesterol was related to heart disease. To most researchers, this has been an insoluble paradox, contradicting present ideas about diet, and has been a reason for the mainstream medical community to dismiss the early research. 

For example, people with the genetic trait called familial hypercholesterolemia have extremely high levels of cholesterol in their blood—so high that they sometimes have fatty growths on their joints and are prone to gout-like attacks in toes, ankles, and knees caused by cholesterol crystals. In today’s world these people are prone to dying young of coronary heart disease. However, this was not always so. Researchers at Leiden University in the Netherlands traced the ancestors of three present-day individuals with this disorder until they found a pair of common ancestors who lived in the late eighteenth century. Then, by tracing all descendants of this pair and screening all living descendants for the defective gene, they were able to identify 412 individuals who either had definitely carried the gene and passed it on, or who were siblings who had a fifty percent chance of carrying it. 

They found, to their amazement, that before the 1860s people with this trait had a fifty percent lower mortality rate than the general population. In other words, cholesterol seemed to have had protective value and people with very high cholesterol levels lived longer than average. Their mortality rate, however, rose steadily during the late nineteenth century until it equaled the rate of the general population in about 1915. The mortality of this subgroup continued rising during the twentieth century, reaching double the average during the 1950s and then leveling off somewhat.9 One can speculate, based on this study, that before the 1860s cholesterol did not cause coronary heart disease, and there is other evidence that this is so. 

In 1965, Leon Michaels, working at the University of Manitoba, decided to see what historical documents revealed about fat consumption in previous centuries when coronary heart disease was extremely rare. What he found also contradicted current wisdom and convinced him that there must be something wrong with the cholesterol theory. One author in 1696 had calculated that the wealthier half of the English population, or about 2.7 million people, ate an amount of flesh yearly averaging 147.5 pounds per person—more than the national average for meat consumption in England in 1962. Nor did the consumption of animal fats decline at any time before the twentieth century. Another calculation, made in 1901, showed that the servant-keeping class of England consumed, on average, a much larger amount of fat in 1900 than the same class would consume in 1950. Michaels did not think that lack of exercise could explain the modern epidemic of heart disease either, because it was among the idle upper classes, who had never engaged in manual labor, and who were eating much less fat than they used to, that heart disease had increased the most. 

Then there was the incisive work of Jeremiah Morris, Professor of Social Medicine at the University of London, who observed that in the first half of the twentieth century, coronary heart disease had increased while coronary atheroma—cholesterol plaques in the coronary arteries—had actually decreased. Morris examined the autopsy records at London Hospital from the years 1908 through 1949. In 1908, 30.4 percent of all autopsies in men aged thirty to seventy showed advanced atheroma; in 1949, only 16 percent. In women the rate had fallen from 25.9 percent to 7.5 percent. In other words, cholesterol plaques in coronary arteries were far less common than before, but they were contributing to more disease, more angina, and more heart attacks. By 1961, when Morris presented a paper about the subject at Yale University Medical School, studies conducted in Framingham, Massachusetts10 and Albany, New York11 had established a connection between cholesterol and heart disease. Morris was sure that some other, unknown environmental factor was also important. “It is tolerably certain,” he told his audience, “that more than fats in the diet affect blood lipid levels, more than blood lipid levels are involved in atheroma formation, and more than atheroma is needed for ischemic heart disease.” 

That factor, as we will see, is electricity. Electromagnetic fields have become so intense in our environment that we are unable to metabolize fats the way our ancestors could. 

Whatever environmental factor was affecting human beings in America during the 1930s and 1940s was also affecting all the animals in the Philadelphia Zoo. 

The Laboratory of Comparative Pathology was a unique facility founded at the zoo in 1901. And from 1916 to 1964, laboratory director Herbert Fox and his successor, Herbert L. Ratcliffe, kept complete records of autopsies performed on over thirteen thousand animals that had died in the zoo. 

During this period, arteriosclerosis increased an astonishing ten- to twenty-fold among all species of mammals and birds. In 1923, Fox had written that such lesions were “exceedingly rare,” occurring in less than two percent of animals as a minor and incidental finding at autopsy.12 The incidence rose rapidly during the 1930s, and by the 1950s arteriosclerosis was not only occurring in young animals, but was often the cause of their death rather than just a finding on autopsy. By 1964, the disease occurred in one-quarter of all mammals and thirty-five percent of all birds. 

Coronary heart disease appeared even more suddenly. In fact, before 1945 the disease did not exist in the zoo.13 And the first heart attacks ever recorded in zoo animals occurred ten years later, in 1955. Arteriosclerosis had been occurring with some regularity since the 1930s in the aorta and other arteries, but not in the coronary arteries of the heart. But sclerosis of the coronary arteries now increased so rapidly among both mammals and birds that by 1963, over 90 percent of all mammals and 72 percent of all birds that died in the zoo had coronary disease, while 24 percent of the mammals and 10 percent of the birds had had heart attacks. And a majority of the heart attacks were occurring in young animals in the first half of their expected life spans. Arteriosclerosis and heart disease were now occurring in 45 families of mammals and 65 families of birds residing in the zoo—in deer and in antelope; in prairie dogs and squirrels; in lions, and tigers, and bears; and in geese, storks, and eagles. 

Diet had nothing to do with these changes. The increase in arteriosclerosis had begun well before 1935, the year that more nutritious diets were introduced throughout the zoo. And coronary disease did not make its appearance until ten years later, yet the animals’ diets were the same at all times between 1935 and 1964. The population density, for mammals at least, remained about the same during all fifty years, as did the amount of exercise they got. Ratcliffe tried to find the answer in social pressures brought about by breeding programs that were begun in 1940. He thought that psychological stresses must be affecting the animals’ hearts. But he could not explain why, more than two decades later, coronary disease and heart attacks were continuing to increase, spectacularly, throughout the zoo, and among all species, whether or not they were being bred. Nor could he explain why sclerosis of arteries outside the heart had increased during the 1930s, nor why, thousands of miles away, researchers were finding arteriosclerosis in 22 percent of the animals in the London Zoo in 1960,14 and a similar number in the Zoo of Antwerp, Belgium in 1962.15 

The element that increased most spectacularly in the environment during the 1950s when coronary disease was exploding among humans and animals was radio frequency (RF) radiation. Before World War II, radio waves had been widely used for only two purposes: radio communication, and diathermy, which is their therapeutic use in medicine to heat parts of the body. 

Suddenly the demand for RF generating equipment was unquenchable. While the use of the telegraph in the Civil War had stimulated its commercial development, and the use of radio in World War I had done the same for that technology, the use of radar in World War II spawned scores of new industries. RF oscillators were being mass produced for the first time, and hundreds of thousands of people were being exposed to radio waves on the job—radio waves that were now used not only in radar, but in navigation; radio and television broadcasting; radio astronomy; heating, sealing and welding in dozens of industries; and “radar ranges” for the home. Not only industrial workers, but the entire population, were being exposed to unprecedented levels of RF radiation. 

For reasons having more to do with politics than science, history took opposite tracks on opposite sides of the world. In Western Bloc countries, science went deeper into denial. It had buried its head, ostrich-like, in the year 1800, as we saw in chapter 4, and now simply piled on more sand. When radar technicians complained of headaches, fatigue, chest discomfort, and eye pain, and even sterility and hair loss, they were sent for a quick medical exam and some blood work. When nothing dramatic turned up, they were ordered back to work.16 The attitude of Charles I. Barron, medical director of the California division of Lockheed Aircraft Corporation, was typical. Reports of illness from microwave radiation “had all too often found their way into lay publications and newspapers,” he said in 1955. He was addressing representatives of the medical profession, the armed forces, various academic institutions, and the airline industry at a meeting in Washington, DC. “Unfortunately,” he added, “the publication of this information within the past several years coincided with the development of our most powerful airborne radar transmitters, and considerable apprehension and misunderstanding has arisen among engineering and radar test personnel.” He told his audience that he had examined hundreds of Lockheed employees and found no difference between the health of those exposed to radar and those not exposed. However, his study, which was subsequently published in the Journal of Aviation Medicine, was tainted by the same see-no-evil attitude. His “unexposed” control population actually consisted of Lockheed workers who were exposed to radar intensities of less than 3.9 milliwatts per square centimeter—a level that is almost four times the legal limit for exposure of the general public in the United States today. Twenty-eight percent of these “unexposed” employees suffered from neurological or cardiovascular disorders, or from jaundice, migraines, bleeding, anemia, or arthritis. And when Barron took repeated blood samples from his “exposed” population—those who were exposed to more than 3.9 milliwatts per square centimeter—the majority had a significant drop in their red cell count over time, and a significant increase in their white cell count. Barron dismissed these findings as “laboratory errors.”17 

The Eastern Bloc experience was different. Workers’ complaints were considered important. Clinics dedicated entirely to the diagnosis and treatment of workers exposed to microwave radiation were established in Moscow, Leningrad, Kiev, Warsaw, Prague, and other cities. On average, about fifteen percent of workers in these industries became sick enough to seek medical treatment, and two percent became permanently disabled.18

The Soviets and their allies recognized that the symptoms caused by microwave radiation were the same as those first described in 1869 by American physician George Beard. Therefore, using Beard’s terminology, they called the symptoms “neurasthenia,” while the disease that caused them was named “microwave sickness” or “radio wave sickness.” 

Intensive research began at the Institute of Labor Hygiene and Occupational Diseases in Moscow in 1953. By the 1970s, the fruits of such investigations had produced thousands of publications.19 Medical textbooks on radio wave sickness were written, and the subject entered the curriculum of Russian and Eastern European medical schools. Today, Russian textbooks describe effects on the heart, nervous system, thyroid, adrenals, and other organs.20 Symptoms of radio wave exposure include headache, fatigue, weakness, dizziness, nausea, sleep disturbances, irritability, memory loss, emotional instability, depression, anxiety, sexual dysfunction, impaired appetite, abdominal pain, and digestive disturbances. Patients have visible tremors, cold hands and feet, flushed face, hyperactive reflexes, abundant perspiration, and brittle fingernails. Blood tests reveal disturbed carbohydrate metabolism and elevated triglycerides and cholesterol. 

Cardiac symptoms are prominent. They include heart palpitations, heaviness and stabbing pains in the chest, and shortness of breath after exertion. The blood pressure and pulse rate become unstable. Acute exposure usually causes rapid heartbeat and high blood pressure, while chronic exposure causes the opposite: low blood pressure and a heartbeat that can be as slow as 35 to 40 beats per minute. The first heart sound is dulled, the heart is enlarged on the left side, and a murmur is heard over the apex of the heart, often accompanied by premature beats and an irregular rhythm. The electrocardiogram may reveal a blockage of electrical conduction within the heart, and a condition known as left axis deviation. Signs of oxygen deprivation to the heart muscle—a flattened or inverted T wave, and depression of the ST interval—are extremely frequent. Congestive heart failure is sometimes the ultimate outcome. In one medical textbook published in 1971, the author, Nikolay Tyagin, stated that in his experience only about fifteen percent of workers exposed to radio waves had normal EKGs.21 

Although this knowledge has been completely ignored by the American Medical Association and is not taught in any American medical school, it has not gone unnoticed by some American researchers.

Trained as a biologist, Allan H. Frey became interested in microwave research in 1960 by following his curiosity. Employed at the General Electric Company’s Advanced Electronics Center at Cornell University, he was already exploring how electrostatic fields affect an animal’s nervous system, and he was experimenting with the biological effects of air ions. Late that year, while attending a conference, he met a technician from GE’s radar test facility at Syracuse, who told Frey that he could hear radar. “He was rather surprised,” Frey later recalled, “when I asked if he would take me to a site and let me hear the radar. It seemed that I was the first person he had told about hearing radars who did not dismiss his statement out of hand.”22 The man took Frey to his work site near the radar dome at Syracuse. “And when I walked around there and climbed up to stand at the edge of the pulsating beam, I could hear it, too,” Frey remembers. “I could hear the radar going zip-zip-zip.”23 

This chance meeting determined the future course of Frey’s career. He left his job at General Electric and began doing full-time research into the biological effects of microwave radiation. In 1961, he published his first paper on “microwave hearing,” a phenomenon that is now fully recognized although still not fully explained. He spent the next two decades experimenting on animals to determine the effects of microwaves on behavior, and to clarify their effects on the auditory system, the eyes, the brain, the nervous system, and the heart. He discovered the blood-brain barrier effect, alarming damage to the protective shield that keeps bacteria, viruses, and toxic chemicals out of the brain—damage that occurs at levels of radiation that are much lower than what is emitted by cell phones today. He proved that nerves, when firing, emit pulses of radiation themselves, in the infrared spectrum. All of Frey’s pioneering work was funded by the Office of Naval Research and the United States Army. 

When scientists in the Soviet Union began reporting that they could modify the rhythm of the heart at will with microwave radiation, Frey took a special interest. N. A. Levitina, in Moscow, had found that she could either speed up an animal’s heart rate or slow it down, depending on which part of the animal’s body she irradiated. Irradiating the back of an animal’s head quickened its heart rate, while irradiating the back of its body, or its stomach, slowed it down.24 

Frey, in his laboratory in Pennsylvania, decided to take this research one step farther. Based on the Russian results and his knowledge of physiology, he predicted that if he used brief pulses of microwave radiation, synchronized with the heartbeat and timed to coincide precisely with the beginning of each beat, he would cause the heart to speed up, and might disrupt its rhythm. 

It worked like magic. He first tried the experiment on the isolated hearts of 22 different frogs. The heart rate increased every time. In half the hearts, arrhythmias occurred, and in some of the experiments the heart stopped. The pulse of radiation was most damaging when it occurred exactly one-fifth of a second after the beginning of each beat. The average power density was only six-tenths of a microwatt per square centimeter—roughly ten thousand times weaker than the radiation that a person’s heart would absorb today if he or she kept a cell phone in a shirt pocket while making a call. 
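(A quick check of the arithmetic implied by that comparison, sketched in Python. The phone-exposure figure below is only what follows from the two numbers in the text—0.6 microwatts per square centimeter and a factor of roughly ten thousand—and is not an independently verified measurement.)

    # Frey's average pulse power density, and the roughly 10,000-fold
    # comparison stated in the text.
    frey_uW_per_cm2 = 0.6
    implied_phone_exposure_uW_per_cm2 = frey_uW_per_cm2 * 10_000
    # 6,000 microwatts per square centimeter, i.e. about 6 mW/cm^2 implied
    print(implied_phone_exposure_uW_per_cm2 / 1_000, "mW per square centimeter")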

Frey conducted the experiments with isolated hearts in 1967. Two years later, he tried the same thing on 24 live frogs, with similar though less dramatic results. No arrhythmias or cardiac arrests occurred, but when the pulses of radiation coincided with the beginning of each beat, the heart speeded up significantly.25 

The effects Frey demonstrated occur because the heart is an electrical organ and microwave pulses interfere with the heart’s pacemaker. But in addition to these direct effects, there is a more basic problem: microwave radiation, and electricity in general, starves the heart of oxygen because of effects at the cellular level. These cellular effects were discovered, oddly enough, by a team that included Paul Dudley White. In the 1940s and 1950s, while the Soviets were beginning to describe how radio waves cause neurasthenia in workers, the United States military was investigating the same disease in military recruits. 

The job that was assigned to Dr. Mandel Cohen and his associates in 1941 was to determine why so many soldiers fighting in the Second World War were reporting sick because of heart symptoms. Although their research spawned a number of shorter articles in medical journals, the main body of their work was a 150-page report that has been long forgotten. It was written for the Committee of Medical Research of the Office of Scientific Research and Development—the office that was created by President Roosevelt to coordinate scientific and medical research related to the war effort. The only copy I located in the United States was on a single deteriorating roll of microfilm buried in the Pennsylvania storage facility of the National Library of Medicine.26 

Unlike their predecessors since the time of Sigmund Freud, this medical team not only took these anxiety-like complaints seriously, but looked for and found physical abnormalities in the majority of these patients. They preferred to call the illness “neurocirculatory asthenia,” rather than “neurasthenia,” “irritable heart,” “effort syndrome,” or “anxiety neurosis,” as it had variously been known since the 1860s. But the symptoms confronting them were the same as those first described by George Miller Beard in 1869 (see chapter 5). Although the focus of this team was the heart, the 144 soldiers enrolled in their study also had respiratory, neurological, muscular, and digestive symptoms. Their average patient, in addition to having heart palpitations, chest pains, and shortness of breath, was nervous, irritable, shaky, weak, depressed, and exhausted. He could not concentrate, was losing weight, and was troubled by insomnia. He complained of headaches, dizziness, and nausea, and sometimes suffered from diarrhea or vomiting. Yet standard laboratory tests—blood work, urinalysis, X-rays, electrocardiogram, and electroencephalogram—were usually “within normal limits.” 

Cohen, who directed the research, brought to it an open mind. Raised in Alabama and educated at Yale, he was then a young professor at Harvard Medical School who was already challenging delivered wisdom and lighting one of the earliest sparks of what would eventually be a revolution in psychiatry. For he had the courage to call Freudian psychoanalysis a cult back in the 1940s when its practitioners were asserting control in every academic institution, capturing the imagination of Hollywood, and touching every aspect of American culture.27 

Paul White, one of the two chief investigators—the other was neurologist Stanley Cobb—was already familiar with neurocirculatory asthenia from his civilian cardiology practice, and thought, contrary to Freud, that it was a genuine physical disease. Under the leadership of these three individuals, the team confirmed that this was indeed the case. Using the techniques that were available in the 1940s, they accomplished what no one in the nineteenth century, when the epidemic began, had been able to do: they demonstrated conclusively that neurasthenia had a physical and not a psychological cause. And they gave the medical community a list of objective signs by which the illness could be diagnosed. 

Most patients had a rapid resting heart rate (over 90 beats per minute) and a rapid respiratory rate (over 20 breaths per minute), as well as a tremor of the fingers and hyperactive knee and ankle reflexes. Most had cold hands, and half the patients had a visibly flushed face and neck. 

It has long been known that people with disorders of circulation have abnormal capillaries that can be most easily seen in the nail fold—the fold of skin at the base of the fingernails. White’s team routinely found such abnormal capillaries in their patients with neurocirculatory asthenia. 

They found that these patients were hypersensitive to heat, to pain, and, significantly, to electricity—they reflexively pulled their hands away from electric shocks of much lower intensity than did normal healthy individuals. 

When asked to run on an inclined treadmill for three minutes, the majority of these patients could not do it. On average, they lasted only a minute and a half. Their heart rate after such exercise was excessively fast, their oxygen consumption during the exercise was abnormally low and, most significantly, their ventilatory efficiency was abnormally low. This means that they used less oxygen, and exhaled less carbon dioxide, than a normal person even when they breathed the same amount of air. To compensate, they breathed more air more rapidly than a healthy person and were still not able to continue running because their bodies were still not using enough oxygen. 

A fifteen-minute walk on the same treadmill gave similar results. All subjects were able to complete this easier task. However, on average, the patients with neurocirculatory asthenia breathed fifteen percent more air per minute than healthy volunteers in order to consume the same amount of oxygen. And although, by breathing faster, the patients with neurocirculatory asthenia managed to consume the same amount of oxygen as the healthy volunteers, they had twice as much lactic acid in their blood, indicating that their cells were not using that oxygen efficiently.

Compared to healthy individuals, people with this disorder were able to extract less oxygen from the same amount of air, and their cells were able to extract less energy from the same amount of oxygen. The researchers concluded that these patients suffered from a defect of aerobic metabolism. In other words, something was wrong with their mitochondria—the powerhouses of their cells. The patients correctly complained that they could not get enough air. This was starving all of their organs of oxygen and causing both their heart symptoms and their other disabling complaints. Patients with neurocirculatory asthenia were consequently unable to hold their breath for anything like a normal period of time, even when breathing oxygen.28 

During the five years of Cohen’s team’s study, several types of treatment were attempted with different groups of patients: oral testosterone; massive doses of vitamin B complex; thiamine; cytochrome c; psychotherapy; and a course of physical training under a professional trainer. None of these programs produced any improvement in symptoms or endurance. 

“We conclude,” wrote the team in June 1946, “that neurocirculatory asthenia is a condition that actually exists and has not been invented by patients or medical observers. It is not malingering or simply a mechanism aroused during war time for purposes of evading military service. The disorder is quite common both as a civilian and as a service problem.”29 They objected to Freud’s term “anxiety neurosis” because anxiety was obviously a result, and not a cause, of the profound physical effects of not being able to get enough air. 

In fact, these researchers virtually disproved the theory that the disease was caused by “stress” or “anxiety.” It was not caused by hyperventilation.30 Their patients did not have elevated levels of stress hormones—17-ketosteroids—in their urine. A twenty-year follow-up study of civilians with neurocirculatory asthenia revealed that these people typically did not develop any of the diseases that are supposed to be caused by anxiety, such as high blood pressure, peptic ulcer, asthma, or ulcerative colitis.31 However, they did have abnormal electrocardiograms that indicated that the heart muscle was being starved of oxygen, and that were sometimes indistinguishable from the EKGs of people who had actual coronary artery disease or actual structural damage to the heart.32 

The connection to electricity was provided by the Soviets. Soviet researchers, during the 1950s, 1960s, and 1970s, described physical signs and symptoms and EKG changes, caused by radio waves, that were identical to those that White and others had first reported in the 1930s and 1940s. The EKG changes indicated both conduction blocks and oxygen deprivation to the heart.33 The Soviet scientists—in agreement with Cohen and White’s team—concluded that these patients were suffering from a defect of aerobic metabolism. Something was wrong with the mitochondria in their cells. And they discovered what that defect was. Scientists that included Yury Dumanskiy, Mikhail Shandala, and Lyudmila Tomashevskaya, working in Kiev, and F. A. Kolodub, N. P. Zalyubovskaya and R. I. Kiselev, working in Kharkov, proved that the activity of the electron transport chain—the mitochondrial enzymes that extract energy from our food—is diminished not only in animals that are exposed to radio waves,34 but in animals exposed to magnetic fields from ordinary electric power lines.35 

The first war in which the electric telegraph was widely used—the American Civil War—was also the first in which “irritable heart” was a prominent disease. A young physician named Jacob M. Da Costa, visiting physician at a military hospital in Philadelphia, described the typical patient. 

“A man who had been for some months or longer in active service,” he wrote, “would be seized with diarrhoea, annoying, yet not severe enough to keep him out of the field; or, attacked with diarrhoea or fever, he rejoined, after a short stay in hospital, his command, and again underwent the exertions of a soldier’s life. He soon noticed that he could not bear them as formerly; he got out of breath, could not keep up with his comrades, was annoyed with dizziness and palpitation, and with pain in his chest; his accoutrements oppressed him, and all this though he appeared well and healthy. Seeking advice from the surgeon of the regiment, it was decided that he was unfit for duty, and he was sent to a hospital, where his persistently quick acting heart confirmed his story, though he looked like a man in sound condition.”36 

Exposure to electricity in this war was universal. When the Civil War broke out in 1861, the east and west coasts had not yet been linked, and most of the country west of the Mississippi was not yet served by any telegraph lines. But in this war, every soldier, at least on the Union side, marched and camped near such lines. From the attack on Fort Sumter on April 12, 1861, until General Lee’s surrender at Appomattox, the United States Military Telegraph Corps rolled out 15,389 miles of telegraph lines on the heels of the marching troops, so that military commanders in Washington could communicate instantly with all of the troops at their encampments. After the war all of these temporary lines were dismantled and disposed of.37 

“Hardly a day intervened when General Grant did not know the exact state of facts with me, more than 1,500 miles off as the wires ran,” wrote General Sherman in 1864. “On the field a thin insulated wire may be run on improvised stakes, or from tree to tree, for six or more miles in a couple of hours, and I have seen operators so skillful that by cutting the wire they would receive a message from a distant station with their tongues.”38 

Because the distinctive symptoms of irritable heart were encountered in every army of the United States, and attracted the attention of so many of its medical officers, Da Costa was puzzled that no one had described such a disease in any previous war. But telegraphic communications had never before been used to such an extent in war. In the British Blue Book of the Crimean War, a conflict that lasted from 1853 to 1856, Da Costa found two references to some troops being admitted to hospitals for “palpitations,” and he found possible hints of the same problem reported from India during the Indian Rebellion of 1857-58. These were also the only two conflicts prior to the American Civil War in which some telegraph lines were erected to connect command headquarters with troop units.39 Da Costa wrote that he searched through medical documents from many previous conflicts and did not find even a hint of such a disease prior to the Crimean War. 

During the next several decades, irritable heart attracted relatively little interest. It was reported among British troops in India and South Africa, and occasionally among soldiers of other nations.40 But the number of cases was small. Even during the Civil War, what Da Costa considered “common” did not amount to many cases by today’s standards. In his day, when heart disease was practically non-existent, the appearance of 1,200 cases of chest pain among two million young soldiers41 caught his attention like an unfamiliar reef, suddenly materialized in a well-traveled shipping lane across an otherwise calm sea—a sea that was not further disturbed until 1914. 

But shortly after the First World War broke out, in a time when heart disease was still rare in the general population and cardiology did not even exist as a separate medical specialty, soldiers began reporting sick with chest pain and shortness of breath, not by the hundreds, but by the tens of thousands. Out of the six and a half million young men who fought in the British Army and Navy, over one hundred thousand were discharged and pensioned with a diagnosis of “heart disease.”42 Most of these men had irritable heart, also called “Da Costa’s syndrome,” or “effort syndrome.” In the United States Army such cases were all listed under “Valvular Disorders of the Heart,” and were the third most common medical cause for discharge from the Army.43 The same disease also occurred in the Air Force, but was almost always diagnosed as “flying sickness,” thought to be caused by repeated exposure to reduced oxygen pressure at high altitudes.44 

Similar reports came from Germany, Austria, Italy, and France.45 

So enormous was the problem that the United States Surgeon-General ordered four million soldiers training in the Army camps to be given cardiac examinations before being sent overseas. Effort syndrome was “far away the commonest disorder encountered and transcended in interest and importance all the other heart affections combined,” said one of the examining physicians, Lewis A. Conner.46 

Some soldiers in this war developed effort syndrome after shell shock, or exposure to poison gas. Many more had no such history. All, however, had gone into battle using a newfangled form of communication. 

The United Kingdom declared war on Germany on August 4, 1914, two days after Germany invaded its ally, France. The British army began embarking for France on August 9, and continued on to Belgium, reaching the city of Mons on August 22, without the aid of the wireless telegraph. While in Mons, a 1500-watt mobile radio set, having a range of 60 to 80 miles, was supplied to the British army signal troops.47 It was during the retreat from Mons that many British soldiers first became ill with chest pain, shortness of breath, palpitations, and rapid heartbeat, and were sent back to England to be evaluated for possible heart disease.48 

Exposure to radio was universal and intense. A knapsack radio with a range of five miles was used by the British army in all trench warfare on the front lines. Every battalion carried two such sets, each having two operators, in the front line with the infantry. One or two hundred yards behind, back with the reserve, were two more sets and two more operators. A mile further behind at Brigade Headquarters was a larger radio set, two miles back at Divisional Headquarters was a 500-watt set, and six miles behind the front lines at Army Headquarters was a 1500-watt radio wagon with a 120-foot steel mast and an umbrella-type aerial. Each operator relayed the telegraph messages received from in front of or behind him.49

All cavalry divisions and brigades were assigned radio wagons and knapsack sets. Cavalry scouts carried special sets right on their horses, called “whisker wireless” because of the antennae that sprouted from the horses’ flanks like the quills of a porcupine.50 

Most aircraft carried lightweight radio sets, using the metal frame of the airplane as the antenna. German war Zeppelins and French dirigibles carried much more powerful sets, and Japan had wireless sets in its war balloons. Radio sets on ships made it possible for naval battle lines to be spread out in formations 200 or 300 miles long. Even submarines, while cruising below the surface, sent up a short mast, or an insulated jet of water, as an antenna for the coded radio messages they broadcast and received.51 

In the Second World War irritable heart, now called neurocirculatory asthenia, returned with a vengeance. Radar joined radio for the first time in this war, and it too was universal and intense. Like children with a new toy, every nation devised as many uses for it as possible. Britain, for example, peppered its coastline with hundreds of early warning radars emitting more than half a million watts each, and outfitted all its airplanes with powerful radars that could detect objects as small as a submarine periscope. More than two thousand portable radars, accompanied by 105-foot tall portable towers, were deployed by the British army. Two thousand more “gun-laying” radars assisted anti-aircraft guns in tracking and shooting down enemy aircraft. The ships of the Royal Navy sported surface radars with a power of up to one million watts, as well as air search radars, and microwave radars that detected submarines and were used for navigation. 

The Americans deployed five hundred early-warning radars on board ships, and additional early warning radars on aircraft, each having a power of one million watts. They used portable radar sets at beachheads and airfields in the South Pacific, and thousands of microwave radars on ships, aircraft, and Navy blimps. From 1941 to 1945 the Radiation Laboratory at the Massachusetts Institute of Technology was kept busy by its military masters developing some one hundred different types of radar for various uses in the war. 

The other powers fielded radar installations with equal vigor on land, at sea, and in the air. Germany deployed over one thousand ground-based early warning radars in Europe, as well as thousands of shipborne, airborne, and gun-laying radars. The Soviet Union did likewise, as did Australia, Canada, New Zealand, South Africa, the Netherlands, France, Italy, and Hungary. Wherever a soldier was asked to fight, he was bathed in an ever-thickening soup of pulsed radio wave and microwave frequencies. And soldiers succumbed in large numbers, in the armies, navies, and air forces of every nation.52 

It was during this war that the first rigorous program of medical research was conducted on soldiers with this disease. By this time Freud’s proposed term “anxiety neurosis” had taken firm hold among army doctors. Members of the Air Force who had heart symptoms were now receiving a diagnosis of “L.M.F.,” standing for “lack of moral fiber.” Cohen’s team was stacked with psychiatrists. But to their surprise, and guided by cardiologist Paul White, they found objective evidence of a real disease that they concluded was not caused by anxiety. 

Largely because of the prestige of this team, research into neurocirculatory asthenia continued in the United States throughout the 1950s; in Sweden, Finland, Portugal, and France into the 1970s and 1980s; and even, in Israel and Italy, into the 1990s.53 But a growing stigma was attached to any doctor who still believed in the physical causation of this disease. Although the dominance of the Freudians had waned, they left an indelible mark not only on psychiatry but on all of medicine. Today, in the West, only the “anxiety” label remains, and people with the symptoms of neurocirculatory asthenia are automatically given a psychiatric diagnosis and, very likely, a paper bag to breathe into. Ironically, Freud himself, although he coined the term “anxiety neurosis,” thought that its symptoms were not mentally caused, “nor amenable to psychotherapy.”54 

Meanwhile, an unending stream of patients continued to appear in doctors’ offices suffering from unexplained exhaustion, often accompanied by chest pain and shortness of breath, and a few courageous doctors stubbornly continued to insist that psychiatric problems could not explain them all. In 1988, the term “chronic fatigue syndrome” (CFS) was coined by Gary Holmes at the Centers for Disease Control, and it continues to be applied by some doctors to patients whose most prominent symptom is exhaustion. Those doctors are still very much in the minority. Based on their reports, the CDC estimates that the prevalence of CFS is between 0.2 percent and 2.5 percent of the population,55 while their counterparts in the psychiatric community tell us that as many as one person in six, suffering from the identical symptoms, fits the criteria for “anxiety disorder” or “depression.” 

To confuse the matter still further, the same set of symptoms was called myalgic encephalomyelitis (ME) in England as early as 1956, a name that focused attention on muscle pains and neurological symptoms rather than fatigue. Finally, in 2011, doctors from thirteen countries got together and adopted a set of “International Consensus Criteria” that recommends abandoning the name “chronic fatigue syndrome” and applying “myalgic encephalomyelitis” to all patients who suffer from “post-exertional exhaustion” plus specific neurological, cardiovascular, respiratory, immune, gastrointestinal, and other impairments.56 

This international “consensus” effort, however, is doomed to failure. It completely ignores the psychiatric community, which sees far more of these patients. And it pretends that the schism that emerged from World War II never occurred. In the former Soviet Union, Eastern Europe, and most of Asia, the older term “neurasthenia” persists today. That term is still widely applied to the full spectrum of symptoms described by George Beard in 1869. In those parts of the world it is generally recognized that exposure to toxic agents, both chemical and electromagnetic, often causes this disease. 

According to published literature, all of these diseases—neurocirculatory asthenia, radio wave sickness, anxiety disorder, chronic fatigue syndrome, and myalgic encephalomyelitis—predispose to elevated levels of blood cholesterol, and all carry an increased risk of death from heart disease.57 So do porphyria58 and oxygen deprivation.59 The fundamental defect in this disease of many names is that although enough oxygen and nutrients reach the cells, the mitochondria—the powerhouses of the cells—cannot efficiently use that oxygen and those nutrients, and not enough energy is produced to satisfy the requirements of heart, brain, muscles, and organs. This effectively starves the entire body, including the heart, of oxygen, and can eventually damage the heart. In addition, neither sugars nor fats are efficiently utilized by the cells, causing unutilized sugar to build up in the blood—leading to diabetes—as well as unutilized fats to be deposited in arteries. 

And we have a good idea of precisely where the defect is located. People with this disease have reduced activity of a porphyrin-containing enzyme called cytochrome oxidase, which resides within the mitochondria, and delivers electrons from the food we eat to the oxygen we breathe. Its activity is impaired in all the incarnations of this disease. Mitochondrial dysfunction has been reported in chronic fatigue syndrome 60 and in anxiety disorder.61 Muscle biopsies in these patients show reduced cytochrome oxidase activity. Impaired glucose metabolism is well known in radio wave sickness, as is an impairment of cytochrome oxidase activity in animals exposed to even extremely low levels of radio waves.62 And the neurological and cardiac symptoms of porphyria are widely blamed on a deficiency of cytochrome oxidase and cytochrome c, the heme-containing enzymes of respiration.63 

Recently zoologist Neelima Kumar at Panjab University in India proved elegantly that cellular respiration can be brought to a standstill in honey bees merely by exposing them to a cell phone for ten minutes. The concentration of total carbohydrates in their hemolymph, which is what bees’ blood is called, rose from 1.29 to 1.5 milligrams per milliliter. After twenty minutes it rose to 1.73 milligrams per milliliter. The glucose content rose from 0.218 to 0.231 to 0.277 milligrams per milliliter. Total lipids rose from 2.06 to 3.03 to 4.50 milligrams per milliliter. Cholesterol rose from 0.230 to 1.381 to 2.565 milligrams per milliliter. Total protein rose from 0.475 to 0.525 to 0.825 milligrams per milliliter. In other words, after just ten minutes of exposure to a cell phone, the bees practically could not metabolize sugars, proteins, or fats. Mitochondria are essentially the same in bees and in humans, but since their metabolism is so much faster, electric fields affect bees much more quickly. 

In the twentieth century, particularly after World War II, a barrage of toxic chemicals and electromagnetic fields (EMFs) began to significantly interfere with the breathing of our cells. We know from work at Columbia University that even tiny electric fields alter the speed of electron transport from cytochrome oxidase. Researchers Martin Blank and Reba Goodman thought that the explanation lay in the most basic of physical principles. “EMF,” they wrote in 2009, “acts as a force that competes with the chemical forces in a reaction.” Scientists at the Environmental Protection Agency—John Allis and William Joines—finding a similar effect from radio waves, developed a variant theory along the same lines. They speculated that the iron atoms in the porphyrin-containing enzymes were set into motion by the oscillating electric fields, interfering with their ability to transport electrons.64

It was the English physiologist John Scott Haldane who first suggested, in his classic book, Respiration, that “soldier’s heart” was caused not by anxiety but by a chronic lack of oxygen.65 Mandel Cohen later proved that the defect was not in the lungs, but in the cells. These patients continually gulped air not because they were neurotic, but because they really could not get enough of it. You might as well have put them in an atmosphere that contained only 15 percent oxygen instead of 21 percent, or transported them to an altitude of 15,000 feet. Their chests hurt, and their hearts beat fast, not because of panic, but because they craved air. And their hearts craved oxygen, not because their coronary arteries were blocked, but because their cells could not fully utilize the air they were breathing. 

These patients were not psychiatric cases; they were warnings for the world. For the same thing was also happening to the civilian population: they too were being slowly asphyxiated, and the pandemic of heart disease that was well underway in the 1950s was one result. Even in people who did not have a porphyrin enzyme deficiency, the mitochondria in their cells were still struggling, to a lesser degree, to metabolize carbohydrates, fats, and proteins. Unburned fats, together with the cholesterol that transported those fats in the blood, were being deposited on the walls of arteries. Humans and animals were not able to push their hearts quite as far as before without showing signs of stress and disease. This takes its clearest toll on the body when it is pushed to its limits, for example in athletes and in soldiers during war. 

The real story is told by the astonishing statistics. 

When I began my research, I had only Samuel Milham’s data. Since he found such a large difference in rural disease rates in 1940 between the five least and five most electrified states, I wanted to see what would happen if I calculated the rates for all forty-eight states and plotted the numbers on a graph. I looked up rural mortality rates in volumes of the Vital Statistics of the United States. I calculated the percent of electrification for each state by dividing the number of its residential electric customers, as published by the Edison Electric Institute, by the total number of its households, as published by the United States Census. 

The results, for 1931 and 1940, are pictured in figures 1 and 2. Not only is there a five- to sixfold difference in mortality from rural heart disease between the most and least electrified states, but all of the data points come very close to lying on the same line. The more a state was electrified—i.e., the more rural households had electricity—the more rural heart disease it had. The amount of rural heart disease was proportional to the number of households that had electricity.66 
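The calculation just described reduces to two steps: divide a state's residential electric customers by its households to get its percent electrification, then see how closely rural heart-disease mortality tracks that figure. The sketch below is illustrative only; the state numbers in it are placeholders, not the figures from the Edison Electric Institute or the Census, and the least-squares fit simply stands in for the straight line visible in figures 1 and 2.

# Hypothetical illustration of the method: percent electrification per state
# and a simple least-squares line of rural heart-disease mortality against it.
states = {
    # state: (residential electric customers, total households,
    #         rural heart-disease deaths per 100,000)  -- placeholder values only
    "State A": (120_000, 400_000, 110),
    "State B": (300_000, 500_000, 180),
    "State C": (540_000, 600_000, 250),
}

points = []
for name, (customers, households, mortality) in states.items():
    pct_electrified = 100 * customers / households
    points.append((pct_electrified, mortality))
    print(f"{name}: {pct_electrified:5.1f}% electrified, mortality {mortality}")

# Slope and intercept of the best-fit line: how closely the points lie on one line.
n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
         / sum((x - mean_x) ** 2 for x, _ in points))
intercept = mean_y - slope * mean_x
print(f"fit: mortality ~ {slope:.2f} * (% electrified) + {intercept:.1f}")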

What is even more remarkable is that the death rates from heart disease in unelectrified rural areas of the United States in 1931, before the Rural Electrification Program got into gear, were still as low as the death rates for the whole United States prior to the beginning of the heart disease epidemic in the nineteenth century. 

In 1850, the first census year in which mortality data were collected, a total of 2,527 deaths from heart disease were recorded in the nation. Heart disease ranked twenty-fifth among causes of death in that year. About as many people died from accidental drowning as from heart disease. Heart disease was something that occurred mainly in young children and in old age, and was predominantly a rural rather than an urban disease because farmers lived longer than city-dwellers. 

In order to realistically compare nineteenth-century statistics with those of today, I had to make some adjustments to the Census figures. The census enumerators for 1850, 1860, and 1870 had only the numbers that the households they visited reported to them from memory: who had died during the previous year and from what causes. These numbers were estimated by the Census Office to be deficient, on average, by about 40 percent. In the census for 1880, the numbers were supplemented by reports from physicians and fell only 19 percent short of the truth, on average. By 1890 eight northeastern states plus the District of Columbia had passed laws requiring the official registration of all deaths, and the statistics for those registration states were considered accurate to within two to three percent. By 1910 the registration area had expanded to 23 states, and by 1930 only Texas did not require registration of deaths. 

Another complicating factor is that heart failure was sometimes not evident except for the edema it caused, and therefore edema, then called “dropsy,”67 was sometimes reported as the only cause of death, although the death was most likely to have been caused by either heart or kidney disease. Yet a further complication is the appearance of “Bright’s disease” for the first time in the tables for 1870. This was the new term for the type of kidney disease that caused edema. Its prevalence in 1870 was reported to be 4.5 cases per 100,000 population. 

With these complexities in mind, I have calculated the approximate rates of death from cardiovascular disease for each decade from 1850 to 2010, adding the figures for “dropsy” when that term was still in use (until 1900), and subtracting 4.5 per 100,000 for the years 1850 and 1860. I added a correction factor of 40 percent for 1850, 1860 and 1870, and 19 percent for 1880. I included reports of deaths from all diseases of the heart, arteries, and blood pressure. Beginning with 1890 I used only the figures for the death registration states, which by 1930 included the entire country except for Texas. The results are as follows: 

Death Rates from Cardiovascular Disease (per 100,000 population) 

1850    77
1860    78
1870    78
1880    102
1890    145
1890 (Indians on reservations)    60
1900    154
1910    183
1920    187
1930    235
1940    291
1950    384
1960    396
1970    394
1980    361
1990    310
2000    280
2010    210
2017    214
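The adjustment procedure behind this table can be written out explicitly. The sketch below is only an illustration of the steps described above (add "dropsy" through 1900, apply the 40 percent and 19 percent undercount corrections, and subtract 4.5 per 100,000 for 1850 and 1860); the dropsy and population figures in the example are placeholders, not the author's worksheets.

# From 1890 onward only the death-registration states are used; that selection
# happens before the counts are passed to this function.
def cardiovascular_rate(year, cardio_deaths, dropsy_deaths, population):
    deaths = cardio_deaths
    if year <= 1900:                 # "dropsy" was still reported as a cause of death
        deaths += dropsy_deaths
    if year in (1850, 1860, 1870):   # household returns roughly 40 percent deficient
        deaths *= 1.40
    elif year == 1880:               # physician-supplemented returns, about 19 percent short
        deaths *= 1.19
    rate = 100_000 * deaths / population
    if year in (1850, 1860):         # allowance for kidney-caused dropsy (Bright's disease)
        rate -= 4.5
    return rate

# Example for 1850: the census counted 2,527 heart-disease deaths that year;
# the dropsy and population figures here are placeholders for illustration.
print(round(cardiovascular_rate(1850, 2_527, 10_000, 23_000_000), 1))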

1910 was the first year in which the mortality in cities surpassed that in the countryside. But the greatest disparities emerged in the countryside. In the northeastern states, which in 1910 had the greatest use of telegraphs, telephones, and now electric lights and power, and the densest networks of wires crisscrossing the land, the rural areas had as much mortality from cardiovascular disease as the cities, or more. The rural mortality rate of Connecticut was then 234, of New York 279, and of Massachusetts 296. By contrast, Colorado's rural rate was still 100, and Washington's 92. Kentucky's rural rate, at 88.5, was only 44 percent of its urban rate, which was 202. 

Heart disease rose steadily with electrification, as we saw in figures 1 and 2, and reached a peak when rural electrification approached 100 percent during the 1950s. Rates of heart disease then leveled off for three decades and began to drop again—or so it seems at first glance. A closer look, however, shows the true picture. These are just the mortality rates. The number of people walking around with heart disease—the prevalence rate—actually continued to rise, and is still rising today. Mortality stopped rising in the 1950s because of the introduction of anticoagulants like heparin, and later aspirin, both to treat heart attacks and to prevent them.68 In the succeeding decades the ever more aggressive use of anticoagulants, drugs to lower blood pressure, cardiac bypass surgery, balloon angioplasty, coronary stents, pacemakers, and even heart transplants, has simply allowed an ever growing number of people with heart disease to stay alive. But people are not having fewer heart attacks. They are having more. 

The Framingham Heart Study showed that at any given age the chance of having a first heart attack was essentially the same during the 1990s as it was during the 1960s.69 This came as something of a surprise. By giving people statin drugs to lower their cholesterol, doctors thought they were going to save people from having clogged arteries, which was supposed to automatically mean healthier hearts. It hasn’t turned out that way. And in another study, scientists involved in the Minnesota Heart Survey discovered in 2001 that although fewer hospital patients were being diagnosed with coronary heart disease, more patients were being diagnosed with heart-related chest pain. In fact, between 1985 and 1995 the rate of unstable angina had increased by 56 percent in men and by 30 percent in women.70 

The number of people with congestive heart failure has also continued steadily to rise. Researchers at the Mayo Clinic searched two decades of their records and discovered that the incidence of heart failure was 8.3 percent higher during the period 1996-2000 than it had been during 1979-1984.71 

The true situation is much worse still. Those numbers reflect only people newly diagnosed with heart failure. The increase in the total number of people walking around with this condition is astonishing, and only a small part of the increase is due to the aging of the population. Doctors from Cook County Hospital, Loyola University Medical School, and the Centers for Disease Control examined patient records from a representative sample of American hospitals and found that the number of patients with a diagnosis of heart failure more than doubled between 1973 and 1986.72 A later, similar study by scientists at the Centers for Disease Control found that this trend had continued. The number of hospitalizations for heart failure tripled between 1979 and 2004, the age-adjusted rate doubled, and the greatest increase occurred in people under 65 years of age.73 A similar study of patients at Henry Ford Hospital in Detroit showed that the annual prevalence of congestive heart failure had almost quadrupled from 1989 to 1999.74 

Young people, as the 3,000 alarmed doctors who signed the Freiburger Appeal affirmed, are having heart attacks at an unprecedented rate. In the United States, the percentage of forty-year-olds who have cardiovascular disease today is as great as the percentage of seventy-year-olds who had it in 1970. Close to one-quarter of Americans aged forty to forty-four today have some form of cardiovascular disease.75 And the stress on even younger hearts is not confined to athletes. In 2005, researchers at the Centers for Disease Control, surveying the health of adolescents and young adults aged 15 to 34, found to their surprise that between 1989 and 1998 rates of sudden cardiac death had risen 11 percent in young men and 30 percent in young women, and that rates of mortality from enlarged heart, heart rhythm disturbances, pulmonary heart disease, and hypertensive heart disease had also increased in this young population.76 

In the twenty-first century this trend has continued. The number of heart attacks in Americans in their twenties rose by 20 percent between 1999 and 2006, and the mortality from all types of heart disease in this age group rose by one-third.77 In 2014, among patients between the ages of 35 and 74 who were hospitalized with heart attacks, one-third were below the age of 54.78 

Developing countries are no better off. They have already followed the developed countries down the primrose path of electrification, and they are following us even faster to the wholesale embrace of wireless technology. The consequences are inevitable. Heart disease was once unimportant in low-income nations. It is now the number one killer of human beings in every region of the world except one. Only in sub-Saharan Africa, in 2017, was heart disease still outranked by diseases of poverty—AIDS and pneumonia—as a cause of mortality. 

In spite of the billions being spent on conquering heart disease, the medical community is still groping in the dark. It will not win this war so long as it fails to recognize that the main factor that has been causing this pandemic for a hundred and fifty years is the electrification of the world.

Next: Part 5, The Transformation of Diabetes
https://exploringrealhistory.blogspot.com/2021/01/part-5-invisible-rainbow-history-of.html

Footnotes

Chapter 10. Porphyrins and the Basis of Life 

1. Randolph 1987, chap. 4. 

2. Leech 1888; Matthes 1888; Hay 1889; Ireland 1889; Marandon de Montyel 1889; Revue des Sciences Médicales 1889; Rexford 1889; Bresslauer 1891; Fehr 1891; Geill 1891; Hammond 1891; Lepine 1893; With 1980. 

3. Morton 2000. 

4. Morton 1995, 1998, 2000, 2001, personal communication. 

5. Morton 1995, p. 6. 

6. Hoffer and Osmond 1963; Huszák et al. 1972; Irvine and Wetterberg 1972; Pfeiffer 1975; McCabe 1983; Durkó et al. 1984; McGinnis et al. 2008a, 2008b; Mikirova 2015. 

7. Moore et al. 1987, pp. 42-43. 

8. Gibney et al. 1972; Petrova and Kuznetsova 1972; Holtmann and Xenakis 1978, 1978; Pierach 1979; Hengstman et al. 2009.

 9. Quoted in Mason et al. 1933. 

10. Athenstaedt 1974; Fukuda 1974. 

11. Adler 1975. 

12. Kim et al. 2001; Zhou 2009; Hagemann et al. 2013. 

13. Aramaki et al. 2005. 

14. Szent-Györgyi 1957, p. 19. 

15. Becker and Selden 1985, p. 30. 

16. Burr 1945b, 1950, 1956. 

17. Ravitz 1953. 

18. Becker 1960; Becker and Marino 1982, p. 37; Becker and Selden 1985, p. 116. 

19. Gilyarovskiy et al. 1958. 

20. Becker 1985, pp. 238-39. 

21. Rose 1970, pp. 172-73, 214-15; Lund 1947 (comprehensive review and bibliography). 

22. Becker and Selden 1985, p. 237. 

23. Becker 1961a; Becker and Marino 1982, pp. 35-36. 

24. Klüver 1944a, 1944b; Harvey and Figge 1958; Peters et al. 1974; Becker and Wolfgram 1978; Chung et al. 1997; Kulvietis et al. 2007; Felitsyn et al. 2008. 

25. Peters 1993. 

26. Felitsyn et al. 2008. 

27. Soldán and Pirko 2012. 

28. Hargittai and Lieberman 1991; Ravera et al. 2009; Morelli et al. 2011; Morelli et al. 2012; Ravera, Bartolucci, et al. 2013; Rivera, Nobbio, et al. 2013; Ravera et al. 2015; Ravera and Panfoli 2015. 

29. Peters 1961. 

30. Peters et al. 1957; Peters et al. 1958; Peters 1961; see also Painter and Morrow 1959; Donald et al. 1965. 

31. Lagerwerff and Specht 1970; Wong 1996; Wong and Mak 1997; Apeagyei et al. 2011; Tamrakar and Shakya 2011; Darus et al. 2012; Elbagermi et al. 2013; Li et al. 2014; Nazzal et al. 2014. 

32. Flinn et al. 2005. 

33. Hamadani et al. 2002. 

34. Hamadani et al. 2001.

35. Buh et al. 1994. 

36. McLachlan et al. 1991; Cuajungco et al. 2000; Regland et al. 2001; Ritchie et al. 2003; Frederickson et al. 2004; Religa et al. 2006; Bush and Tanzi 2008. 

37. Religa et al. 2006. 

38. Hashim et al. 1996. 

39. Cuajungco et al. 2000; Que et al. 2008; Baum et al. 2010; Cristóvão et al. 2016. 

40. Voyatzoglou et al. 1982; Xu et al. 2013. 

41. Milne et al. 1983; Taylor et al. 1991; Johnson et al. 1993; King et al. 2000. 

42. Johnson et al. 1993; King et al. 2000. 

43. Andant et al. 1998. See also Kauppinen and Mustajoki 1988. 

44. Linet et al. 1999. 

45. Halpern and Copsey 1946; Markovitz 1954; Saint et al. 1954; Goldberg 1959; Eilenberg and Scobie 1960; Ridley 1969; Stein and Tschudy 1970; Beattie et al. 1973; Menawat et al. 1979; Leonhardt 1981; Laiwah et al. 1983; Laiwah et al. 1985; Kordač et al. 1989. 

46. Ridley 1975. 

47. I. P. Bakšiš, A. I. Lubosevičute, and P. A. Lopateve, “Acute Intermittent Porphyria and Necrotic Myocardial Changes,” Terapevticheskiĭ arkhiv 8: 145-46 (1984), cited in Kordač et al. 1989. 

48. Sterling et al. 1949; Rook and Champion 1960; Waxman et al. 1967; Stein and Tschudy 1970; Herrick et al. 1990. 

49. Berman and Bielicky 1956. 

50. Labbé 1967; Laiwah et al. 1983; Laiwah et al. 1985; Herrick et al. 1990; Kordač et al. 1989; Moore et al. 1987; Moore 1990. 

Chapter 11. Irritable Heart 

1. Maron et al. 2009. 

2. Milham 2010a, p. 345. 

3. White 1938, pp. 171-72, 586; White 1971; Flint 1866, p. 303. 

4. Chadha et al. 1997. 

5. Milham 2010b. 

6. Dawber et al. 1957; Doyle et al. 1957; Kannel 1974; Hatano and Matsuzaki 1977; Rhoads et al. 1978; Feinleib et al. 1979; Okumiya et al. 1985; Solberg et al. 1985; Stamler et al. 1986; Reed et al. 1989; Tuomilehto and Kuulasmaa 1989; Neaton et al. 1992; Verschuren et al. 1995; Njølstad et al. 1996; Wilson et al. 1998; Stamler et al. 2000; Navas-Nacher et al. 2001; Sharrett et al. 2001; Zhang et al. 2003. 

7. Phillips et al. 1978; Burr and Sweetnam 1982; Frentzel-Beyme et al. 1988; Snowdon 1988; Thorogood et al. 1994; Appleby et al. 1999; Key et al. 1999; Fraser 1999, 2009. 

8. Phillips et al. 1978; Snowdon 1988; Fraser 1999; Key et al. 1999. 

9. Sijbrands et al. 2001. 

10. Dawber et al. 1957. 

11. Doyle et al. 1957. 

12. Fox 1923, p. 71. 

13. Ratcliffe et al. 1960, p. 737. 

14. Rigg et al. 1960. 

15. Vastesaeger and Delcourt 1962. 

16. Daily 1943; Barron et al. 1955; McLaughlin 1962. 

17. Barron et al. 1955; Brodeur 1977, pp. 29-30. 

18. Sadchikova 1960, 1974; Klimková-Deutschová 1974. 

19. See Pervushin 1957; Drogichina 1960; Letavet and Gordon 1960; Orlova 1960; Gordon 1966; Dodge 1970 (review); Healer 1970 (review); Marha 1970; Gembitskiy 1970; Subbota 1970; Marha et al. 1971; Tyagin 1971; Barański and Czerski 1976; Bachurin 1979; Jerabek 1979; Silverman 1979 (review); McRee 1979, 1980 (reviews); Sadchikova et al. 1980; McRee et al. 1988 (review); Afrikanova and Grigoriev 1996. For bibliographies, see Kholodov 1966; Novitskiy et al. 1970; Presman 1970; Petrov 1970a; Glaser 1971-1976, 1977; Moore 1984; Grigoriev and Grigoriev 2013. 

20. Personal communication, Oleg Grigoriev and Yury Grigoriev, Russian National Committee on Non-Ionizing Radiation Protection. Russian textbooks include Izmerov and Denizov 2001; Suvorov and Izmerov 2003; Krutikov et al. 2003; Krutikov et al. 2004; Izmerov 2005, 2011a, 2011b; Izmerov and Kirillova 2008; Kudryashov et al. 2008. 

21. Tyagin 1971, p. 101. 

22. Frey 1988, p. 787.

23. Brodeur 1977, p. 51. 

24. Presman and Levitina 1962a, 1962b; Levitina 1966. 

25. Frey and Seifert 1968; Frey and Eichert 1986. 

26. Cohen, Johnson, Chapman, et al. 1946. 

27. Cohen 2003. 

28. Haldane 1922, p. 56; Jones and Mellersh 1946; Jones and Scarisbrick 1946; Jones 1948. 

29. Cohen, Johnson, Chapman, et al. 1946, p. 121. 

30. See also Jones and Scarisbrick 1943; Jones 1948; Gorman et al. 1988; Holt and Andrews 1989; Hibbert and Pilsbury 1989; Spinhoven et al. 1992; Garssen et al. 1996; Barlow 2002, p. 162. 

31. Cohen and White 1951, p. 355; Wheeler et al. 1950, pp. 887-88. 

32. Craig and White 1934; Graybiel and White 1935; Dry 1938. See also Master 1943; Logue et al. 1944; Wendkos 1944; Friedman 1947, p. 23; Blom 1951; Holmgren et al. 1959; Lary and Goldschlager 1974. 

33. Orlova 1960; Bachurin 1979.

34. Dumanskiy and Shandala 1973; Dumanskiy and Rudichenko 1976; Zalyubovskaya et al. 1977; Zalyubovskaia and Kiselev 1978; Dumanskiy and Tomashevskaya 1978; Shutenko et al. 1981; Dumanskiy and Tomashevskaya 1982; Tomashevskaya and Soleny 1986; Tomashevskaya and Dumanskiy 1989; Tomashevskaya and Dumanskiy 1988. 

35. Chernysheva and Kolodub 1976; Kolodub and Chernysheva 1980. 

36. Da Costa 1871, p. 19. 

37. Plum 1882. 

38. Johnston 1880, pp. 76-77. 

39. Plum 1882, vol. 1, pp. 26-27. 

40. Oglesby 1887; MacLeod 1898. 

41. Smart 1888, p. 834. 

42. Howell 1985, p. 45; International Labour Office 1921, Appendix V, p. 50. 

43. Lewis 1918b, p. 1; Cohn 1919, p. 457. 

44. Munro 1919, p. 895. 

45. Aschenheim 1915; Brasch 1915; Braun 1915; Devoto 1915; Ehret 1915; Merkel 1915; Schott 1915; Treupel 1915; von Dziembowski 1915; von Romberg 1915; Aubertin 1916; Galli 1916; Korach 1916; Lian 1916; Cohn 1919. 

46. Conner 1919, p. 777. 

47. Scriven 1915; Corcoran 1917. 

48. Howell 1985, p. 37. 

49. Corcoran 1917. 

50. Worts 1915. 

51. Scriven 1915; Popular Science Monthly 1918. 

52. Lewis 1940; Master 1943; Stephenson and Cameron 1943; Jones and Mellersh 1946; Jones 1948. 

53. Mäntysaari et al. 1988; Fava et al. 1994; Sonino et al. 1998. 

54. Freud 1895, pp. 97, 107; Cohen and White 1972. 

55. Reyes et al. 2003, Reeves et al. 2007. 

56. Carruthers and van de Sande 2011. 

57. Cholesterol in anxiety disorder: Lazarev et al. 1989; Bajwa et al. 1992; Freedman et al. 1995; Peter et al. 1999. Heart disease in anxiety disorder: Coryell et al. 1982; Coryell et al. 1986; Coryell 1988; Hayward et al. 1989; Weissman et al. 1990; Eaker et al. 1992; Nutzinger 1992; Kawachi et al. 1994; Rozanski et al. 1999; Bowen et al. 2000; Paterniti et al. 2001; Huffman et al. 2002; Grace et al. 2004; Katerndahl 2004; Eaker et al. 2005; Csaba 2006; Rothenbacher et al. 2007; Shibeshi et al. 2007; Vural and Başar 2007; Frasure-Smith et al. 2008; Phillips et al. 2009; Scherrer et al. 2010; Martens et al. 2010; Seldenrijk et al. 2010; Vogelzangs et al. 2010; Olafiranye et al. 2011; Soares-Filho et al. 2014. Cholesterol in chronic fatigue syndrome: van Rensburg et al. 2001; Peckerman et al. 2003; Jason et al. 2006. Heart disease in chronic fatigue syndrome: Lerner et al. 1993; Bates et al. 1995; Miwa and Fujita 2009. Heart disease in myalgic encephalomyelitis: Carruthers and van de Sande 2011. Cholesterol in radio wave sickness: Klimková-Deutschová 1974; Sadchikova 1981. 

58. Heart disease in porphyria: Saint et al. 1954; Goldberg 1959; Eilenberg and Scobie 1960; Ridley 1969, 1975; Stein and Tschudy 1970; Beattie et al. 1973; Bonkowsky et al. 1975; Menawat et al. 1979; Leonhardt 1981; Kordač et al. 1989; Crimlisk 1997. Cholesterol in porphyria: Taddeini et al. 1964; Lees et al. 1970; Stein and Tschudy 1970; York 1972, pp. 61-62; Whitelaw 1974; Kaplan and Lewis 1986; Shiue et al. 1989; Fernández-Miranda et al. 2000; Blom 2011; Park et al. 2011. 

59. Chin et al. 1999; Newman et al. 2001; Coughlin et al. 2004; Robinson et al. 2004; Li et al. 2005; McArdle et al. 2006; Li et al. 2007; Savransky et al. 2007; Steiropoulous et al. 2007; Gozal et al. 2008; Dorkova et al. 2008; Lefebvre et al. 2008; Çuhadaroğlu et al. 2009; Drager et al. 2010; Nadeem et al. 2014. 

60. Behan et al. 1991; Wong et al. 1992; McCully et al. 1996; Myhill et al. 2009. 

61. Marazziti et al. 2001; Gardner et al. 2003; Fattal et al. 2007; Gardner and Boles 2008, 2011; Hroudová and Fišar 2011. 

62. See note 34. Also Ammari et al. 2008. 

63. Goldberg et al. 1985; Kordač et al. 1989; Herrick et al. 1990; Moore 1990; Thunell 2000. 

64. Sanders et al. 1984. 

65. Haldane 1922, pp. 56-57; Haldane and Priestley 1935, pp. 139-41. 

66. Numbers of residential electric customers for 1930-1931 were obtained from National Electric Light Association, Statistical Bulletin nos. 7 and 8, and for 1939-1940 from Edison Electric Institute, Statistical Bulletin nos. 7 and 8. For states east of the 100th meridian, “Farm Service” customers (1930-1931) or “Rural Rate” customers (1939-1940) were added to “Residential or Domestic” customers to get the true residential count, as recommended in the Statistical Bulletins. “Farm” and “Rural Rate” service in the west referred mainly to commercial customers, usually large irrigation systems. The same terms, east of the 100th meridian, were used for residential service on distinct rural rates. A discrepancy in the number of farm households in Utah was resolved by consulting Rural Electrification in Utah, published in 1940 by the Rural Electrification Administration. 

67. Johnson 1868. 

68. Koller 1962. 

69. Parikh et al. 2009. 

70. McGovern et al. 2001. 

71. Roger et al. 2004. 

72. Ghali et al. 1990. 

73. Fang et al. 2008. 

74. McCullough et al. 2002. 

75. Cutler et al. 1997; Martin et al. 2009. 

76. Zheng et al. 2005. 

77. National Center for Health Statistics 1999, 2006. 

78. Arora et al. 2019.


