Monday, January 11, 2021

Part 6 : The Invisible Rainbow...Suspended Animation... You mean you can hear electricity?

The Invisible Rainbow

A History of Electricity and Life 

by Arthur Firstenberg


We admonish mankind to observe and distinguish between what conduces to health, and what to a long life; for some things, though they exhilarate the spirits, strengthen the faculties, and prevent diseases, are yet destructive to life, and, without sickness, bring on a wasting old age; while there are others which prolong life and prevent decay, though not to be used without danger to health. ...SIR FRANCIS BACON 

Every animal has allotted to it a constant number of heartbeats per lifetime. If it lives fast and furiously like a shrew or a mouse, it will use up its quota of heartbeats in a much shorter time than if its metabolic personality is a more temperate one. ...DONALD R. GRIFFIN, Listening in the Dark 

14

Suspended Animation

IN 1880, GEORGE MILLER BEARD wrote his classic medical book on neurasthenia, titled A Practical Treatise on Nervous Exhaustion. He made an intriguing observation: “Although these difficulties are not directly fatal, and so do not appear in the mortality tables; although, on the contrary, they may tend to prolong life and to protect the system against febrile and inflammatory disease, yet the amount of suffering that they cause is enormous.” In American Nervousness: Its Causes and Consequences, written a year later for the general public, he reiterated the paradox: “Side by side with this increase of nervousness, and partly as a result of it, longevity has increased.” Along with migraine headaches, ringing in the ears, mental irritability, insomnia, fatigue, digestive disorders, dehydration, muscle and joint pains, heart palpitations, allergies, itching, intolerance of foods and medications—in addition to this general degradation in the public health, the world was witnessing an increase in the human lifespan. Those who were suffering the most tended to look young for their age and to live longer than average. 

At the end of American Nervousness appears a map showing the approximate geographic reach of neurasthenia. It was the same as the reach of railroads and telegraphs, being most prevalent in the northeast where the electric tangle was densest. “The telegraph is a cause of nervousness the potency of which is little understood,” wrote Beard. “Within but thirty years the telegraphs of the world have grown to half a million miles of line, and over a million miles of wire—or more than forty times the circuit of the globe.” Beard also noticed that a rare disease called diabetes was much more common among neurasthenes than in the general population.

What Beard—an electrotherapist and a friend of Thomas Edison, who was shortly to be diagnosed with diabetes—did not figure out was that the growing cloud of electromagnetic energy, which permeated air, water, and soil wherever telegraph lines advanced, had something to do with the growing numbers of neurasthenes and diabetics that sought his ministrations. He was astute enough, however, to make the connection between longevity and disease, and to understand that the modern expansion of lifespan did not necessarily mean better health or a more excellent life. The mysterious extension of years among individuals who were the sickest was in fact a warning that something was terribly wrong. 

Fasting and an austere diet have been recommended since antiquity for the rejuvenation of the body. The prolongation of life, said Francis Bacon, should be one of the purposes of medicine, along with the preservation of health and the cure of diseases. Sometimes, he added, one must make a choice: “The same things which conduce to health do not always conduce to longevity.” But he laid down one sure rule, for those who wished to follow it, that furthered all three goals of the physician: “A spare and almost Pythagorean diet, such as is prescribed by the stricter orders of monastic life or the institution of hermits, which regard want and penury as their rule, produces longevity.” 

Three hundred years later Bacon’s third arm of medicine was still sorely neglected. “What must one do, or rather what must one not do to attain the extreme limits of age?” asked Jean Finot in 1906. “What, after all, are the boundaries of life? These two series of questions together constitute a special science, gerocomy. It exists in name only.” Observing the animal world, Finot saw that the length of adolescence had something to do with the length of life. A guinea pig’s period of growth endured seven months; that of a rabbit, one year; of a lion, four years; of a camel, eight years; of a man, twenty years. Human initiative was misguided, said Finot. What conduces to health and vigor does not necessarily prolong life. “The education and instruction given to children,” he wrote, “are in flagrant contradiction to this law of gerocomy. All of our efforts tend towards the rapid advancement of physical and intellectual maturity.” To prolong life, it would be necessary to do just the opposite. And one method, he suggested, was to restrict one’s diet. 

In the early years of the twentieth century, Russell Chittenden at Yale University, who is often called the father of American biochemistry, experimented on himself and on volunteers at Yale. Over the course of two months he gradually eliminated breakfast, settling into a pattern that consisted of a substantial midday meal and a light supper at night. Although he was eating less than 40 grams of protein daily, one-third the amount then recommended by nutritionists, and only 2,000 calories, he not only suffered no ill effects but the rheumatism in his knee disappeared, as did his migraine headaches and attacks of indigestion. Rowing a boat left him with much less fatigue and muscle soreness than before. His weight dropped to 125 pounds and remained there. After one year on this diet, with funding from the Carnegie Institution and the National Academy of Sciences, he formally experimented on volunteers. They were: five professors and instructors at Yale; thirteen volunteers from the Hospital Corps of the Army; and eight students, “all thoroughly trained athletes, some with exceptional records in athletic events.” He restricted them to about 2,000 calories and no more than fifty grams of protein per day. Without exception his subjects’ health was as good as before or better at the end of half a year, with gains in strength, endurance, and well-being. 

While Chittenden proved nothing about lifespan, the ancient recommendations have since been thoroughly subjected to the scientific method and, in all species of animals from one-celled organisms on up to primates, proven accurate. Provided an animal receives the minimal nutrients necessary to maintain health, a severe reduction in calories will prolong life. And there is no other method known that will reliably do so. 

A severe restriction in calories will increase the lifespan of rodents by 60 percent, routinely producing four- and five-year-old mice and rats. 

Calorie-restricted rats are not senile. Quite the opposite: they look younger and are more vigorous than other animals their age. If they are female, they reach sexual maturity very late and produce litters at impossibly old ages.

The annual fish Cynolebias adloffi lived three times as long when restricted in food.3 A wild population of brook trout doubled its lifespan, some trout living twenty-four years when food was scarce.

Spiders fed three flies a week instead of eight lived an average of 139 days instead of 30 days.5 Underfed water fleas lived 60 days instead of 46 days.6 Nematodes, a type of worm, more than doubled their lifespan.7 The mollusc Patella vulgata lives two and a half years when food is abundant, and up to sixteen years when it is not.8 

Cows given half the normal amount of feed each winter lived twenty months longer. Their breathing rate was also one third lower, and their heart rate ten beats per minute less.9 

During a twenty-five-year-long study at the Wisconsin National Primate Research Center, the death rate of fully fed adult rhesus monkeys from age-related causes was three times the death rate of calorie-restricted animals. When the study ended in 2013, twice as many diet-restricted monkeys as fully fed monkeys were still alive.10 

Calorie restriction works whether it is lifelong or only during a portion of life, and whether it is begun early, during adulthood, or relatively late in life. The longer the period of restriction, the longer the prolongation of life. Calorie restriction prevents age-related diseases. It delays or prevents heart disease and kidney disease, and drastically decreases the cancer rate: in one study, rats that were fed one fifth as much food had only seven percent as many tumors.11 In rhesus monkeys it reduces the cancer rate by half, heart disease by half, prevents diabetes, prevents atrophy of the brain, and reduces the incidence of endometriosis, fibrosis, amyloidosis, ulcers, cataracts, and kidney failure.12 Older diet-restricted monkeys have less wrinkled skin and fewer age spots, and their hair is less gray. 

A natural human experiment exists. In 1977, there lived 888 people over one hundred years old in Japan, the greatest concentration of whom lived on the southwestern coast and a few islands. The percentage of centenarians on Okinawa was the highest in Japan, forty times higher than in the northeastern prefectures. Yasuo Kagawa, professor of biochemistry at Jichi Medical School, explained: “People in areas of longevity have a lower caloric intake and smaller physique than those in the rest of Japan.” The daily diet of school boys and girls in Okinawa was about 60 percent of recommended caloric intake. 

The reason calorie restriction works is controversial, but the simplest explanation is that it slows metabolism. While the aging process is not fully understood, anything that slows the metabolism of cells must slow the aging process. The idea that we are each allotted a fixed number of heartbeats is ancient. In modern times, Max Rubner at the University of Berlin, in 1908, proposed a variation on this idea: instead of a fixed number of heartbeats, our cells are allotted a fixed amount of energy. The slower an animal’s metabolism, the longer it will live. Most mammals, Rubner calculated, use about 200 kilocalories per gram of body weight during their lifetime. For humans, assuming a lifespan of ninety years, the value is about 800. If an individual is able to delay the use of that amount of energy, his or her life will be correspondingly longer. Raymond Pearl, at The Johns Hopkins University, published a book along these lines in 1928 titled The Rate of Living. 
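Rubner’s arithmetic can be checked in a few lines. The quota figures below (200 kilocalories per gram for most mammals, about 800 for a ninety-year human life) are the ones quoted in the text; the derived daily rate is a back-calculation for illustration, not a value the book gives.

```python
# Rough check of Rubner's "fixed energy quota" idea, using the figures
# quoted in the text. The per-gram daily rate is derived from them for
# illustration only.

def lifespan_years(quota_kcal_per_g, daily_kcal_per_g):
    """Years until the lifetime energy quota per gram is used up."""
    return quota_kcal_per_g / daily_kcal_per_g / 365

# A 90-year life on an 800 kcal/g quota implies an average expenditure of
# 800 / (90 * 365) kcal per gram of body weight per day.
human_daily = 800 / (90 * 365)
print(round(human_daily, 3))                 # ≈ 0.024 kcal/g/day

# On this model, halving the per-gram rate of energy use doubles the lifespan:
print(lifespan_years(800, human_daily))      # 90.0 years
print(lifespan_years(800, human_daily / 2))  # 180.0 years
```

The calculation only restates the hypothesis, of course; it is the "rate of living" assumption itself, not the arithmetic, that was contested.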

During 1916 and 1917 Jacques Loeb and John Northrop, at The Rockefeller Institute, experimented on fruit flies. Since flies are cold-blooded, their metabolism can be slowed merely by lowering the ambient temperature. The average duration of life, from egg to death, was 21 days at a temperature of 30° C; 39 days at 25° C; 54 days at 20° C; 124 days at 15° C; and 178 days at 10° C. The rule that low temperatures prolong life applies to all cold-blooded animals. 
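The Loeb–Northrop figures are consistent with an ordinary temperature dependence of metabolism. A minimal sketch, assuming lifespan varies inversely with metabolic rate (an assumption of this sketch, not a claim the text makes), estimates the lifespan ratio per 10 °C drop:

```python
# (temperature in °C, mean lifespan in days), from the figures quoted above.
data = {30: 21, 25: 39, 20: 54, 15: 124, 10: 178}

# If lifespan is inversely proportional to metabolic rate, the ratio of
# lifespans per 10 °C drop estimates the Q10 of metabolism.
for t in (30, 25, 20):
    q10 = data[t - 10] / data[t]
    print(f"{t}->{t - 10} °C: lifespan ratio ≈ {q10:.2f}")
```

The three ratios come out between about 2.6 and 3.3, in the range typically cited for the temperature sensitivity of biological rates.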

Another common way animals reduce their metabolism is by hibernating. Hibernating species of bats, for example, live on average six years longer than species that don’t. And bats live far longer than other animals their size because, in effect, they hibernate on a daily basis. Bats are active, on the wing hunting for dinner, for only a few hours each night. They sleep the rest of the time, and sleeping bats are not warm-blooded. “It is sometimes possible in the laboratory to keep a rectal thermocouple in place while a bat settles down for a nap,” wrote bat expert Donald Griffin, “and in one such case the body temperature fell in an hour from 40° when the bat was active to 1°, which was almost exactly the temperature of the air in which it was resting.”13 This explains why bats weighing only a quarter of an ounce can live more than thirty years, while no laboratory mouse has ever lived more than five. 

Calorie restriction, the only method of prolonging life that works for all animals—warm-blooded, cold-blooded, hibernators, and non-hibernators—obviously slows metabolism, as measured by how much oxygen an animal consumes. Food-restricted animals always use less oxygen. A controversy arose among gerontologists because food restricted animals also lose weight, and oxygen use per unit weight does not necessarily decline. But it declines where it counts. In humans, the internal organs, despite comprising less than 10 percent of our weight, are responsible for about 70 percent of our resting energy use. And it is our internal organs, not our fat or muscle tissues, that determine how long we will live.14 

As researchers into the aging process have emphasized, the engine of our lives is the electron transport system in the mitochondria of our cells.15 It is there that the oxygen we breathe and the food we eat combine, at a speed that determines our rate of living and our lifespan. That speed is in turn determined by our body temperature, and by the amount of food we digest. 

But there is a third way to slow our rate of living: by poisoning the electron transport chain. One way to do this is to expose it to an electromagnetic field. And since the 1840s, at a gradual but accelerating rate, we have immersed our world, and all biology, in a thickening fog of such fields that exert forces on the electrons in our mitochondria and slow them down. Unlike calorie restriction, this does not promote health. It starves our cells not of calories, but of oxygen. Resting metabolic rate does not change, but maximum metabolism does. No cell—no brain cell, no heart cell, no muscle cell—can work to its capacity. Where calorie restriction prevents cancer, diabetes, and heart disease, electromagnetic fields promote cancer, diabetes, and heart disease. Where calorie restriction promotes well-being, oxygen deprivation promotes headaches, fatigue, heart palpitations, “brain fog,” and muscular aches and pains. But both will slow overall metabolism and prolong life. 

Industrial electricity in any of its forms always injures. If the injury is not too severe, it also prolongs life. 

In an experiment funded by the Atomic Energy Commission, exposure to simple electric shock for one hour each day throughout adulthood increased the average lifespan of mice by 62 days.16 

Radio waves also increase lifespan. 

In the late 1960s, a proton accelerator was being built at Los Alamos National Laboratory that was going to use radio waves at a frequency of 800 MHz. As a precaution, forty-eight mice were enrolled in an experiment to see if this radiation might be dangerous for workers in the facility. Twenty-four of the mice were irradiated at a power level of 43 milliwatts per square centimeter for two hours a day, five days a week, for three years. This is a huge exposure that is powerful enough to produce internal burns. And indeed four of the mice died from burn injuries. A fifth mouse became so obese that it could not be extracted from the exposure compartment and it died there. But the mice that weren’t directly killed by the experiment lived a long time—on average, 19 days longer than the unexposed mice.17 

In the late 1950s, Charles Süsskind at the University of California, Berkeley received funding from the Air Force to determine the lethal dose of microwave radiation in mice, and to investigate its effects on growth and longevity. At that time, the Air Force thought that 100 milliwatts per square centimeter was a safe dose; Süsskind soon found out that it was not. It killed most mice within nine minutes. So after that, Süsskind exposed mice for only four and a half minutes at a time. He irradiated one hundred mice for 59 weeks, five days per week, for four and a half minutes a day at a power density of 109 milliwatts per square centimeter. Some of the irradiated mice that subsequently died had developed extraordinarily high white blood cell counts, enlarged lymphoid tissue, and enormous liver abscesses. Testicular degeneration occurred in 40 percent of the irradiated mice, and 35 percent developed leukemia. The unirradiated mice, although they were much healthier, did not live as long: after 15 months, half the control mice were dead, but only 36 percent of the irradiated ones. 

From 1980 to 1982, Chung-Kwang Chou and Arthur William Guy led a famous experiment at the University of Washington. They had a contract with the United States Air Force to investigate the safety of the early warning radar stations recently installed at Beale Air Force Base in California, and on Cape Cod in Massachusetts. Known as PAVE PAWS, these were the most powerful radar stations in the world, emitting a peak effective radiated power of about three billion watts and irradiating millions of Americans. The University of Washington team approximated the PAVE PAWS signals at a “very low” level, irradiating one hundred rats 21.5 hours a day, 7 days a week, for 25 months. The Specific Absorption Rate—approximately that of the average cell phone today—was 0.4 watts per kilogram. During the two years of the experiment the exposed animals developed four times as many malignant tumors as the control animals. But they lived, on average, 25 days longer. 

Recently, gerontologists at the University of Illinois exposed cell cultures of mouse fibroblasts to radio waves (50 MHz, 0.5 watts) for either 0, 5, 15, or 30 minutes at a time, twice a week. The treatments lowered the mortality rate of the cells. The greater the exposure time, the lower the mortality, so that the 30-minute exposure reduced cell death by one-third after seven days, and increased their average lifespan from 118 days to 138 days.18 

Even ionizing radiation—X-rays and gamma rays—will prolong life if not too intense. Everything from Paramecia to codling moths to rats and mice to human embryo cells has had its average and/or maximal lifespan increased by exposure to ionizing radiation. Even wild chipmunks have been captured, irradiated, and released—and had their average lifespans thereby extended.19 Rajindar Sohal and Robert Allen, who irradiated house flies at Southern Methodist University, discovered that at moderate doses, an increase in lifespan occurred only if the flies were placed in compartments small enough so that they could not fly. They concluded that radiation always produces two opposite kinds of effects: injurious effects that shorten the lifespan, and a reduction in basal metabolic rate that lengthens the lifespan. If the dose of radiation is low enough, the net effect is a lengthening of life despite obvious injuries. 

Loren Carlson and Betty Jackson at the University of Washington School of Medicine reported that rats exposed daily to moderate doses of gamma rays for a year had their lives extended, on average, by 50 percent, but suffered a significant increase in tumors. Their oxygen consumption was reduced by one-third. 

Egon Lorenz, at the National Cancer Institute, exposed mice to gamma rays—one-tenth of a roentgen per eight-hour day—beginning at one month of age and for the rest of their lives. The irradiated females lived just as long, and the irradiated males one hundred days longer, than the unirradiated animals. But the irradiated mice developed many more lymphomas, leukemias, and lung, breast, ovarian, and other types of cancers. 

Even extremely low doses of radiation will both injure and extend lifespan. Mice exposed to only 7 centigrays per year of gamma radiation—only 20 times higher than background radiation—had their lives extended by an average of 125 days.20 Human fibroblasts, exposed in cell culture once for only six hours to the same level of gamma rays that is received by astronauts in space, or during certain medical exams, lived longer than unexposed cells.21 Human embryo cells exposed to very low dose X-rays for ten hours a day had their lifespans increased by 14 to 35 percent, although most of the cells also suffered several kinds of damage to their chromosomes.22 

Modern medicine can take some but not all of the credit for the modern increase in the average human lifespan. For that increase began a century before the discovery of antibiotics, in a time when doctors still bled their patients and dosed them with medicines containing lead, mercury, and arsenic. But medicine can take none of the credit for the modern extension of the maximum human life span. For medicine still does not pretend to understand the aging process, and only a tiny minority of doctors are even beginning to try to do anything to reverse aging. Yet the maximum age at death, worldwide, has been steadily rising. 

Sweden has the most accurate and longest continuous records on the extreme limits of human age of any country, dating back to 1861. They reveal that the recorded maximum age at death was 100.5 years in 1861, that it rose gradually but steadily until 1969, when it was 105.5 years, and that it has risen more than twice as fast since then, reaching 109 years by the turn of the twenty-first century. 
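The claim that the rise accelerated after 1969 can be checked from the three figures quoted; for this sketch, "the turn of the twenty-first century" is taken as the year 2000, an assumption the text does not state exactly.

```python
# Rate of increase in Sweden's recorded maximum age at death, using the
# three data points given in the text (year: maximum age at death).
points = {1861: 100.5, 1969: 105.5, 2000: 109.0}

rate_before = (points[1969] - points[1861]) / (1969 - 1861)
rate_after = (points[2000] - points[1969]) / (2000 - 1969)

print(f"before 1969: {rate_before:.3f} years per year")  # ≈ 0.046
print(f"after 1969:  {rate_after:.3f} years per year")   # ≈ 0.113
print(f"ratio: {rate_after / rate_before:.1f}")          # ≈ 2.4
```

On these figures the post-1969 rate is roughly two and a half times the earlier one, which matches the text's "more than twice as fast."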

In 1969, the trends in both Swedish longevity and Swedish cancer accelerated. It was the year color TV and UHF-TV were introduced into the country (see chapter 13). 

In 1994, Väinö Kannisto, former United Nations advisor on demographic and social statistics, showed that the number of people living more than one hundred years was increasing spectacularly in the twenty-eight countries for which good data existed. The number of centenarians in Sweden had risen from 46 in 1950 to 579 in 1990. During the same period, the number of centenarians had risen from 17 to 325 in Denmark; from 4 to 141 in Finland; from 265 to 4,042 in England and Wales; from 198 to 3,853 in France; from 53 to 2,528 in West Germany; from 104 to 2,047 in Italy; from 126 to 3,126 in Japan; from 14 to 196 in New Zealand. The number of centenarians in all these countries, roughly doubling every ten years, had far outraced the increase in population. 
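Kannisto's "roughly doubling every ten years" can be checked against the 1950 and 1990 counts quoted above, assuming steady exponential growth over those forty years:

```python
import math

# Centenarian counts (1950, 1990) from the figures quoted in the text.
counts = {
    "Sweden": (46, 579), "Denmark": (17, 325), "Finland": (4, 141),
    "England and Wales": (265, 4042), "France": (198, 3853),
    "West Germany": (53, 2528), "Italy": (104, 2047),
    "Japan": (126, 3126), "New Zealand": (14, 196),
}

# Implied doubling time under steady exponential growth:
# t_double = 40 years * ln(2) / ln(count_1990 / count_1950)
for country, (c1950, c1990) in counts.items():
    doubling = 40 * math.log(2) / math.log(c1990 / c1950)
    print(f"{country}: doubling time ≈ {doubling:.1f} years")
```

Every country on the list comes out with a doubling time between about seven and eleven years, consistent with the remark.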

Even in Okinawa, long known for its longevity, there lived only a single person over one hundred years old as late as 1960. In Japan as a whole, noted Kagawa in 1978, the number of male centenarians had quadrupled in only 25 years, while the number of female centenarians had sextupled. And yet he observed, in middle-aged Japanese, almost a doubling in the rates of breast cancer and colon cancer, a tripling of lung cancer, a 40 percent rise in heart disease, and an 80 percent rise in diabetes: “extended life expectancy but increased diseases.” 

The explanation for both phenomena is electricity—electricity that travels through wires as well as earth, that radiates through air as well as bones. We are all, to an extent that has been intensifying for one hundred and sixty years, in a mild state of suspended animation. We live longer, but are less alive, than our ancestors.

15 

You mean you can hear electricity?

IN 1962, A LOCAL WOMAN contacted the University of California, Santa Barbara for help tracking down a mysterious noise. She had moved into a newly built home in a quiet neighborhood, and this noise, whose source she could not find, accompanied her wherever she went like an unwanted ghost. It was impairing her health, keeping her awake, and forcing her, in desperation, to abandon her home for long periods of time just to get relief. In response to her plea for help, an engineer showed up at her house with a load of electronic equipment. 

Clarence Wieske, who was with the Laboratory for the Study of Sensory Systems in Tucson, a military contractor working on the interface between man and machine, happened to be involved with a project at the university in Santa Barbara when the woman’s call came. His initial intention was to look for electric fields on her property that might be setting some metal object into vibration, creating the noise that was bothering her. He was startled by what he found.

His search coil, as he anticipated, did pick up unusually strong harmonic frequencies. They emanated not only from her electric wires, but also from her telephone wires, gas pipes, water pipes, and even the metal in her heating system. But his stethoscope could find no audible noise being emitted by any of these items. He therefore tried what he thought was a far-fetched experiment: he attached a tape recorder to his search coil, which recorded the electric frequency patterns and translated them into sounds, and then played the recording for the woman. When she put on the headphones and listened to the tape, she recognized the sounds as identical to the noise that was tormenting her. Wieske then took the experiment one step farther. He disconnected the headphones and played the tape directly back into his search coil. The woman said instantly, “You mean you cannot hear that?” She was hearing the same thing again directly from the search coil although it emitted only an electromagnetic field and no actual noise. 

In a further experiment Wieske, without telling the woman, connected a low-power frequency generator to the water pipe about one hundred feet from her house. She remarked that there was a peculiar noise “like a barking dog.” When Wieske turned on the pickup equipment in her house and put on the headphones, he found that she was correct. He heard a sound like a barking dog! 

These experiments and others done at her home and at the university left no doubt that the woman was hearing electricity—and that the noise was not coming from her dental fillings. Wieske then set about to try to alleviate her problem. Electrically grounding her refrigerator, freezer case, door chimes, and other appliances reduced the noise level a bit but did not get rid of it. One day, during a power outage, she telephoned Wieske, ecstatic. The noise had stopped! But it returned as soon as the power came back on. Therefore Wieske contacted all the utility companies. With their cooperation, he put filters on her phone line, an isolation transformer on her electric line, and sections of non-conducting pipe into her water line and her gas line. These time-consuming, expensive measures prevented unwanted electric frequencies originating elsewhere in the neighborhood from being conducted over these paths. Finally, the noise was reduced to an endurable level and the woman could inhabit her home. 

After investigating a number of similar cases, Wieske predicted that with the continued electrification of society, complaints like hers would one day be common. His article about his experiences, published in Biomedical Sciences Instrumentation in 1963, concluded with a technical description of human hearing, including all of the places within the ear where electromagnetic fields might cause electric currents to flow. He speculated as to the reasons some people can hear them and not others: “If the nerve for some reason in some individuals is not as well insulated from these currents as in the normal individual, or if the cochlea is not as well insulated from these currents in some individuals, perhaps this could make them sensitive to these electrical fields.” 

Wieske’s prediction has come to pass. Today companies serving the population that can feel and hear electromagnetic fields form a significant cottage industry in every part of the United States. One organization, the International Institute for Building Biology and Ecology, lists sixty consultants, scattered throughout the United States and Canada, that it has trained in the methods of detecting and mitigating residential electromagnetic pollution. 

About eighty million Americans today have “ringing in the ears” to some degree. Some hear their sounds intermittently. Some hear them only when everything else is quiet. But for increasingly many, the sounds are so loud all of the time that they cannot sleep or function. Most of these people do not have tinnitus, which is an internally generated sound, often in one ear, usually accompanied by some degree of hearing loss. Most people today who have “ringing in the ears” hear it equally in both ears, have perfect hearing, and are hearing a pitch at the very top of their hearing range. They are hearing the electricity around them, and it is getting louder all the time. The clues to what is happening were planted over two centuries ago. 

French electrotherapist Jean Baptiste Le Roy, in 1755, was apparently the first to elicit an auditory response to static electricity. He was treating a man blind from cataracts by winding wire around the man’s head and giving him twelve shocks from a Leyden jar. The man reported hearing the explosion of “twelve pieces of cannon.” 

Experimentation began in earnest when Alessandro Volta invented the electric battery in 1800. The metals he first used, silver and zinc, with salt water for an electrolyte, generated about a volt per pair—less when he stacked them up in his original “pile.” Applying a single pair of metals to his own tongue produced either a sour or sharp taste, depending on the direction of the current. Applying a piece of silver to his eye, and touching it with a piece of zinc held in his moistened hand, produced a flash of light—a flash, he said, that was “much more beautiful” if he placed the second piece of metal, or both pieces, inside his mouth. 

Stimulating the sense of hearing proved more difficult. Volta tried in vain to elicit a noise with only a single pair of metal plates. But with thirty pairs, roughly equivalent to a twenty-volt battery, he succeeded. “I introduced, a considerable way into both ears,” he wrote, “two probes or metallic rods with their ends rounded, and I made them to communicate immediately with both extremities of the apparatus. At the moment when the circle was thus completed I received a shock in the head, and some moments after (the communication continuing without any interruption) I began to hear a sound, or rather noise, in the ear, which I cannot well define: it was a kind of crackling with shocks, as if some paste or tenacious matter had been boiling.” Being afraid of permanent injury to his brain, Volta did not repeat the attempt. 

But hundreds of other people did. After this report by one of the most famous men in the world, everyone wanted to see if they could hear electricity. Carl Johann Grapengiesser, a physician, was careful to use only small currents on his patients, and he was a much more careful observer than Volta. His subjects varied widely in their sensitivity and in the sounds they heard. “The noises, in respect of their quality and strength, are very variable,” he wrote. “Most often, it seems to the patient that he hears the hissing of a boiling teakettle; another hears ringing and bell-pealing, a third thinks that outside a storm wind blows; to a fourth it seems that in each ear a nightingale sings most lustily.”1 A few of his patients heard the electricity generated by only a single pair of metals applied to blister plaster wounds underneath their ears. 

Physicist Johann Ritter was not afraid of currents much greater than those risked by Volta. Using batteries containing 100, 200, and more pairs of metals, he was able to hear a pure musical tone that was approximately  above middle c, and that persisted as long as the current flowed through his ears. 

Many were the doctors and scientists who, in the heady years following Volta’s gift to the world of its first reliable source of steady electricity, stimulated the acoustic nerve with greater or lesser quantities of current. The following list, limited to German scientists who published their research, was compiled by Rudolf Brenner in 1868: 

Carl Johann Christian Grapengiesser (Attempts to Use Galvanism in the Healing of Some Diseases, 1801) 

Johann Wilhelm Ritter (Contributions to the Recent Knowledge of Galvanism and the Results of Research, 1802) 

Friedrich Ludwig Augustin (Attempt at a Complete Systematic History of Galvanic Electricity and its Medical Use, 1801; On Galvanism and its Medical Use, 1801) 

Johann Friedrich Alexander Merzdorff (Treatment of Tinnitus with the Galvanic Current, 1801) 

Carl Eduard Flies (Experiments of Dr. Flies, 1801) 

Christoph Friedrich Hellwag (Experiments on the Healing Powers of Galvanism, and Observations on its Chemical and Physiological Effects, 1802) 

Christian August Struve (System of Medical Electricity with Attention to Galvanism, 1802) 

Christian Heinrich Wolke (Report on Deaf and Dumb Blessed by the Galvani-Voltaic Hearing-Giving Art at Jever and on Sprenger’s Method of Treating Them with Voltaic Electricity, 1802) 

Johann Justus Anton Sprenger (Method of Using Galvani-Voltaic Metal Electricity as a Remedy for Deafness and Hearing Loss, 1802) 

Franz Heinrich Martens (Complete Instructions on the Therapeutic Use of Galvanism; Together with a History of This Remedy, 1803) 

Ironically, the man who laid the foundation for such research—Alessandro Volta—was also the man whose mechanistic world view has so dominated scientific thinking for more than two centuries that it has not been possible to understand the results of these experiments. They have been regarded as little more than parlor tricks, when they have been remembered at all. For Volta, we recall, pronounced that electricity and life are distinct and that there are no electric currents flowing in the body. To this day, as a result, in the teaching of biology, including the biology of the ear, chemistry is king and electricity is omitted. 

By Brenner’s time the work of all these early scientists had already been forgotten. A physician who specialized in diseases of the ear, he described this state of affairs in terms that could just as easily apply to today: “Nothing can be more instructive for the history of scientific development than the fate of the old experiments on galvanic stimulation of the acoustic nerve. Among contemporary researchers who deny the possibility of such stimulation are names of the very best repute. One must therefore ask: do these men really believe that Volta, Ritter, and the other old galvanists only imagined the tones and noises they heard?” Brenner’s goal was to establish, once and for all, not only that electricity could be heard, but exactly how, why, and to what degree this occurs. “It is not established if, and it is unknown how the acoustic nerve reacts to the influence of electrical current,” he wrote.2 The results of his experiments filled a 264-page book. His apparatus contained 20 zinc-copper Daniell cells, each producing a maximum of about one volt, connected to a rheostat that could be adjusted to any of 120 positions. Any desired number of cells could be inserted into the circuit at the turn of a dial. He performed 47 different kinds of experiments on a large number of individuals. 

The average person, with 7 volts of direct current coursing through his or her ear canal, heard a clear metallic sound resembling a small dinner bell. The range of sensitivity of normal human beings, however, was enormous. Some heard nothing at all, even when all twenty Daniell cells were in the circuit. For others, who were deemed to have “acoustic nerve hyperaesthesia,” the sound from only one cell was intense. Some heard nothing unless their ear canal was filled with salt water, which helped conduct the electricity. Others, their ear canals dry, heard the ringing bell when the knob-shaped electrode was simply placed on the cheek in front of the ear, or on the mastoid process, the bony protrusion behind the ear.

The direction of the current was crucial. The sound—unless the person had “hyperaesthesia”— was heard only when the negative, not the positive, electrode was in the ear. With minimal current the sound typically resembled the “buzzing of a fly.” This was elevated to “distant wagon roll,” then to “cannon roll,” “striking of a metal plate,” and finally the “ringing of a silver dinner bell,” as the current was gradually raised. The greater the current, the purer the tone, and the greater the resemblance to a bell. When Brenner asked his subjects to sing the tone they heard, some, agreeing with Ritter’s report of 1802, heard a g above middle c. Others disagreed. But although the threshold of perception varied enormously, and the quality and exact pitch were different for everyone, each individual always heard the same thing: the identical sound and pitch, at the same threshold, whenever he or she was tested, even at intervals years apart. 

After experimenting with different placements of the second, non-ear electrode on the skull, neck, torso, arms, and legs, Brenner became convinced that a sound was heard only when the inner ear was in the path of the current, and that direct stimulation of the acoustic nerve was the cause of the sensation of sound. 

American physician Sinclair Tousey, one of the last electro-therapists of the old school, wrote about electricity and the ear in the third edition of his textbook on Medical Electricity, published in 1921. Brenner’s results with direct current, completely forgotten today, were still at that time taught, accepted, and verified by every electrical practitioner. Sounds were normally caused by cathodic (negative) stimulation of the auditory nerve. The range of sensitivity was extraordinary. “Many individuals,” wrote Tousey, closely echoing Brenner’s words, “give no reactions whatever.” In others, the sound was so loud that the person was deemed to have “a distinct hyperesthesia of the auditory nerve.”3 

With the disappearance of the electro-therapist’s art and the dwindling of opportunities for the average physician to become familiar with the auditory response to electricity, the old knowledge was again almost forgotten. 

Then, around 1925, amateur radio enthusiasts thought they had found a way to listen to the radio without a loudspeaker, by directly stimulating the acoustic nerve. “Thus, even deaf persons whose eardrums no longer function properly, but whose nerve centers are intact, can hear radio,” wrote Gustav Eichhorn. The device he patented, however—a kind of flat electrode held against the ear— was soon dismissed as being nothing more than a “condenser receiver.” Apparently, the surfaces of the skin and the electrode, vibrating, took the place of a loudspeaker, creating an ordinary sound that reached the inner ear by bone conduction.

Nevertheless, the experiments of the radio engineers spawned a spate of genuine efforts by biologists to stimulate the inner ear with alternating current. This was typically done after the manner of Brenner—by inserting one electrode in the ear canal, which was first filled with salt water, and completing the circuit with a second electrode on the back of the forearm or hand. The subjects most often heard a tone that corresponded in pitch to the frequency of the applied current. The sensitivity of the subjects, as before, varied tremendously. In experiments done in Leningrad, the most sensitive individual, when tested with a current of 1,000 cycles per second, heard a sound as soon as the voltage exceeded a fifth of a volt; the least sensitive subject required six volts—a thirty-fold difference in sensitivity. There was nothing wrong with the hearing of any of these people. The variations in their ability to hear electricity bore no relation to the subjects’ ability to hear ordinary sound.5 

In 1936, Stanley Smith Stevens, an experimental psychologist at Harvard University, gave the hearing phenomenon a new name: “electrophonic hearing.” Four years later, at his newly-created Psycho-Acoustics Laboratory, he proposed three different mechanisms of hearing by electrical stimulation. Most people with normal hearing, when stimulated by an electrode in their ear, heard a pitch that was exactly one octave higher than the frequency of the applied current. However, if a negative DC voltage was applied at the same time, they heard the fundamental frequency as well. His knowledge of physics led Stevens to conclude that the ear was responding like a condenser receiver, with the ear drum and the opposite wall of the middle ear being the vibrating “plates” of that condenser. 

People without ear drums, however, heard either the fundamental frequency or a “buzzing” noise, or both. None heard the higher octave. And as Brenner had also reported, eardrumless ears were much more sensitive to electricity than normal ears. One of Stevens’ subjects heard a pure tone when stimulated with only one-twentieth of a volt. Stevens proposed that the hearing of the fundamental frequency was caused by direct stimulation of the hair cells of the inner ear. For those that heard a buzzing sound, he proposed that the auditory nerve was being stimulated directly. 

Thus, by 1940, three different parts of the ear were being proposed as capable of turning electricity into sound: the middle ear, the hair cells of the inner ear, and the auditory nerve. All three mechanisms appeared to operate throughout the normal hearing range of human beings. 

Stevens tried one additional experiment, whose significance he failed to appreciate, and which was not repeated by anyone else for two decades: he exposed subjects to a low-frequency, 100 kHz radio wave that was modulated at 400 Hz. Somehow the ear demodulated this signal and the person heard a 400-cycle pure tone, close to a g above middle c.6 

In 1960, biologist Allan Frey introduced yet another method of hearing electromagnetic energy, this time without placing electrodes on the body. A radar technician at Syracuse, New York swore to him that he could “hear” radar. Taking him at his word, Frey accompanied the man back to the Syracuse facility and found that he could hear it too. Frey was soon publishing papers about the effect, proving that even animals, and people with conduction deafness—but not nerve deafness— could hear brief pulses of microwave radiation at extremely low levels of average power. This phenomenon, known as “microwave hearing,” attracted a fair amount of publicity, but is probably not responsible for most of the sounds that torment so many people today. 

However, the 1960s would bring still more surprises. Renewed research into electrophonic hearing had both civilian and military goals. The medical community wanted to see if the deaf could be made to hear. The military community wanted to see if a new method of communication could be devised for soldiers or astronauts. 

In 1963, Gerhard Salomon and Arnold Starr, in Copenhagen, proved that the inner ear was far more sensitive to electrical energy than anyone had previously suspected. They placed electrodes directly adjacent to the cochlea in two patients who had had surgical reconstruction of their middle ear. One patient heard “clicks” or “cracklings” when stimulated by only three microamperes (millionths of an ampere) of direct current. The second patient required 35 microamperes to hear the same sound. As the current was gradually increased, the clicks changed to “walking on dry snow” or the rush of “blowing air.” Alternating current elicited pure tones whose pitch matched the applied frequency, but this required about a thousand times more current. 

Then the Electromagnetic Warfare and Communication Laboratory at Wright-Patterson Air Force Base in Ohio published a report written by Alan Bredon of Spacelabs, Inc., investigating both electrophonic hearing and microwave hearing for their potential use in space. The goal was to develop “an efficient, dual-purpose transducer which can be worn with an absolute minimum of discomfort during long missions in the confines of pressure clothing and aerospace environments.” Bredon found that electrophonic devices were unsuitable because the sound they produced was too faint to be useful in the noisy environment of aircraft or space vehicles. And microwave hearing was judged useless because it appeared to depend on short pulses of energy and did not produce continuous sound. But Patrick Flanagan’s Neurophone, which had been recently publicized in Life magazine,7 caught Bredon’s interest. This device, which Flanagan claimed to have invented at the age of 15, was a radio wave device almost identical to the one Eichhorn had patented in 1927, and appeared to work by skin vibration. It differed, however, in one crucial respect: Flanagan used a carrier frequency in the ultrasonic range, specified as being between 20,000 and 200,000 Hz. He had rediscovered the phenomenon that Stevens had briefly described back in 1937 and never followed up on. 

As a further result of the publicity surrounding Flanagan’s invention, Henry Puharich, a physician, and Joseph Lawrence, a dentist, under contract with the Air Force, investigated what they called “transdermal electrostimulation.” They delivered electromagnetic energy at ultrasonic frequencies via electrodes placed next to the ear. The audio signal, added to the ultrasonic carrier, was somehow demodulated by the body and heard like any other sound. Like Flanagan’s device, it appeared at first glance to work by skin vibration. However, several astonishing results were reported.

First, most people’s hearing range was significantly extended. Say the upper limit of a person’s hearing was normally 13,000 or 14,000 cycles per second. By using this device, they typically heard sounds as high in pitch as 18,000 cycles per second. Some even heard a true pitch as high as 25,000 cycles per second—5,000 cycles higher than most human beings are supposed to be able to hear. 

Second, the use of an ultrasonic carrier wave eliminated distortion. When the audio signal was fed directly into the electrodes without the carrier wave, speech could not be understood and music was unrecognizable. But when the speech or music was delivered only as a modulation to a high frequency carrier wave—in the same way that AM radio broadcasts deliver speech and music—the body, like a radio receiver, somehow decoded the signal and the person heard the speech or music perfectly without any distortion. The optimal carrier frequency, delivering the purest sound, was found to be between 30,000 and 40,000 Hz. 

Third, and most surprising, nine out of nine deaf people—even those with profound sensorineural deafness from birth—could hear sound in this way by transdermal stimulation. But the electrodes had to be pressed more firmly on the skin, and the deaf subject had to move the electrode around beneath or in front of the ear until he or she located the exact spot that stimulated hearing— as though the signal had to be focused on a target inside the head. The four subjects with residual hearing described the sensation as “sound,” not “vibration.” The two who were deaf from birth described it as something “new and intense.” The three who had acquired total deafness described it as hearing as they remembered it. 

When insulated electrodes were used, people with normal hearing responded to power levels as low as 100 microwatts (millionths of a watt). When bare metal electrodes were pressed directly against the skin, more current was required, but the deaf could hear as well or better, with this method, than hearing people. Once the proper skin pressure and location were found, the threshold electromagnetic stimulus was between one and ten milliwatts (thousandths of a watt) for both hearing and deaf people, while only the slightest increase in power brought the sound, as described by one of the deaf subjects, “from a comfortable level to one of great force.” 

Even more amazingly, ten out of ten profoundly deaf subjects, who had never heard speech before, were able to understand words, after very brief training, when delivered in this manner. And patients who had lesser sensorineural hearing loss, who could identify only 40 to 50 percent of words spoken through the air, scored 90 percent or better by transdermal stimulation, without training. 
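The amplitude-modulation scheme the researchers describe—speech riding on a 30,000–40,000 Hz carrier, demodulated somewhere in the body—can be illustrated with a short numerical sketch. This is only a generic AM modulator and envelope detector in Python; the 35 kHz carrier is one value from the reported optimal range, and the signal parameters and detection method are illustrative choices of mine, not the circuitry of the actual device:

```python
import numpy as np

fs = 400_000                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of signal

f_audio, f_carrier = 1_000, 35_000   # 1 kHz tone on a 35 kHz carrier
audio = np.sin(2 * np.pi * f_audio * t)

# Amplitude modulation: the carrier's amplitude is scaled by (1 + m * audio)
m = 0.5                            # modulation index
am = (1 + m * audio) * np.sin(2 * np.pi * f_carrier * t)

# Envelope detection: rectify, then a crude moving-average low-pass
rectified = np.abs(am)
win = int(fs / f_carrier)          # average over roughly one carrier cycle
envelope = np.convolve(rectified, np.ones(win) / win, mode="same")

# The recovered envelope tracks the original audio up to a constant offset;
# their correlation should be close to 1.
rec = envelope - envelope.mean()
ref = audio - audio.mean()
corr = np.dot(rec, ref) / (np.linalg.norm(rec) * np.linalg.norm(ref))
print(f"correlation with original audio: {corr:.3f}")
```

Any nonlinearity that rectifies the carrier—whatever its seat in the tissue—would suffice as the "detector" in this scheme, which is why the finding was so provocative.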

For the first time in fifty years, there was evidence that an electrode carrying radio waves to the skin might be doing something more than just causing the skin to vibrate. These researchers speculated, based on measurements of cochlear microphonics (electrical signals generated by the inner ear), that transdermal stimulation produced a sound by a combination of acoustic and electrical effects—by both vibrating the skin and directly stimulating the hair cells in the inner ear. “However,” they wrote, “these two effects do not give a satisfactory explanation of word recognition response in those patients whose cochlea is non-functional.” 

The results of animal experiments were just as astonishing. Two dogs were rendered deaf—one through injections of streptomycin, which destroyed the cochlear hair cells, and one by surgical removal of the ear drums, middle ear bones, and cochleas. Both dogs had previously been conditioned to respond to transdermal stimulation by jumping over a divider in a box, and both had learned to respond correctly better than 90 percent of the time. Incredibly, both dogs continued to respond correctly 90 percent of the time to the high frequency stimulus when it was modulated with the audio signal, but only 1 percent of the time to the unmodulated high frequency signal alone. 

The implications of this research are profound. Since people and animals without any cochlear function at all, or even without any cochlea, can apparently hear this type of stimulation, either the brain is being stimulated directly—which is unlikely since the source of the sound always appears to the person to be coming from the direction of the electrode producing it—or there is another part of the inner ear besides the cochlea that responds to ultrasound, or to electromagnetic waves at ultrasonic frequencies. Since most hearing subjects were able to hear much higher frequencies than they could hear in the normal way, this is the most likely explanation. And we will see that there are good reasons to believe that most people who are bothered by electrical “tinnitus” are hearing electrically-delivered ultrasound. 

Puharich and Lawrence patented their device, and the Army acquired two prototype units for testing aboard Chinook helicopters and airboats used in Vietnam. The news editor for Electronic Design reported, after trying out one of the devices, that “the signals were almost, but not quite, like airborne sounds.”8 

In 1968, Garland Frederick Skinner repeated some of Puharich and Lawrence’s experiments at higher power, using a carrier frequency of 100 kHz, for his master’s thesis at the Naval Postgraduate School. He did not test his “Trans-Derma-Phone” on any deaf people, but like Puharich and Lawrence, he concluded that “be it the ear, the nerves, or the brain, an AM detection mechanism exists.” 

In 1970, Michael S. Hoshiko, under a post-doctoral fellowship from the National Institutes of Health, tested the device of Puharich and Lawrence at the Neuro-communications Laboratory at The Johns Hopkins University’s School of Medicine. Subjects not only heard pure tones from 30 Hz up to the remarkable frequency of 20,000 Hz equally well at low sound levels, but scored 94 percent in speech discrimination. The twenty-nine college students who were tested performed equally well whether the words were delivered through the air as ordinary sounds, or whether they were delivered electronically as modulations to a radio wave in the ultrasonic range. 

Two more efforts to make people hear modulated radio waves were made by members of the military, but probably because they did not use ultrasonic frequencies they were unable to identify any cause of hearing besides the vibrating skin. One of the reports, a master’s thesis submitted by Lieutenants William Harvey and James Hamilton to the Air Force Institute of Technology at Wright-Patterson Air Force Base, specified a carrier frequency of 3.5 MHz. The other project was undertaken by M. Salmansohn, Command and Control Division at the Naval Air Development Center in Johnsville, Pennsylvania. He also did not use an ultrasonic carrier; in fact, he later dispensed with the carrier wave altogether and used direct audio-frequency current. 

Finally, in 1971, Patrick Woodruff Johnson, for his master’s thesis at the Naval Postgraduate School, decided to revisit “ordinary” electrophonic hearing. He wanted to see how little electricity it took to make people hear a sound. Most previous researchers had exposed their subjects’ heads to up to one watt of power, resulting in large and potentially dangerous levels of AC current. Johnson found that by using a silver disc plated with silver chloride as one of the electrodes, and simultaneously applying a positive direct current, an alternating current of as little as 2 microamperes (millionths of an ampere), delivered with only 2 microwatts (millionths of a watt) of power, could be heard. Johnson proposed that “an extremely small low cost hearing aid” could be developed using this system. 

In June 1971, at M.I.T., Edwin Charles Moxon reviewed the entire field for his Ph.D. dissertation and added the results of his own experiments on cats. By recording the activity of the cats’ auditory nerves while their cochleas were electrically stimulated, he proved definitively that two distinct phenomena were occurring at the same time. The electrical signal was somehow being converted into ordinary sound, which was being processed by the cochlea in the normal way. And in addition, the current itself was stimulating the auditory nerve directly, producing a second, abnormal component of the discharge pattern of the nerve. 

At this point efforts at understanding how electricity affects the normal ear ceased, as practically all funding was diverted to the development of cochlear implants for the deaf. This was a natural outcome of the development of computers, which were beginning to transform our world. The brain was being modeled as a fantastically elaborate digital computer. Hearing researchers thought that if they separated sounds into their different frequency components, they could feed those components in digital pulses to the appropriate fibers of the auditory nerve for direct processing by the brain. And, considering they are stimulating thirty thousand nerve fibers with only eight to twenty electrodes, they have been remarkably successful. By 2017, the number of cochlear implants worldwide exceeded five hundred thousand. But the results are robotic and do not duplicate normal sound. Most patients can learn to understand carefully articulated speech well enough to use the telephone in a quiet room. But they cannot distinguish voices, recognize music, or converse in average noisy environments. 

Meanwhile, progress in understanding electrophonic hearing came to a complete halt. Some research into microwave hearing continued for another decade or so, and then ceased as well. The peak power levels that appear to be required for microwave hearing make it unlikely to be the source of sounds that bother most people today. The phenomenon discovered by Puharich and Lawrence is a much more likely candidate. To understand why requires an excursion into the anatomy of one of the most complex and least understood parts of the body. 

The Electro-model of the Ear 

In the normal ear, the ear drum receives sound and passes the vibrations on to three tiny bones in the middle ear. They are the malleus, incus, and stapes (hammer, anvil, and stirrup), named after the implements they resemble. The stapes, the last bone in the chain, although only half the size of a grain of rice, funnels the world of vibrational sound to the bony cochlea, a snail-shaped structure which itself is a marvel of miniaturization. No bigger than a hazelnut, the cochlea is able to take the roar of a lion, the song of a nightingale, and the squeak of a mouse, and reproduce them all with perfect fidelity in the form of electrical signals sent to the brain. To this day no one knows exactly how this is accomplished. And what little is known is probably wrong. 

“It is unfortunate,” wrote Augustus Pohlman, an anatomy professor and dean of the school of medicine at the University of South Dakota, “that no machinery is available for deleting from the literature those interpretations which have proven to be incorrect.” Pohlman stood, in 1933, looking back on seventy years of research that had failed to eradicate what he regarded as a fundamentally flawed assumption about the operation of the liquid-filled cochlea. Another eighty years have still failed to eradicate it. 

The tiny cochlear spiral is divided along its length into an upper and lower chamber by a partition called the basilar membrane. Upon this membrane sits the organ of Corti, containing thousands of hair cells with their attached nerve fibers. In 1863, the great German physicist Hermann Helmholtz proposed that the cochlea was a sort of underwater piano, and suggested that the ear’s resonant “strings” were the different-length fibers of the basilar membrane. The membrane increases in width as it winds round the cochlea. The longest fibers at the apex, he suggested, like the long bass strings of a piano, resonate with the deepest tones, while the shortest fibers at the base are set into vibration by the highest tones. 

Helmholtz assumed that the transmission of sound was a simple matter of mechanics and levers, and subsequent research, for one and a half centuries, has simply built upon his original theory with remarkably little change. According to this model, the motion of the stapes, like a tiny piston, pumps the fluid in the two compartments of the cochlea back and forth, causing the membrane separating them to flex up and down, thereby stimulating the hair cells on top of it to send nerve impulses to the brain. 

Only those parts of the membrane that are tuned to the incoming sounds flex, and only those hair cells sitting on those parts send signals to the brain. But this model does not explain the hearing of electricity. It also fails to explain some of the most obvious features of the inner ear. Why, for example, is the cochlea shaped like a snail shell? Why are the thousands of hair cells lined up in four perfectly spaced rows, one behind the other like the keyboards of a pipe organ? Why is the cochlea encased in the hardest bone of the human body, the otic capsule? Why is the cochlea the precise size that it is, fully formed in the womb at six months of gestation, never to grow any larger? Why is the cochlea only marginally bigger in a whale than in a mouse? How is it possible to fit a full set of resonators, vibrating over a greater musical range than the largest pipe organ, into a space no bigger than the tip of your little finger? 

Pohlman thought that the standard model of the ear was contradicted by modern physics, and a number of courageous scientists after him have agreed. By including electricity in their model of hearing, they have made progress in explaining the basic features of the ear. But they are up against a cultural barrier, which still does not permit electricity to play a fundamental role in biology. 

The ear is much too sensitive to work by a system of mechanics and levers, and Pohlman was the first to point out this obvious fact. The real resonators in the ear—the “piano strings”—had to be the thousands of hair cells, lined up in rows and graded in size from bottom to top of the cochlea, and not the fibers of the membrane they were sitting on. And the hair cells had to be pressure sensors, not motion detectors. The extreme sensitivity of the ear made that evident. This also explained why the cochlea is embedded in the densest bone in the human body. It is a soundproof chamber, and the function of the ear is to transmit sound, not motion, to the delicate hair cells. 

The next scientist to add pieces to the puzzle was an English physician and biochemist, Lionel Naftalin, who passed away in March 2011 at the age of 96 after working on the problem for half a century. He began by making precise calculations that proved conclusively that the ear is much too sensitive to work in the accepted fashion. It is a known fact that the quietest sound that a person can hear has an energy of less than 10⁻¹⁶ watts (one ten-thousandth of one trillionth of a watt) per square centimeter, which, calculated Naftalin, produces a pressure on the eardrum that is only slightly greater than the pressure exerted by randomly moving air molecules. Naftalin stated flatly that the accepted theory of hearing was impossible. Such tiny energies could not move the basilar membrane. They could not even move the bones of the middle ear by the assumed lever mechanism. 
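Naftalin's starting figure can be checked against the standard plane-wave relation between acoustic intensity and pressure, I = p²/(ρc). A minimal sketch in Python; the air density and sound speed are textbook constants supplied by me, not figures from the book:

```python
import math

# Threshold-of-hearing intensity quoted in the text: 1e-16 W per cm^2
I = 1e-16 * 1e4          # converted to W/m^2

rho = 1.2                # density of air, kg/m^3 (standard value)
c = 343.0                # speed of sound in air, m/s (standard value)

# Plane-wave relation: I = p_rms^2 / (rho * c)
p_rms = math.sqrt(rho * c * I)
print(f"{p_rms * 1e6:.0f} micropascals")   # about 20 uPa
```

The result, roughly 20 micropascals, is the conventional threshold of hearing—a pressure of the same order as the fluctuations produced by the random thermal motion of air molecules, which is exactly the comparison Naftalin drew.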

The absurdity of the standard theory was obvious. At the threshold of hearing the eardrum is said to vibrate through a distance (0.1 ångstrom) that is only one-tenth the diameter of a hydrogen atom. And the motion of the basilar membrane is calculated to be as small as ten trillionths of a centimeter—only slightly larger than the diameter of an atomic nucleus, and much smaller than the random motions of the molecules that make up the membrane. This “movement” of subatomic dimensions supposedly causes the hairs on the hair cells to “bend,” triggering an electric depolarization of the hair cells and the firing of the attached nerve fibers. 

Recently some scientists, realizing the foolishness of such a notion, have introduced various ad hoc assumptions that increase the distance the basilar membrane must move from subatomic to only atomic dimensions—which still doesn’t overcome the fundamental problem. Naftalin pointed out that the contents of the cochlea are not solid metal objects but liquids, gels, and flexible membranes, and that such infinitesimal distances could have no basis in physical reality. He then calculated that to move a resonant portion of the basilar membrane only one ångstrom—about the distance now claimed necessary to trigger a response from the hair cells9—would require over ten thousand times more energy than is contained in a threshold sound wave that hits the ear drum. 

During his fifty years of work on hearing, Naftalin thoroughly demolished the prevailing mechanical theory and created a model in which electrical forces are central. Instead of focusing on the basilar membrane, on which the hair cells sit, he turned his attention to a much more unusual membrane—the one that covers the tops of the hair cells. It has a jelly-like consistency and composition that occurs nowhere else in the human body. It also has unusual electrical properties, and a large voltage is always present across it. Elsewhere in the body, voltages of this magnitude— about 100 to 120 millivolts—are usually found only across cell membranes. 

In 1965, Naftalin, thinking in terms of solid state physics, postulated that this membrane—called the tectorial membrane—is a semiconductor that is also piezoelectric. Piezoelectric substances, we recall, are those that convert mechanical pressure into electrical voltages, and vice versa. Quartz crystals are the most familiar example. Often used in radio receivers, they convert electrical vibrations into sound vibrations. Judging by its structure and chemical composition, Naftalin suggested that the tectorial membrane ought to have this property. He proposed that it is a piezoelectric liquid crystal that converts sound waves into electrical signals, which it communicates to the hair cell resonators embedded in it. He suggested that the large voltage across the membrane causes great amplification of these signals. 

Naftalin then built scale models of both the cochlea and the tectorial membrane, and began to find answers to some of the outstanding mysteries of the ear. He discovered that the snail-like shape of the cochlea is important to its function as a precision musical instrument. He also discovered that the makeup of the tectorial membrane has something to do with the instrument’s small size. While the speed of sound in air is 330 meters per second, and in water is 1500 meters per second, in ten percent gelatin it is only 5 meters per second, and in the tectorial membrane it is likely to be considerably less. By slowing the speed of sound, the jelly-like substance of the membrane contracts the wavelengths of sounds from meters to millimeters, allowing a millimeter-sized instrument like the cochlea to receive and play the world of sound we live in for our brain. 
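The wavelength-contraction argument in this paragraph follows from the basic relation wavelength = speed / frequency. A minimal sketch using the speeds quoted above (the 1 kHz test tone is my own illustrative choice, not a figure from Naftalin):

```python
def wavelength_m(speed_m_per_s: float, frequency_hz: float) -> float:
    """Wavelength is propagation speed divided by frequency."""
    return speed_m_per_s / frequency_hz

f = 1000.0  # a 1 kHz tone, roughly in the middle of the speech range

in_air = wavelength_m(330.0, f)     # 0.33 m, far too long for a millimeter-sized organ
in_water = wavelength_m(1500.0, f)  # 1.5 m
in_gelatin = wavelength_m(5.0, f)   # 0.005 m = 5 mm, the millimeter scale the text describes
```

Slowing the wave by a factor of 66 relative to air shrinks the wavelength by the same factor, which is the whole point of the jelly-like medium.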

George Offutt came to this problem as a marine biologist, and reached similar conclusions from an evolutionary perspective. His doctoral dissertation at the University of Rhode Island’s School of Oceanography dealt with codfish hearing. His theory of human hearing, first published in 1970, was later expanded into a book, The Electro-model of the Auditory System. I interviewed him in early 2013, shortly before his death. 

Like Naftalin, Offutt concluded that the tectorial membrane is a piezoelectric pressure sensor. And because of his background, he argued that human hair cells, by evolution and by function, are electroreceptors. 

The mammalian cochlea, after all, evolved from a fish organ called the lagena, which has hair cells not too unlike ours, covered by a gelatinous membrane, also similar to ours. But the fish’s membrane is in turn topped by structures called otoliths (“ear stones”), which are crystals of calcium carbonate and are known to be about one hundred times more piezoelectric than quartz. Offutt said that this is not accidental. The hair cells in the ears of fish, he said, are sensitive to the voltages generated by the otoliths in response to sound pressure.10 This, he said, explains why sharks can hear. Fish, being composed largely of water, are supposed to be transparent to water-borne sounds unless they have a swim bladder containing air. Therefore sharks, which have no swim bladder, ought to be deaf according to standard theories, but they aren’t. In 1974, Offutt elegantly solved this contradiction by introducing electricity into his model for how fish hear. And by extension, he said, there is no reason why human hearing should not still work in the same basic fashion. If the cochlea evolved from the lagena, then the tectorial membrane evolved from the otolithic membrane and ought still to be piezoelectric. And the hair cells, which are substantially the same, should still function as electroreceptors. 

In fact, fish have other, related hair cells that are known to be electroreceptors. Lateral line organs, for example, arranged in lines along the sides of every fish’s body in order to sense water currents, actually respond not only to water currents but also to low frequency sounds and to electric currents.11 These organs’ hair cells, too, are covered by a jelly-like substance, called the cupula, and they, too, are supplied by a branch of the acoustic nerve. Indeed, the lateral line and the inner ear are so closely related functionally, evolutionarily, and embryonically that all such organs in all types of animals are referred to as the acoustico-lateralis system. 

Some fish have other organs, which evolved from this system, that are exquisitely and primarily sensitive to electrical currents. With these organs, sharks can detect the electric fields of other fish or animals, and can locate them in darkness, in murky water, or even when hidden in the sand or mud at the bottom. The hair cells of these electric organs lie beneath the surface of the body in sacs called ampullae of Lorenzini and are covered, again, with a gelatinous substance. 

All such fish organs, no matter their specialization, have proven to be sensitive to both pressure and electricity. Lateral line organs that primarily sense water currents also react to electrical stimuli, and ampullae of Lorenzini that primarily sense electric currents also react to mechanical pressure. Therefore marine biologists were once of the opinion that piezoelectricity was at play in both the lateral line and the ear.12 Hans Lissman, once the world’s foremost authority on electric fishes, thought that this was so. Later, anatomist Muriel Ross, who had a grant from NASA to study the effects of weightlessness on the ear, emphasized that the otoliths of fish, and the related otoconia (“ear dust”) of our own ears’ gravity sensors, are known to be piezoelectric. Mechanical and electrical energy, she said, are interchangeable, and feedback between hair cells and piezoelectric membranes will transform one form of energy into the other. 

In a related study in 1970, Dennis O’Leary exposed the gelatinous cupulas of frogs’ semicircular canals—the organs of balance in the inner ear—to infrared radiation. The response of the canals’ hair cells was consistent with the electrical and not the mechanical model of such organs.

Recently the outer hair cells of the cochlea have themselves proven to be piezoelectric. They acquire a voltage in response to pressure, and they lengthen or shorten in response to an electric current. Their sensitivity is extreme: one picoamp (one trillionth of an amp) of current is enough to cause a measurable change in a hair cell’s length.13 Electric currents, traveling in complex paths, have also been found traversing the tectorial membrane and coursing through the organ of Corti.14 And pulsating waves have been discovered, in the thin space between the tops of the hair cells and the bottom of the tectorial membrane, that reverberate between the outer hair cells, the tectorial membrane, and the inner hair cells.15 Australian biologist Andrew Bell has calculated that in the human cochlea these fluid waves should have wavelengths roughly between 15 and 150 microns (millionths of a meter)—just the right size to put hair cells 20 to 80 microns in length into musical resonance. Bell has compared these waves to surface acoustic waves, and the organ of Corti to a surface acoustic wave resonator, a common electronic device that has replaced quartz crystals in a wide variety of industries. 

In the electro-model of hearing that these scientists have constructed, there are several places where electricity can act directly on the ear. The inner hair cells are electroreceptors. The outer hair cells are piezoelectric. The tectorial membrane is piezoelectric. And since both direct and alternating current can act on any of these structures, many of the early reports of the hearing of electricity, said Offutt, including reports that were dismissed as being due to “skin vibration,” should be reevaluated. 

The exquisite sensitivity of the organ of Corti to electricity explains the nineteenth century reports of the hearing of direct current and the twentieth century reports of the hearing of alternating current. And it forms a basis for understanding the torment suffered half a century ago by Clarence Wieske’s client in Santa Barbara, and the suffering of so many millions today. But a piece of the auditory puzzle is still missing. 

Direct or alternating current applied to the ear canal requires about one milliampere (one thousandth of an ampere) to stimulate hearing.16 If an electrode is placed directly in the cochlear fluid, about one microampere (millionth of an ampere) suffices.17 If current is applied directly to a hair cell, one picoampere (trillionth of an ampere) is all that is necessary to cause a mechanical reaction.18 Clearly, sticking electrodes in your outer ear is an inefficient way to stimulate the hair cells. Very little of the applied current ever reaches those cells. But in today’s world, electrical energy is reaching the hair cells directly in the form of radio waves, to which bones and membranes are transparent. The hair cells are also bathed in electric and magnetic fields originating in the electric power grid and all the electronic appliances that are plugged into it. All those fields and radio waves penetrate the inner ear and induce electric currents to flow inside the cochlea itself. The question then becomes, why do we not all hear a constant cacophony of noise drowning out all conversation and music? Why is most electrical noise confined to either very low or very high frequencies? The answer very likely has to do with a part of the ear that is not ordinarily associated with hearing at all. 
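The three thresholds quoted here differ by factors of a thousand at each step, which is what makes external electrodes so inefficient. A quick sketch of the ratios, using the round figures from the text:

```python
ear_canal_a = 1e-3       # ~1 milliampere needed at the ear canal
cochlear_fluid_a = 1e-6  # ~1 microampere with an electrode in the cochlear fluid
hair_cell_a = 1e-12      # ~1 picoampere applied directly to a hair cell

# Each step closer to the hair cell lowers the required current enormously:
canal_vs_fluid = ear_canal_a / cochlear_fluid_a  # ~1,000
fluid_vs_cell = cochlear_fluid_a / hair_cell_a   # ~1,000,000
canal_vs_cell = ear_canal_a / hair_cell_a        # ~1,000,000,000
```

Radio waves and induced currents bypass those losses by acting inside the cochlea directly.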

Hearing Ultrasound 

Human ultrasonic hearing has been rediscovered more than a dozen times since the 1940s, most recently by Professor Martin Lenhardt at Virginia Commonwealth University. “So outlandish is the concept that humans can have the hearing range of specialized mammals, such as bats and toothed whales,” he writes, “that ultrasonic hearing has generally been relegated to the realm of parlor tricks rather than being considered the subject of scientific inquiry.”19 At the present time, apparently, ultrasonic hearing is being intensively investigated only by Lenhardt and by a small group of researchers in Japan. 

Yet it is a fact that most human beings—even many profoundly deaf human beings—can hear ultrasound by bone conduction, and that this ability encompasses the entire hearing range of bats and whales. It extends well beyond 100 kHz. Dr. Roger Maass reported to British Intelligence in 1945 that young people can hear up to 150 kHz,20 and one group in Russia reported in 1976 that the upper limit for ultrasonic hearing is 225 kHz.21

Bruce Deatherage, while doing shipboard research for the Department of Defense in the summer of 1952, rediscovered the ability to hear ultrasound by accident when he swam into a sonar beam broadcasting at 50 kHz. Repeating the experiment with volunteers, he reported that each subject heard a very high-pitched sound that was the same as the highest pitch that person could ordinarily hear. Recently scientists at the Naval Submarine Base in New London, Connecticut verified the hearing of underwater ultrasound up to a frequency of 200 kHz.22 

What is known today is this: 

Virtually everyone with normal hearing can hear ultrasound. Elderly people who have lost their high frequency hearing can still hear ultrasound. Many profoundly deaf people with little or no functioning cochlea can hear ultrasound. The perceived pitch varies from person to person but is usually between 8 and 17 kHz. Pitch discrimination does occur, but requires a greater change in frequency in the ultrasonic range than in the normal auditory range. And, most surprisingly, when speech is transposed into the ultrasonic range and spread out over that range, it can be heard and understood. Somehow the brain recondenses the signal, and instead of high pitched “tinnitus,” the person hears the speech as though it were normal sound. Speech can also be modulated onto an ultrasonic carrier, and it is demodulated by the brain and heard as normal sound. Lenhardt, who has built and patented bone conduction ultrasonic hearing aids based on these principles, reports that word comprehension is around 80 percent in normal hearing individuals, even in a noisy environment, and 50 percent in the profoundly deaf. 

Since even many deaf people can hear ultrasound, several investigators over the years, including Lenhardt and the Japanese group, have suggested that the ultrasound receptor lies not in the cochlea but in an older part of the ear, one which functions as a primary hearing organ in fish, amphibians and reptiles: the saccule. It still exists in humans, containing hair cells capped by a gelatinous membrane covered with piezoelectric calcium carbonate crystals. 

Although it is adjacent to the cochlea, and although its nerve fibers connect to both the vestibular and auditory cortex of the brain, the human saccule has usually been thought to be exclusively a vestibular, or balance, organ, and to play no part in hearing. This dogma, however, has come under challenge periodically for the past eighty years. In 1932, Canadian physician John Tait presented a provocative paper, titled “Is all hearing cochlear?” at the 65th Annual Meeting of the American Otological Society in Atlantic City. He said that he and other investigators had failed to find any connection between the saccule and posture in fish, frogs, or rabbits, and proposed that even in humans the saccule is part of the hearing apparatus. Its construction, he said, indicates that the saccule is designed to detect vibrations of the head, including the vibrations that occur in speaking. The saccule in air-breathing animals, he proposed, “is a proprioceptor involved in the emission and regulation of voice. This would mean that we hear our own voice with the help of two kinds of receptors, while we hear the voice of our neighbors with only one.” In other words, Tait suggested that the cochlea is an innovation that allows air-breathing animals to hear airborne sounds, while the saccule retains its ancient function as a sensitive receptor for bone-conducted sounds. 

Since that time, saccular hearing has been proven to exist in a variety of mammals and birds, including guinea pigs, pigeons, cats, and squirrel monkeys. Elephants may use their saccule to hear low frequency vibrations received through the earth by bone conduction. Even in human beings, audiologists have developed a hearing test involving the electrical response of neck muscles to sound—called “vestibular evoked myogenic potentials” (VEMP)—to evaluate the functioning of the saccule. This test is often normal in people with profound sensorineural hearing loss. 

Lenhardt believes that ultrasonic hearing may be both cochlear and saccular in normally hearing people, while it is strictly saccular in the deaf. 

Many pieces of evidence indicate that what is tormenting people around the world today is electromagnetic energy in the ultrasonic range—from about 20 kHz to about 225 kHz—which is converted to sound in the cochlea and/or the saccule: 

1. Most frequently people are complaining of “loud tinnitus” at the highest pitch they can hear. 

2. Although airborne ultrasound is not audible, Puharich and Lawrence showed that electromagnetic energy at ultrasonic frequencies is audible, to both hearing and deaf people. 

3. The otoconia (calcium carbonate crystals) in the saccule, and the outer hair cells in the cochlea, are known to be piezoelectric, i.e. they will convert electric currents to sound. 

4. Electric and magnetic fields induce electric currents in the body whose strength is proportional to frequency. The higher the frequency, the greater the induced current. These principles of physics mean that the same field strength will produce 1,000 times more current at 50,000 Hz, in the ultrasonic range, than at 50 Hz, in the audible range. 

5. The measured threshold for hearing in the ultrasonic range is as low, or lower, than the threshold at 50 or 60 Hz. An exact comparison cannot be made because ultrasound is only audible by bone conduction, and extremely low frequencies are better heard by air conduction. But superimposing typical air, bone, and ultrasonic hearing threshold curves gives an overall hearing curve that looks something like this:23 [chart not reproduced] 

The inner ear looks to be about 5 to 10 times more sensitive to sound at 50 kHz than at 50 Hz. Therefore the ear may be 5,000 to 10,000 times more sensitive to electric and magnetic fields at ultrasonic frequencies than at power-line frequencies. The ear’s much greater sensitivity to sound in the middle of the hearing range is largely due to the resonant properties of the outer, middle, and inner ear, which amplify sounds before they are transformed into electrical impulses.24 Because that amplification happens before the sound becomes electrical, the ear is much more sensitive to electric currents at ultrasonic frequencies than to currents in either the middle or low parts of its range. The ear’s insensitivity to electromagnetic fields at 50 or 60 Hz explains why, thankfully, we do not hear a 60-cycle buzz from the power grid at all times. 
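Points 4 and 5 combine by simple multiplication to give the 5,000-to-10,000-fold figure. A sketch of the estimate, using only the numbers given in the text:

```python
# Induced current scales linearly with frequency for a fixed field strength:
freq_ratio = 50_000 / 50        # 50 kHz vs 50 Hz -> 1,000x more induced current

# The ear itself is roughly 5 to 10 times more sensitive to sound
# at 50 kHz than at 50 Hz (from the superimposed threshold curves):
acoustic_low, acoustic_high = 5, 10

low_estimate = acoustic_low * freq_ratio    # 5,000
high_estimate = acoustic_high * freq_ratio  # 10,000
# => the ear may be 5,000 to 10,000 times more sensitive to electric and
#    magnetic fields at ultrasonic frequencies than at power-line frequencies
```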

By consulting charts published by the World Health Organization,25 it is possible to estimate the approximate minimum frequency at which we might expect to begin hearing an electromagnetic field. Since 1 picoampere of current is enough to stimulate one hair cell, and 50 picoamperes to trigger 50 hair cells—about the number required to stimulate hearing—one can look this amount of current up on the WHO’s charts. It turns out to be the amount of current per square centimeter that is induced in the ear at 20 kHz by either a magnetic field of about one microgauss or an electric field of about ten millivolts per meter. These are about the magnitudes of some of the ultrasonic electric and magnetic fields that pollute our modern environment.26 And one square centimeter is just about the area of the base of the human cochlea. 
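The arithmetic behind this estimate is just the per-cell threshold multiplied by the number of cells; a sketch, using the text's figures:

```python
per_hair_cell_a = 1e-12   # 1 picoampere stimulates a single hair cell
cells_for_hearing = 50    # roughly the number needed to stimulate hearing

threshold_a = cells_for_hearing * per_hair_cell_a  # 5e-11 A = 50 picoamperes

# Per the WHO charts cited in the text, roughly this much current per square
# centimeter (about the area of the base of the cochlea) is induced at 20 kHz
# by a magnetic field of ~1 microgauss or an electric field of ~10 mV/m.
```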

In other words, given the dimensions of the cochlea, we can expect to hear electromagnetic fields in today’s environment that are roughly above 20 kHz and below 225 kHz, which is the upper limit of our ultrasonic hearing range. 

If the saccule is more sensitive to ultrasound than the cochlea, these estimates could be too conservative. As I was reminded some years ago by Canadian acoustic physicist Marek Roland-Mieszkowski, the ear is sensitive to sound energies of less than 10⁻¹⁶ watts per square centimeter. Assuming, as he did, only a one percent efficiency in converting electrical energy into sound energy, the ear could be sensitive to magnetic fields of a hundredth of a microgauss, or to electric fields of 100 microvolts per meter. The ability of some people to hear the northern lights—said to resemble the sound of rustling silk27—indicates a potential sensitivity of about that level.28 

SOURCES OF ELECTRICAL SOUND 

Electronic consumer devices 

On April 2, 2000, Dave Stetzer, a former electronics technician for the Air Force, testified about “nonlinear loads” before the Michigan Public Service Commission. By this, he explained, he meant “computers, fax machines, copiers, and many other electronic devices, as well as various utility equipment including capacitors, solid state monitoring and switching devices, and transformers.” All these devices—in other words, virtually all modern electronic equipment—were putting tremendous amounts of high frequencies onto the power grid, and the grid, which was designed to transmit only 60 Hz, could no longer contain what was on it. The electrons in the wires, he explained, once they pass through a computerized device, vibrate not only at 60 Hz, but at frequencies extending throughout the ultrasonic range and well into the radio frequency spectrum. Since as much as seventy percent of all electric power flowing on the wires at any given time had passed through one or more computerized devices, the entire grid was being massively polluted. 

Stetzer first described some of the technical problems this was causing. The high frequencies increased the temperature of the wires, shortened their life span, degraded their performance, and forced substantial amounts of electric current to return to the power plant through the earth instead of through a return wire. And the high frequencies and “transients” (spikes of high current) emanating from everyone’s electronic equipment were causing interference and damage to everyone else’s electronic equipment. This was becoming expensive for homeowners, businesses, and utility companies. 

Even worse, all of the high frequency currents that were coursing through the earth, and the high frequency electromagnetic fields vibrating through the air, were making millions of people sick. Society was, and is, in denial about that, and that was not of great interest to the Michigan Public Service Commission. However, these fields and earth currents were also making dairy cows sick, all over Michigan, which was a threat to the state’s economy. So the commissioners listened attentively while Stetzer spoke. 

“In my visits to the various farms,” he said, “I have observed over 6,000 dairy cows and some horses. I have observed damaged cows with swollen joints, open sores, and other maladies, as well as aborted and deformed calves. I have even observed aborted twin calves, one of which was fully developed while its twin was grossly deformed. Ironically, the grossly deformed twin was the one directly in the current flow pathway between the cow’s back legs.”

“In addition,” Stetzer told the stunned commissioners, “I have also observed stressed cows, cows reluctant to enter certain spaces, including barns and milking parlors, and even cows reluctant to drink water, such that they lap at the water instead of sucking it up as they normally do. I have seen numerous cows fall over dead for no apparent reason. I have observed cows whose entire sides and muscles spasm uncontrollably. The articles from the Wisconsin La Crosse Tribune accurately highlight and describe a few of the conditions that I have personally observed on farms in Wisconsin, Minnesota, and Michigan. These symptoms and impacts are not limited to Wisconsin; they appear everywhere I have found dirty power.” 

My first health-related experience with modern electronics occurred back in the mid-1960s, when my family junked its old vacuum tube television set and acquired a transistorized model. As soon as it was plugged in, I heard an awful high-pitched sound—even though I was at the other end of the house with walls and doors in between—that apparently no one else could hear. Such was my introduction to the electronics age. I took care of myself by not watching television, which is one of the reasons why, from the day I moved out on my own to the present, I have never owned one. 

Auditory unpleasantness of that sort was not a widespread problem—at least not for me—until the 1990s. As long as I avoided televisions and computers, the world, in the places I chose to live, contained mostly natural sounds, and complete silence was easy to find. 

But at some point in the 1990s—the change was so gradual that I can’t pinpoint when—I realized that I could not find silence any more. It happened after 1992, when I rented a cabin in northern Ontario—which was still silent—and before 1996, when I fled the new crop of digital cell towers in my native New York to save my life. Since at least 1996, I have found no escape, anywhere in North America, from the awful high-pitched sound that I first heard when I was about fifteen. In 1997, I sought silence in an underground cave in Clarksville, New York—and did not find it. The sound was greatly diminished underground but did not vanish. In 1998, I sought silence in Green Bank, West Virginia, the only place on earth that is legally protected from radio waves—and did not find it. The sound did not even diminish. I can make it louder by plugging in electronic devices, and softer again by unplugging them, but I cannot make it go away, not even by turning the power off in my house. I can hear appliances being turned on in a neighbor’s house. Without warning or explanation the sound sometimes becomes suddenly much louder all over my neighborhood. It becomes quieter when there is a power outage. But it never disappears. It matches 17,000 Hz, which is the highest pitch that I can hear.

Low frequency sounds 

The low frequency Hum is heard by between two and eleven percent of the population.29 This is fewer than hear the high frequency sound, but the effects of the Hum can be far more disturbing. At its best it sounds like a diesel engine idling somewhere in the distance. At its worst it vibrates one’s whole body, causes intense dizziness, nausea and vomiting, prevents sleep, and is completely incapacitating. It has driven people to suicide. 

The probable sources of the Hum are powerful ultrasonic radio broadcasts modulated at extremely low frequencies to communicate with submarines. To penetrate the oceans requires radio signals of immense power and long wavelengths, and the frequencies called VLF (very low frequency) and LF (low frequency)—corresponding to the ultrasonic range—fit the bill. The American military systems currently in use for this purpose include enormous antennas located in Maine, Washington, Hawaii, California, North Dakota, Puerto Rico, Iceland, Australia, Japan, and Italy, in addition to sixteen mobile antennas flown on aircraft whose locations at any given time are kept secret. Land stations of this type are also operated by Russia, China, India, England, France, Sweden, Japan, Turkey, Greece, Brazil, and Chile, and by NATO in Norway, Italy, France, the United Kingdom, and Germany. 

Since the wavelengths are so long, every VLF antenna is tremendous. The antenna array at Cutler, Maine, which has been operating since 1961, is in the form of two giant six-pointed stars, covering a peninsula of nearly five square miles and supported by 26 towers up to 1,000 feet tall. It broadcasts with a maximum power of 1.8 million watts. The facility at Jim Creek, Washington, built in 1953, has a 1.2-million-watt transmitter. Its broadcast antenna is strung between two mountaintops. 

The low frequencies that are required in order to penetrate the oceans limit the speed at which messages can be transmitted. The American stations send a binary code at 50 pulses per second, which is consistent with the frequency of the Hum that most people hear today. The enhanced system recently adopted by the Navy uses multiple channels to transmit more data, but each channel still pulses at 50 Hz. In addition, the binary code itself is created by two ultrasonic frequencies spaced 50 Hz apart. These signals are therefore doubly modulated at approximately the frequency that is tormenting people worldwide. 
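The double modulation described here can be sketched as frequency-shift keying: bits are sent at 50 per second, using two tones spaced 50 Hz apart, so both the keying rate and the tone spacing impose a roughly 50 Hz structure on the signal. The carrier frequency below is my own illustrative choice, not a published Navy channel:

```python
baud = 50.0               # 50 binary pulses per second
tone_spacing = 50.0       # the two keying tones sit 50 Hz apart
f_mark = 24_000.0         # hypothetical VLF carrier for the "1" tone
f_space = f_mark + tone_spacing  # the "0" tone

bit_duration = 1.0 / baud  # 0.02 s per bit -- one 50 Hz modulation
beat = f_space - f_mark    # 50 Hz difference between the tones -- the second
```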

Geology Professor David Deming at the University of Oklahoma, who was driven to investigate the Hum that he hears, has focused his attention on the mobile TACAMO (“Take Charge and Move Out”) system. TACAMO planes, which trail long antennas behind them, have been flying out of Tinker Air Force Base in Oklahoma City since 1963, and the maximum power of each airborne transmitter is 200,000 watts. They use a variety of frequencies between 17.9 and 29.6 kHz, which are doubly modulated at 50 Hz like all other VLF stations that communicate with submarines. Navy TACAMO planes are always in the sky whenever there is a Hum in Oklahoma. The aircraft head out from Oklahoma to Travis Air Force Base in California and Naval Air Station Patuxent River in Maryland. From there the planes fly six to ten hour missions in predetermined orbits over the Atlantic and Pacific Oceans. 

One other ultrasonic, pulsed communication network deserves mention here—one which ceased broadcasting in North America in 2010, but which is still functioning in some parts of the world and may yet be fully resurrected here: the LORAN-C system. LORAN, which stands for LOng RAnge Navigation, is an old network of extremely powerful land-based navigation beacons whose function is now duplicated by Global Positioning Satellites. LORAN may have been responsible for the earliest reports of a Hum in England as well as the famous Hum in Taos, New Mexico that was the subject of a government investigation launched in 1992. 

LORAN-C operates at 100 kHz and is pulsed at multiples of 10 to 17 Hz, depending on location. Placed under the control of the Coast Guard, the first LORAN-C stations were built along the east coast in 1957—in Martha’s Vineyard, Massachusetts; Jupiter, Florida; and Cape Fear, North Carolina. In the late 1950s, chains of LORAN-C stations were also built around the Mediterranean Sea and the Norwegian Sea, and by 1961 others were on the air in the Bering Sea, and in the Pacific Ocean centered on Hawaii. Although it was not the first long range navigation system, its predecessor, LORAN-A, operated at frequencies between 1850 and 1950 kHz and was not in the ultrasonic range. 

My own encounters with the Hum date from 1983, when I first moved to the remote, otherwise quiet sanctuary in the redwoods that is Mendocino, California. Although Cornell University is quite near the 800,000-watt Seneca LORAN station that began operating in 1978, I had graduated from college there in 1971 and never heard a Hum. But in Mendocino, I was kept awake by it. Like so many other people, I first thought I was hearing a distant motor or generator—until I realized that this noise followed me even on camping trips deep into roadless areas of wild, far-northern California. Its pitch was about a low E-flat—roughly 80 Hz—and I discovered that I could bring the Hum into my head, even on days when it was otherwise not there, by playing my piano in the key of E-flat—as though there were an E-flat piano string inside my head vibrating in sympathetic resonance. 

When, some years later, a Coast Guard official told me there was a LORAN antenna over in Middletown, I wondered if there was a connection to the annoying and puzzling Hum. The official had casually mentioned that the signal was so powerful that the people who worked at the facility could hear it. So I got in my car one morning and made the three-hour drive. As I approached within a half mile of the 63-story tower, my ears began to hurt. And I began to hear not only my accustomed 80 Hz pulsating Hum, but also a purer tone one octave lower. I obtained a copy of the LORAN-C User Handbook from the Coast Guard, and learned that the repetition rate for LORAN-C transmissions on the west coast was almost exactly 10 Hz. Apparently I was hearing the fourth and eighth harmonics. Further consultation with the Handbook provided an explanation. The West Coast Chain consisted of four stations—the one in Middletown, one in George, Washington, and two in Nevada—that transmitted once every tenth of a second in a precisely timed sequence. 

Fallon … George … Middletown … Searchlight … Fallon … George … Middletown … Searchlight … It took exactly one twentieth of a second to transmit the sequence of signals from the four beacons—corresponding to a pulse rate of 80 Hz and reinforcing the eighth harmonic of the fundamental frequency. Taking the signals two at a time—Fallon-George and Middletown-Searchlight—gives a repetition rate of 40 Hz, reinforcing the fourth harmonic. The predominance of the Middletown signal, when one was close enough to the Middletown tower, apparently made the fourth harmonic audible. 
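The timing argument above can be laid out numerically: each station repeats every tenth of a second, the whole four-signal sequence fits into one twentieth of a second, so successive signals arrive 1/80 of a second apart. A sketch:

```python
repetition_interval = 0.1  # s: each station transmits once every tenth of a second
sequence_duration = 1 / 20 # s: all four signals are sent within one twentieth of a second
stations = 4

fundamental = 1 / repetition_interval          # 10 Hz chain repetition rate
signal_spacing = sequence_duration / stations  # 1/80 s between successive beacons
pulse_rate = 1 / signal_spacing                # 80 Hz -- eighth harmonic, the E-flat Hum
paired_rate = pulse_rate / 2                   # 40 Hz -- fourth harmonic, one octave lower
```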

By this time the Taos Hum was well known, and I wondered if it, too, was caused by LORAN. It had been investigated by a team of scientists from Los Alamos and Sandia National Laboratories, the Air Force’s Phillips Laboratory, and the University of New Mexico—who predictably didn’t find anything. But three items in their report stood out. First, 161 of the 1,440 residents of the Taos area who responded to their survey heard the Hum. Second, the team heard back not only from Taos-area residents, but from people throughout the northern hemisphere who had heard about the investigation and contacted the team to report being tormented by the same sound. Third, the frequencies that hearers said matched the Hum ranged from 32 Hz to 80 Hz, and several trained musicians identified it as a tone near 41 Hz. The South Central LORAN Chain had a repetition rate of 10.4 Hz, and the fourth harmonic was 41.6 Hz. The third harmonic was 31.2 Hz. Apparently many people were hearing the eighth harmonic as well. 
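The harmonic arithmetic for the South Central Chain is simple enough to check directly:

```python
rep_rate = 10.4          # Hz: South Central LORAN Chain repetition rate

third = 3 * rep_rate     # 31.2 Hz -- near the 32 Hz lower end hearers reported
fourth = 4 * rep_rate    # 41.6 Hz -- the tone trained musicians placed near 41 Hz
eighth = 8 * rep_rate    # 83.2 Hz -- near the 80 Hz upper end of reported matches
```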

The evidence that LORAN-C caused the Taos Hum is abundant. The South Central Chain was the only LORAN chain that had six transmitting beacons, and Taos was near the geographic center of them. The South Central Chain was built from 1989 to 1991 and fully commissioned in April 1991, precisely when residents of Taos began complaining. The combined electric field strength at Taos, from the six stations, was about 30 millivolts per meter, more than enough to trigger a hearing sensation.30 

Some of the other Hums around the world seem also to have been caused by LORAN-C. The LORAN-C chain in the Norwegian Sea, with stations in Norway, Jan Mayen Island, Iceland, and the Faeroe Islands, had provided coverage to England since 1959. The British Hum, which has been reported for about that long, suddenly decreased in loudness around 1994—the same year Iceland turned off the most powerful LORAN station in that chain. It increased in loudness again in 1996—at the same time that a new station at Værlandet in southern Norway was put into operation to again give better coverage to the British Isles. The new station also provided coverage for the first time to the area around Vänern Lake, Sweden—where the Hum was first reported in 1996. 

I can also add another piece of my own experience. I now live in Santa Fe, New Mexico—not too far from Taos—where I hear the Hum only infrequently. It is not audible to me most of the time, and it is no longer a low E-flat. It is now closer to an A or A-flat, which corresponds to the frequencies used by the Navy in communicating with submarines. 

At this writing, an Enhanced LORAN-C, or eLORAN, network is being built in several areas of the world to ensure the operation of a backup navigation and timing system in case the GPS satellites fail or their broadcasts are jammed. eLORAN relies on the same immensely powerful long-wave radio transmissions as before, but the addition of a data channel provides much greater position accuracy. To achieve position accuracies to within 10 meters, networks of receiving stations, called differential-LORAN, or DLoran, are also being built. They monitor the powerful eLORAN signals and broadcast correction factors over the data channel, or over a cell tower network, to local mariners. South Korea is currently operating three eLORAN stations and plans to achieve full nationwide coverage in 2020. Iran has built an eLORAN system, and India, Russia, China, and Saudi Arabia are upgrading their existing LORAN-C stations to eLORAN. France, Norway, Denmark, and Germany ceased their LORAN-C transmissions at the end of 2015 and have dismantled their towers. The situation in the United States is less certain. The 625-foot LORAN-C tower at Wildwood, New Jersey went back on the air temporarily in 2015 under the aegis of the Department of Homeland Security. And in December 2018, President Trump signed into law the National Timing Resilience and Security Act, which mandates the establishment of a terrestrial backup system for the Global Positioning Satellites that will be able to penetrate underground and inside buildings throughout the United States. It authorizes the acquisition of the mothballed LORAN facilities for this purpose. 

To see if the shutoff of most of the European LORAN-C stations had any effect on the Hum in that part of the world, I consulted a worldwide database of Hum reports kept by Glen MacPherson, an instructor at the University of British Columbia. On January 1, 2016, the day after the planned LORAN-C shutoff, reports came in from Scotland and Northern Ireland saying that the Hum had suddenly stopped between 2:00 a.m. and 3:00 a.m. that morning. 

OTHER SOURCES OF ULTRASONIC RADIATION 

Time broadcasts 

The National Institute of Standards and Technology broadcasts a time-of-day signal that synchronizes “atomic” clocks and watches throughout North America. Transmitting from Fort Collins, Colorado, the 60-kHz signal of station WWVB is even usable in parts of South America and Africa at night. Time stations using ultrasonic frequencies also broadcast from Anthorn, England; Mount Hagane and Mount Otakadoya, Japan; Mainflingen, Germany; and Lintong, China. 

Energy efficient light bulbs 

In a contagious fit of insanity, countries are falling like dominoes for the myth that fluorescent lighting is good for the environment. Cuba, in 2007, was the first to ban outright all sales of ordinary incandescent bulbs—bulbs that have shed soft light into our dark evenings for a hundred and thirty-five years. Australia banned imports of incandescents in November 2008, and sales a year later. The European Union completed a three-year phase-out on September 1, 2012, and China banned 100-watt bulbs one month later, with total prohibition scheduled for 2016. Brazilians can no longer buy bulbs of 60 watts or greater as of July 1, 2015. Canada and the United States, which had planned to ban 100-watt bulbs in 2012, temporarily relented in the face of strong public opposition. 

And the public are right. Fluorescents give off harsh light, and they contain mercury vapor, which gives off ultraviolet radiation when it is energized by high voltage. The inside of the glass is coated with a chemical that emits visible light when it is hit by the ultraviolet radiation. All fluorescents, without exception, work this way. All homes and businesses that use fluorescents long enough will eventually break one and be contaminated with mercury dust and vapor. And landfills throughout the world are being heavily polluted with mercury from the disposal of billions of broken and used-up fluorescent light bulbs. Not to mention the inconvenient fact that little, if any, energy is being saved if you live anywhere but the tropics. In summer, the heat given off by light bulbs is wasted and increases the demand for air conditioning. But in winter, we gain that cost back, because the heat from light bulbs then warms our homes. When we lose that extra source of heat, we have to make up the difference by burning more oil and gas. In the United States, we have probably neither gained nor lost, environmentally. But in Canada, for example, which gets virtually all its electricity from hydropower, banning incandescent bulbs has been an unqualified mistake. It has done nothing but increase the consumption of fossil fuels, putting more carbon dioxide into the atmosphere and worsening global warming. 

And that mistake is being compounded. All manufacturers of fluorescent bulbs, under pressure from government regulators, are making a bad situation worse by attaching a miniature radio transmitter to each and every light bulb under the theory that this makes them even more energy efficient. The radio waves energize the mercury vapor without having to subject it to a high voltage. All compact fluorescent bulbs, and a large percentage of long fluorescent bulbs today, are energized with these radio transmitters, which are called “electronic ballasts.” The frequencies used, between 20 and 60 kHz, are in the ultrasonic hearing range. The ubiquity of this type of lighting, and the growing difficulty of obtaining ordinary incandescents, even where they are still legal, mean that these bulbs are a predominant source of ultrasonic radiation in homes and businesses, and on power lines throughout the world. Virtually all electricity that flows on the power grid and in the earth is contaminated to some extent with 20 to 60 kHz, having passed through hundreds or thousands of these radio transmitters on its way to the next consumer, or back to the utility’s generating plant. And because the electronic ballasts put out so much electrical distortion, today’s fluorescent bulbs also emit measurable energy far into the microwave range. The FCC’s rules allow each and every energy-efficient bulb to emit microwave radiation, at frequencies up to 1,000 MHz, at a field strength of up to 20 microvolts per meter, as measured at a distance of 100 feet from the bulb. 

LED bulbs, which are being offered as another substitute for incandescents, are no better. They too give off harsh light, and they contain a variety of toxic metals and require special electronic components that convert the alternating current in our homes to low-voltage direct current. Most often, these components are switch mode power supplies which operate at ultrasonic frequencies and are discussed below in connection with computers. 

Sadly, the North American reprieve was only temporary. Canada officially banished most incandescent bulbs as of January 1, 2015, and the U.S. postponement expired at the same time. The last examples of Edison’s enduring invention vanished from the shelves of my local hardware stores a couple of months later. The gentle incandescent has disappeared from much of the world. Only specialty bulbs and halogen lamps are left, and many countries are prohibiting those also. Incandescents are still completely legal, however, in most of Africa, most of the Middle East, much of southeast Asia, and all the island nations of the Pacific.31 

Cell phones and cell towers 

Although cell phones and cell towers are best known as emitters of microwave radiation, that radiation is modulated at a bewildering array of much lower frequencies that the human body, as a radio receiver, perceives. For example, GSM (Global System for Mobile) is a telecommunications system long used by AT&T and T-Mobile in the United States, and by most companies in the rest of the world. The radiation from GSM cell phones and cell towers has components at 0.16, 4.25, 8.33, 217, 1,733, 33,850, and 270,833 Hz. In addition, the microwave carrier is divided into 124 subcarriers, each 200 kHz wide, all of which can broadcast simultaneously, in order to accommodate up to about a thousand cell phone users at once in any given area. This generates many harmonics of 200,000 Hz. 

Although GSM is a “2G” technology, it has not gone away. Layered over it are “3G” and “4G” networks that smart phones of more recent vintage use. The 3G system, called Universal Mobile Telecommunications System, or UMTS, is completely different, containing modulation components at 100, 1,500, 15,000, and 3,840,000 Hz. The 4G system, called Long-Term Evolution, or LTE, is modulated at yet another set of lower frequencies, including 100, 200, 1,000, 2,000, and 15,000 Hz. In 4G, the carrier frequency is divided into hundreds of 15-kHz wide subcarriers, adding yet another set of harmonics. And since smart phones and flip phones of different vintages presently coexist, every cell tower has to emit all of the different modulation frequencies, old and new. Otherwise older phones would not continue to work. AT&T towers, for example, are therefore presently emitting modulation frequencies of 0.16, 4.25, 8.33, 100, 200, 217, 1,000, 1,500, 1,733, 2,000, 15,000, 33,850, 270,833, and 3,840,000 Hz, plus harmonics of these frequencies and additional harmonics of 15,000 Hz and 200,000 Hz, not to mention the microwave carrier frequencies of 700 MHz, 850 MHz, 1700 MHz, 1900 MHz, and 2100 MHz. Like the proverbial boiled frog, we are all immersed in a giant pot of radiation, whose intensity is increasing, and whose effect, though unperceived, is nevertheless certain.32 
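These component frequencies are not arbitrary: each is the reciprocal of a repeating interval in the signal's timing structure. A quick check, using the GSM durations listed in note 32 and assuming nothing beyond f = 1/T:

```python
# Each repeating interval of duration T in the GSM signal structure
# contributes a modulation component at f = 1 / T.
# Durations (in seconds) are those listed in note 32.
gsm_periods = {
    "superframe":         6.12,
    "control multiframe": 0.2354,
    "traffic multiframe": 0.120,
    "frame":              0.004615,
}

for name, period_s in gsm_periods.items():
    print(f"{name}: {1 / period_s:.2f} Hz")
# The reciprocals come out near 0.16, 4.25, 8.33, and 217 Hz --
# the GSM component frequencies listed in the text.
```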

Cell phones spend a higher percentage of their energy on their low frequency components than do cell towers 33—which may explain the high prevalence of “tinnitus” among cell phone users with otherwise normal hearing. In 2003, at a time when cell phone use was not as universal as it is today, it was still possible to do epidemiological studies of users and non-users. A team of scientists led by Michael Kundi at the Medical University of Vienna, comparing people with and without tinnitus at an ear, nose, and throat clinic, found a greater prevalence of tinnitus—often in both ears—among cell phone users than among non-users, and a clear trend of more tinnitus with increasing intensity of cell phone use.34 The more minutes, the more tinnitus. 

Remote control devices 

Most remote control devices—the gadgets that open garages and car doors, and operate television sets—communicate using infrared radiation. But the infrared signals are pulsed between 30 and 60 thousand times per second, in the middle of the ultrasonic range. The most common frequency chosen by manufacturers is 36 kHz. 

The problem with computers 

In 1977, Apple gave the world a revolutionary new device. The personal computer, as it came to be known, was powered by a new type of gadget called a switch mode power supply. If you have a laptop, it’s the little transformer/charger that you plug into the wall. This gift from Apple was much lighter in weight, more efficient, and more versatile than previous methods of supplying low-voltage DC power to electrical equipment. It had only one glaring fault: instead of delivering only pure DC, it also polluted the electric power grid, the earth, the atmosphere, and even outer space with a broad range of frequencies. But its usefulness made it rapidly indispensable to the mushrooming electronics industry. Today computers, televisions, fax machines, cell phone chargers, and most other electronic equipment used in home and industry depend on it. 

Its method of operation makes it obvious why it causes such a huge amount of electrical pollution. Instead of regulating voltage in the traditional way with variable resistors, a switch mode power supply interrupts the current flow tens of thousands to hundreds of thousands of times per second. By chopping the current into slightly longer or shorter pieces, these little devices can regulate voltage very precisely. But they change 50- or 60-cycle current into something very different. The typical switch mode power supply operates at a frequency between 30 and 60 kHz. 
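The regulation principle described here is pulse-width modulation: the average output voltage equals the input voltage times the fraction of each cycle the switch is on. A minimal sketch—the 12-volt input and 50 kHz switching frequency are illustrative assumptions, not figures from any specific device:

```python
# Pulse-width modulation, the regulation principle of a switch mode power
# supply: the switch opens and closes tens of thousands of times per second,
# and the average DC output is set by the fraction of each cycle the switch
# is "on." The 12 V input and 50 kHz frequency are illustrative assumptions.
input_voltage = 12.0        # volts of rectified DC feeding the switch
switching_freq_hz = 50_000  # within the 30-60 kHz range given in the text
cycle_us = 1 / switching_freq_hz * 1e6  # 20 microseconds per cycle

for duty_cycle in (0.25, 0.50, 0.75):
    on_time_us = duty_cycle * cycle_us
    avg_output = input_voltage * duty_cycle
    print(f"on {on_time_us:4.1f} of {cycle_us:.1f} us -> average {avg_output:.1f} V")
```

Lengthening or shortening the "on" pieces by a fraction of a microsecond shifts the average output, which is why the regulation is so precise—and why the abrupt 50,000-per-second interruptions fill the line with ultrasonic frequencies.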

Computers, and all other electronic equipment that contains digital circuitry, also emit ultrasonic radiation from other components, as anyone can verify using an ordinary (non-digital) AM radio. Simply tune the radio to the beginning of the dial (about 530 kHz), bring it near a computer—or a cell phone, television, fax machine, or even a handheld calculator—and you will hear a variety of loud screaming noises coming from the radio. 

What you are hearing is called “radio frequency interference,” and much of that is harmonics of emissions that are in the ultrasonic range. A laptop computer produces such noise even when it is running off the battery. When it is plugged in, the switch mode power supply not only intensifies the noise, but communicates it to your house wiring. From your house wiring it travels onto the distribution line in your neighborhood and into everyone else’s homes, and down the ground wire attached to your electric meter into the earth. And the electric power grid, and the earth itself, contaminated with ultrasonic frequencies from billions of computers, becomes an antenna that radiates ultrasonic energy throughout the atmosphere and beyond. 

Dimmer switches 

Another device that chops up 50- or 60-cycle current is the ubiquitous dimmer switch. Here, too, the traditional variable resistor has been replaced with something else. The strategy is different from that in your computer’s transformer—the modern dimmer switch interrupts the current only twice in each cycle—but the result is similar: the sudden starting and stopping of the current produces dirty power. Instead of a smooth flow of 50- or 60-cycle electricity, you get a tumultuous mixture of higher harmonics that flows through the light bulb, pollutes house wiring, and irritates the nervous system. A large portion of these unwanted frequencies are in the ultrasonic range.

Power lines 

As early as the 1970s, Hiroshi Kikuchi, at Nihon University in Tokyo, reported that significant amounts of high frequency currents were occurring on the power grid due to transformers, motors, generators, and electronic equipment. And some of it was radiating into space. On the ground, radiation in a continuous spectrum from 50 Hz to as high as 100 MHz was being measured at distances as far as one kilometer from both low and high power lines. Frequencies up to about 10 kHz, originating from power lines, were being measured by satellites. 

In 1997, Maurizio Vignati and Livio Giuliani, at the National Institute for Occupational Health and Prevention in Rome, reported that they were detecting radio frequency emissions as far as 50 meters (165 feet) from power lines, at frequencies ranging from 112 to 370 kHz, that were amplitude modulated and seemed to be carrying data. These frequencies, they discovered, were deliberately put on the electric power grid by Italian utility companies. And the same technology is being used worldwide. It is called Power Line Communications. The technology is not new but its use has exploded. 

Electric companies have been sending radio signals over power lines since about 1922, using frequencies ranging from 15 to 500 kHz, for monitoring and control of their substations and distribution lines. The signals, as powerful as 1,000 watts or more, travel hundreds of miles. 

In 1978, small devices appeared in Radio Shack stores that transmitted at 120 kHz. Consumers could plug them in and use the wiring in their walls to carry signals that enabled them to control lamps and other appliances remotely from command consoles. Later the HomePlug Alliance developed devices that use home wiring to connect computers. HomePlug devices work at 2 to 86 MHz, but have modulation components at 24.4 kHz and 27.9 kHz, in the ultrasonic range. 

Smart Meters 

The use of the power grid to deliver Internet to homes and businesses—called Broadband over Power Lines—has not been commercially successful. But the use of the power grid to transmit data between homes, businesses and power plants is now being implemented for something called the Smart Grid, presently under construction all over the world. When the Smart Grid is fully implemented, electricity will be automatically sent where it is needed, when it is needed—even rerouted from one region to another to satisfy instantaneous demand. Utilities will continuously monitor every major appliance in every home and business, and will have the ability to automatically regulate thermostats and turn their customers’ air conditioners and washing machines on and off during times of greater or lesser demand for electricity. In order to accomplish this, radio transmitters are being installed on everyone’s electric meters and appliances, which communicate not only with each other, but with the utility company, either wirelessly, or via fiber optic cable, or by radio signals sent over the power lines. The FCC has allocated frequencies from 10 to 490 kHz for this latter purpose, but utility companies most often use frequencies below 90 kHz, in the ultrasonic range, for long-distance communication over the power lines. 

The wireless version of smart meters, especially the variety called a “mesh network,” has spread around the world like technological wildfire in the past few years, rapidly becoming the single most intrusive source of electronic noise in modern life. The meters in a mesh network communicate not only with the utility company but with each other, each meter chattering loudly to its neighbors as frequently as two hundred and forty thousand times a day. And the chattering is not silent. Shrill, high-pitched ringing and a variety of hissing and clicking noises are so consistently reported by utility customers following the installation of these smart meters that cause and effect can no longer be denied. The symbol transmission frequency of 50 kHz for many of these systems, and the sheer power of the signal, outclassing other sources of radiation in the modern home, are likely responsible—that, and the pulsatile nature of the signal, like a woodpecker beating incessantly at all hours of the day and night. 

Tinnitus today 

Tinnitus rates have been rising for at least the last thirty years, and dramatically so for the last twenty. 

From 1982 to 1996, the National Health Interview Survey conducted by the United States Public Health Service included questions about both hearing impairment and tinnitus. Although the prevalence of hearing loss declined during those years, the rate of tinnitus climbed by one-third.35 Later, the National Health and Nutrition Examination Surveys (NHANES), conducted by the Centers for Disease Control, found that the rate continued to climb. In 1982, about 17 percent of the adult population complained of tinnitus; in 1996, about 22 percent; between 1999 and 2004, about 25 percent. The authors of the NHANES study estimated that by 2004, 50 million adults suffered from tinnitus.36 

In 2011, Sergei Kochkin, the Executive Director of the Better Hearing Institute in Washington, D.C., reported the very surprising result of a nationwide survey, conducted in 2010. What was so surprising was that 44 percent of Americans who complained of ringing in their ears said they had normal hearing. Kochkin simply did not believe it. “It is widely acknowledged that people with tinnitus almost always have hearing loss,” he said. He therefore assumed that millions of Americans who complain of ringing in their ears must have hearing loss but don’t know it. But his assumption is no longer valid. 

Researchers who wish to study real tinnitus have to be careful. If you put the average human being in a soundproof room for several minutes, he or she will begin to hear sounds that are not there. Veterans Administration doctors Morris Heller and Moe Bergman demonstrated this in 1953, and a research team at the University of Milan repeated the experiment fifty years later with the same result: over 90 percent of their subjects heard sounds.37 Therefore the results of tinnitus surveys may depend on the way the data are gathered as well as the way the questions are worded and even on the definition of “tinnitus.” To really find out if tinnitus is increasing, we need virtually identical studies done a number of years apart by the same researchers in the same place on the same population. And we have just such a series of studies. 

During the years 1993 to 1995, 3,753 residents of Beaver Dam, Wisconsin, aged 48 to 92, were enrolled in a hearing study at the University of Wisconsin, Madison. Follow-up examinations were done on these subjects at five-, ten-, and fifteen-year intervals. In addition, the children of the original subjects were enrolled in a similar study between 2005 and 2008. As a result, data on the prevalence of tinnitus in this population are available almost continuously from 1993 to 2010. 

Since hearing disorders among older adults declined during this period, the researchers expected to see a corresponding decline in tinnitus. They found just the opposite: a steady increase in tinnitus in all age groups during the 1990s and 2000s. For example, the rate of tinnitus among people aged 55 to 59 increased from 7.6 percent (at the beginning of the study) to 11.0 percent, to 13.6 percent, to 17.5 percent (at the end of the study). Overall, the rate of tinnitus in this population increased by about 50 percent.38 

We also have a series of studies, conducted during these same years, on young children, who have long been assumed to have almost no tinnitus. 

Kajsa-Mia Holgers is a professor of audiology at the University of Jönköping in Sweden. She conducted her first study in 1997 on 964 seven-year-old school children in Göteborg who were undergoing routine audiometry testing—470 girls and 494 boys. Twelve percent of the children said they had experienced ringing in their ears, the vast majority of whom had perfect hearing. Nine years later Holgers, using the same study design and the same tinnitus questions, conducted an identical study on another large group of seven-year-old school children in Göteborg who were undergoing audiometry testing. This time an astonishing 42 percent of the children reported ringing in their ears. “We face a several fold increase in the problem in just a few years,” an alarmed Holgers told the national daily newspaper, Dagens Nyheter. 

To further explore the problem, Holgers gave a detailed questionnaire to middle and high school students aged 13 to 16 during the 2003-2004 school year. More than half of these older students reported tinnitus in some form. Some experienced only “noise-induced tinnitus” (tinnitus after being exposed to loud noise), but almost one-third of the students had “spontaneous tinnitus” with some frequency. 

And in 2004, Holgers studied another group of school children aged 9 to 16, almost half of whom had spontaneous tinnitus. Even more alarming was the fact that 23 percent reported their tinnitus to be annoying, that 14 percent heard it every day, and that hundreds of children were showing up at Holgers’ audiology clinic seeking help for their tinnitus. 

If what is occurring in Wisconsin and Sweden is also occurring in the rest of the world—and there is no reason to think otherwise—then in less than two decades, as computers, cell phones, fluorescent lights, and a crescendo of digital and wireless communication signals have penetrated every recess of our environment, at least a quarter of all adults and half of all children have entered a new world in which they must live, learn, and function while attempting to ignore an inescapable presence of intrusive electronic noise.

next

PART 7 OF 7

https://exploringrealhistory.blogspot.com/2021/01/part-7-of-7-invisible-rainbowbees-birds.html

16. Bees, Birds, Trees, and Humans

footnotes

Chapter 14. Suspended Animation 

1. Beard 1880, pp. 2-3; Beard 1881a, pp. viii, ix, 105. 

2. Weindruch and Walford 1988. 

3. Walford 1982. 

4. Riemers 1979.

5. Austad 1988.

6. Dunham 1938. 

7. Johnson et al. 1984.

8. Fischer-Piette 1989. 

9. Hansson et al. 1953. 

10. Colman et al. 2013. 

11. Ross and Bras 1965; for other studies of tumors in rats, see Weindruch and Walford, pp. 76-84. 

12. Colman et al. 2009; Mattison et al. 2003. 

13. Griffin 1958, p. 35. 

14. Ramsey et al. 2000; Lynn and Wallwork 1992. 

15. Ramsey et al. 2000.

16. Ordy et al. 1967. 

17. Spalding et al. 1971. 

18. Perez et al. 2008. 

19. Tryon and Snyder 1971. 

20. Caratero et al. 1998. 

21. Okada et al. 2007. 

22. Suzuki et al. 1998. 

Chapter 15. You mean you can hear electricity? 

1. Grapengiesser 1801, p. 133. Quoted in Brenner 1868, p. 38. 

2. Brenner 1868, pp. 41, 45. 

3. Tousey 1921, p. 469. 

4. Meyer 1931. 

5. Gersuni and Volokhov 1936. 

6. Stevens and Hunt 1937, unpublished, described in Stevens and Davis 1938, pp. 354-55. 

7. Moeser, W. “Whiz Kid, Hands Down,” Life, September 14, 1962. 

8. Einhorn 1967. 

9. Russell et al. 1986. 

10. See also Degens et al. 1969. 

11. Lissman, p. 184; Offutt 1984, pp. 19-20. 

12. de Vries 1948a, 1948b. 

13. Honrubia 1976; Mountain 1986; Ashmore 1987. 

14. Zwislocki 1992; Gordon, Smith and Chamberlain 1982, cited in Zwislocki. 

15. Nowotny and Gummer 2006. 

16. Brenner 1868. 

17. Mountain 1986. 

18. Mountain et al. 1986; Ashmore 1987; Honrubia and Sitko 1976. 

19. Lenhardt 2003. 

20. Combridge and Ackroyd 1945, Item No. 7, p. 49. 

21. Gavrilov et al. 1980. 

22. Qin et al. 2011. 

23. Stevens 1938, p. 50, fig. 17; Corso 1963; Moller and Pederson 2004, figs. 1-3; Stanley and Walker 2005. 

24. Stevens 1937. 

25. Environmental Health Criteria 137, 1993 edition, pp. 160 and 161, figs. 23 and 24. 

26. Duane Dahlberg, Ph.D., personal communication. 

27. Petrie 1963, pp. 89-92. 

28. Maggs 1976. 

29. Reported by the Low Frequency Noise Sufferer’s Association of England, Jean Skinner, personal communication; by Sara Allen of Taos, New Mexico, personal communication; and by Mullins and Kelley 1995. 

30. Calculation based on Jansky and Bailey 1962, fig. 35, Ground Wave Field Intensity; and Garufi 1989, fig. 6, U.S. Coast Guard Conductivity Map. 

31. In Africa, only Egypt, Tunisia, Ghana, Senegal, Ethiopia, Zambia, Zimbabwe, and South Africa currently have bans in place or in progress. In the Middle East, only Israel, Lebanon, Kuwait, Bahrain, Qatar, and the United Arab Emirates currently have bans. Other countries where prohibition is neither in place nor in progress include Haiti, Jamaica, St. Kitts and Nevis, Granada, Antigua and Barbuda, St. Vincent and the Grenadines, St. Lucia, Trinidad and Tobago, Dominica, Venezuela, Bolivia, Paraguay, Uruguay, Suriname, Albania, Moldova, Belarus, Uzbekistan, Kyrgyzstan, Turkmenistan, Mongolia, Turkey, Afghanistan, Pakistan, Nepal, Bhutan, India, Bangladesh, Myanmar, Singapore, Cambodia, Laos, Indonesia, East Timor, Papua New Guinea, New Zealand, Bosnia and Herzegovinia, Kosovo, and North Macedonia. 

32. Signal structure for GSM: superframe (6.12 sec), control multiframe (235.4 msec), traffic multiframe (120 msec), frame (4.615 msec), time slot (0.577 msec), symbol (270,833 per second per channel, 33,850 per second per user). Signal structure for UMTS: frame (10 msec), time slot (0.667 msec), symbol (66.7 μsec), chip (0.26 μsec). Signal structure for LTE: frame (10 msec), half-frame (5 msec), subframe (1 msec), slot (0.5 msec), symbol (66.7 μsec). 

33. Mild and Wilén 2009. 

34. Hutter et al. 2010. 

35. National Center for Health Statistics 1982-1996. 

36. Shargorodsky et al. 2010. 

37. Del Bo et al. 2008.

38. Nondahl et al. 2012.



