Thursday, February 16, 2023

Part 3: Sandworm: A New Era of Cyberwar ... Flashback: Aurora ... Flashback: Moonlight Maze ... Flashback: Estonia ... Flashback: Stuxnet

Sandworm
A New Era of Cyberwar
by Andy Greenberg
PART II ORIGINS 
Once men turned their thinking over to machines in the hope that this would set them free. 
But that only permitted other men with machines to enslave them.

10 
FLASHBACK: AURORA 
Nine years before his Ukraine trip, on a piercingly cold and windy morning in March 2007, Mike Assante arrived at an Idaho National Laboratory facility thirty-two miles west of Idaho Falls, a building in the middle of a vast, high desert landscape covered with snow and sagebrush. He walked into an auditorium inside the visitors’ center, where a small crowd was gathering. The group included officials from the Department of Homeland Security, the Department of Energy, and the North American Electric Reliability Corporation (NERC), executives from a handful of electric utilities across the country, and other researchers and engineers who, like Assante, were tasked by the national lab to spend their days imagining catastrophic threats to American critical infrastructure. 

At the front of the room was an array of video monitors and data feeds, set up to face the room’s stadium seating, like mission control at a rocket launch. The screens showed live footage from several angles of a massive diesel generator. The machine was the size of a school bus, a mint green, gargantuan mass of steel weighing twenty-seven tons, about as much as an M3 Bradley tank. It sat a mile away from its audience in an electrical substation, producing enough electricity to power a hospital or a navy ship and emitting a steady roar. Waves of heat coming off its surface rippled the horizon in the video feed’s image. 

Assante and his fellow INL researchers had bought the generator for $300,000 from an oil field in Alaska. They’d shipped it thousands of miles to the Idaho test site, an 890-square-mile piece of land where the national lab maintained a sizable power grid for testing purposes, complete with sixty-one miles of transmission lines and seven electrical substations. 

Now, if Assante had done his job properly, they were going to destroy it. And the assembled researchers planned to kill that very expensive and resilient piece of machinery not with any physical tool or weapon but with about 140 kilobytes of data, a file smaller than the average cat GIF shared today on Twitter. 
■ 
Three years earlier, Assante had been the chief security officer at American Electric Power, a utility with millions of customers in eleven states from Texas to Kentucky. A former navy officer turned cybersecurity engineer, Assante had long been keenly aware of the potential for hackers to attack the power grid. But he was dismayed to see that most of his peers in the electric utility industry had a relatively simplistic view of that still-theoretical and distant threat. If hackers did somehow get deep enough into a utility’s network to start opening circuit breakers, the industry’s common wisdom at the time was that staff could simply kick the intruders out of the network and flip the power back on. “We could manage it like a storm,” Assante remembers his colleagues saying. “The way it was imagined, it would be like an outage and we’d recover from the outage, and that was the limit of thinking around the risk model.” 

But Assante, who had a rare level of crossover expertise between the architecture of power grids and computer security, was nagged by a more devious thought. What if attackers didn’t merely hijack the control systems of grid operators to flip switches and cause short-term blackouts, but instead reprogrammed the automated elements of the grid, components that made their own decisions about grid operations without checking with any human? 

In particular, Assante had been thinking about a piece of equipment called a protective relay. Protective relays are designed to function as a safety mechanism to guard against dangerous physical conditions in electric systems. If lines overheat or a generator goes out of sync, it’s those protective relays that detect the anomaly and open a circuit breaker, disconnecting the trouble spot, saving precious hardware, even preventing fires. A protective relay functions as a kind of lifeguard for the grid. 

But what if that protective relay could be paralyzed—or worse, corrupted so that it became the vehicle for an attacker’s payload? 

That disturbing question was one Assante had carried over to Idaho National Laboratory from his time at the electric utility. Now, in the visitor center of the lab’s test range, he and his fellow engineers were about to put his most malicious idea into practice. The secret experiment was given a code name that would come to be synonymous with the potential for digital attacks to inflict physical consequences: Aurora. 
■ 
The test director read out the time: 11:33 a.m. He checked with a safety engineer that the area around the lab’s diesel generator was clear of bystanders. Then he sent a go-ahead to one of the cybersecurity researchers at the national lab’s office in Idaho Falls to begin the attack. Like any real digital sabotage, this one would be performed from miles away, over the internet. The test’s simulated hacker responded by pushing roughly thirty lines of code from his machine to the protective relay connected to the bus-sized diesel generator. 

The inside of that generator, until that exact moment of its sabotage, had been performing a kind of invisible, perfectly harmonized dance with the electric grid to which it was connected. Diesel fuel in its chambers was aerosolized and detonated with inhuman timing to move pistons that rotated a steel rod inside the generator’s engine—the full assembly was known as the “prime mover”—roughly 600 times a minute. That rotation was carried through a rubber grommet, designed to reduce any vibration, and then into the electricity-generating components: a rod with arms wrapped in copper wiring, housed between two massive magnets so that each rotation induced electrical current in the wires. Spin that mass of wound copper fast enough, and it produced 60 hertz of alternating current, feeding its power into the vastly larger grid to which it was connected. 

A protective relay attached to that generator was designed to prevent it from connecting to the rest of the power system without first syncing to that exact rhythm: 60 hertz. But Assante’s hacker in Idaho Falls had just reprogrammed that safeguard device, flipping its logic on its head. 

At 11:33 a.m. and 23 seconds, the protective relay observed that the generator was perfectly synced. But then its corrupted brain did the opposite of what it was meant to do: It opened a circuit breaker to disconnect the machine. 

When the generator was detached from the larger circuit of Idaho National Laboratory’s electrical grid and relieved of the burden of sharing its energy with that vast system, it instantly began to accelerate, spinning faster, like a pack of horses that had been let loose from its carriage. As soon as the protective relay observed that the generator’s rotation had sped up to be fully out of sync with the rest of the grid, its maliciously flipped logic immediately reconnected it to the grid’s machinery. 

The moment the diesel generator was again linked to the larger system, it was hit with the wrenching force of every other rotating generator on the grid. All of that equipment pulled the relatively small mass of the diesel generator’s own spinning components back to its original, slower speed to match its neighbors’ frequencies. 

On the visitor center’s screens, the assembled audience watched the giant machine shake with sudden, terrible violence, emitting a sound like a deep crack of a whip. The entire process from the moment the malicious code had been triggered to that first shudder had spanned only a fraction of a second. 

Black chunks began to fly out of an access panel on the generator, which the researchers had left open to watch its internals. Inside, the black rubber grommet that linked the two halves of the generator’s shaft was tearing itself apart. 

A few seconds later, the machine shook again as the protective relay code repeated its sabotage cycle, disconnecting the machine and reconnecting it out of sync. This time a cloud of gray smoke began to spill out of the generator, perhaps the result of the rubber debris burning inside it. 

Assante, despite the months of effort and millions of dollars in federal funds he’d spent developing the attack they were witnessing, somehow felt a kind of sympathy for the machine as it was being torn apart from within. “You find yourself rooting for it, like the little engine that could,” Assante remembered. “I was thinking, ‘You can make it!’ ” 

The machine did not make it. After a third hit, it released a larger cloud of gray smoke. “That prime mover is toast,” an engineer standing next to Assante said. After a fourth blow, a plume of black smoke rose from the machine thirty feet into the air in a final death rattle. 

The test director ended the experiment and disconnected the ruined generator from the grid one final time, leaving it deathly still. In the forensic analysis that followed, the lab’s researchers would find that the engine shaft had collided with the engine’s internal wall, leaving deep gouges in both and filling the inside of the machine with metal shavings. On the other side of the generator, its wiring and insulation had melted and burned. The machine was totaled. 

In the wake of the demonstration, a silence fell over the visitor center. “It was a sober moment,” Assante remembers. The engineers had just proven without a doubt that hackers who attacked an electric utility could go beyond a temporary disruption of the victim’s operations: They could damage its most critical equipment beyond repair. “It was so vivid. You could imagine it happening to a machine in an actual plant, and it would be terrible,” Assante says. “The implication was that with just a few lines of code, you can create conditions that were physically going to be very damaging to the machines we rely on.” 

But Assante also remembers feeling something weightier in the moments after the Aurora experiment. It was a sense that, like Robert Oppenheimer watching the first atomic bomb test at another U.S. national lab six decades earlier, he was witnessing the birth of something historic and immensely powerful. 

“I had a very real pit in my stomach,” Assante says. “It was like a glimpse of the future.”

11 
FLASHBACK: MOONLIGHT MAZE 
The known history of state-sponsored hacking stretches back three decades before Russia’s hackers would switch off the power to hundreds of thousands of people and two decades before experiments like the Aurora Generator Test would prove how destructive those attacks could be. It began with a seventy-five-cent accounting error. 

In 1986, Cliff Stoll, a thirty-six-year-old astronomer working as the IT administrator at Lawrence Berkeley National Laboratory, was assigned to investigate that financial anomaly: Somehow, someone had remotely used one of the lab’s shared machines without paying the usual per-minute fee that was typical for online computers at the time. He quickly realized the unauthorized user was a uniquely sophisticated hacker going by the name “Hunter” who had exploited a zero-day vulnerability in the lab’s software. Stoll would spend the next year hunting the hunter, painstakingly tracking the intruder’s movements as he stole reams of files from the lab’s network. 

Eventually, Stoll and his girlfriend, Martha Matthews, created an entire fake collection of files to lure the thief while watching him use the lab’s computers as a staging point to attempt to penetrate targets including the Department of Defense’s MILNET systems, an Alabama army base, the White Sands Missile Range, a navy data center, air force bases, NASA’s Jet Propulsion Laboratory, defense contractors like SRI and BBN, and even the CIA. Meanwhile, Stoll was also tracing the hacker back to his origin: a university in Hannover, Germany. 

Thanks in part to Stoll’s detective work, which he captured in his seminal cybersecurity book, The Cuckoo’s Egg, German police arrested Stoll’s hacker along with four of his West German associates. Together, they had approached East German agents with an offer to steal secrets from Western government networks and sell them to the KGB. 

All five men in the crew were charged with espionage. “Hunter,” whose real name was Markus Hess, was given twenty months in prison. Two of the men agreed to cooperate with prosecutors to avoid prison time. The body of one of those cooperators, thirty-year-old Karl Koch, was later found in a forest outside Hannover, burned beyond recognition, a can of gasoline nearby. 
■ 
Ten years after those intrusions, Russia’s hackers returned. This time, they were no longer foreign freelancers but organized, professional, and highly persistent spies. They would pillage the secrets of the American government and military for years. 

Starting in October 1996, the U.S. Navy, the U.S. Air Force, and agencies including NASA, the Department of Energy, the Environmental Protection Agency, and the National Oceanic and Atmospheric Administration began detecting sporadic intrusions on their networks. Though the interlopers routed their attacks through compromised machines from Colorado to Toronto to London, the first victims of the hacking campaign nonetheless managed to trace the hackers to a Moscow-based internet service provider, Cityline. 

By June 1998, the Pentagon’s Defense Information Systems Agency was investigating the breaches, along with the FBI and London’s Metropolitan Police. They determined that the hackers were stealing an enormous volume of data from U.S. government and military agencies: By one estimate, the total haul was equivalent to a stack of paper files as high as the Washington Monument. As the investigators came to grips with the size of the unprecedented cyber espionage operation they were facing, they gave it a name: Moonlight Maze. 

By 1998, it was clear that the Moonlight Maze hackers were almost certainly Russian. The timing of their operations showed that the intruders were working during Moscow daylight hours. Investigators went digging through the records of academic conferences and found that Russian scientists had attended conferences on topics that closely matched the subjects of the files they’d stolen from the U.S. agencies. One former air force forensics expert, Kevin Mandia, even reverse engineered the hackers’ tools, stripping away the code’s layers of obfuscation and pulling out strings of Russian language. (Decades later, Mandia would be John Hultquist's boss at FireEye, the company that acquired iSight Partners following its similar discovery of Sandworm's Russian origins.) 

By all appearances, it looked like a Russian intelligence agency was pilfering the secrets of the U.S. government, in what would soon be recognized as the first state-on-state cyber spying campaign of its kind. But proving that the spies were working for the Russian government itself was far more difficult than proving they were merely located in Russia. It was a fundamental problem that would plague hacker investigations for decades to come. Unless detectives could perform the nearly impossible feat of following the footprints of an intrusion back to an actual building or identify the individuals by name, governments could easily deny all responsibility for their spying, pinning the blame on bored teenagers or criminal gangs. 

So in early 1999, after the desperate Moonlight Maze investigators had failed for years to stop the penetrations or prove any definitive connection to the Kremlin, they resorted to a backup plan: They asked Russia for help. 
■ 
In March 1999, FBI agents hosted officials from Russia’s Ministry of Internal Affairs at an upscale D.C. restaurant, toasted them with vodka, and formally requested the assistance of Russian law enforcement to track down the hackers who were almost certainly based in Moscow. 

The ministry offered a surprisingly friendly response, promising to lend “aggressive investigative support.” After all, this was in the post-Soviet, pre-Putin era of the 1990s. America had, ostensibly, won the Cold War. The new, post-perestroika Russia under President Boris Yeltsin seemed as if it might become an actual democratic ally of the West. 

Less than two weeks later, the American investigators flew to Moscow to meet with Russian officials. One, a general, was particularly friendly with the U.S. delegation, inviting the investigators to another vodka-drenched dinner. Too friendly, it turned out: At the end of that second evening of diplomacy, the inebriated general nearly caused an international incident by inserting his tongue uninvited into a female FBI agent’s ear. 

But the next day, the ministry really did follow through on its offer of cooperation: The hungover general ordered a subordinate to take the Americans to the offices of the internet service providers that had been used by the Moonlight Maze hackers, including Cityline. The investigators soon found that Cityline offered its internet services not just to civilians but to the Russian government, a clue that they hoped might lead to evidence of the Kremlin’s involvement. 

Then, something unexpected happened. In another meeting at the Russian Defense Ministry, the same general shocked the group by straightforwardly confirming that the Russian government was behind the Moonlight Maze break-ins. 

The intrusions had been staged through the Russian Academy of Sciences, the general explained, but the individuals responsible were “those motherfuckers in intelligence.” He declared that such behavior toward Russia’s newfound friends in the United States would not be tolerated. The U.S. delegation, hardly believing its luck, congratulated each other on the successful trip. Perhaps what had seemed like an intractable problem, this new plague of Russian cyber spying, could be solved with diplomacy. 

The Americans’ optimism was short-lived. The next day, the delegation learned from their Russian handlers that their schedule had been filled with sightseeing trips around Moscow. When the same thing happened the day after that, the investigators began to grow frustrated. They asked their Russian contacts about the whereabouts of the general they’d been meeting with and received no clear response. After a third day without further meetings, they knew that their brief, unexpected interlude of Russo-American cooperation on cybersecurity was over. 

The confused investigators could only guess at what had occurred. Their friendly general, it seemed, had missed the memo on the Kremlin’s hacking campaign. He had considered it a rogue aberration instead of what it was: a powerful new capability, and one that the Russian government was honing into a central tool for intelligence gathering in the post-Soviet era. The mistake no doubt carried serious consequences. The Americans never saw the general again. 

When the investigators returned to the United States, they found that Moonlight Maze’s intrusions had ceased. For a moment, it seemed, their probe might have chastened the Russian government into ordering a stop to the espionage spree. Then, just two months later, in the spring of 1999, military network administrators saw that the same relentless hacking had restarted, now with better stealth and more obfuscation in the code of its tools. A new age of state-sponsored cyber espionage had begun. 

Not long after that trip, in June 1999, the Department of Defense officially launched the Joint Task Force–Computer Network Defense, or JTF-CND, a new arm of the Pentagon devoted to the growing threat of digital intrusions. At the ribbon-cutting ceremony to celebrate the unit’s creation in August of that year, Deputy Secretary of Defense John Hamre discreetly alluded to the ongoing cybersecurity crisis the military was facing as Moonlight Maze continued to siphon its secrets. 

“The Department of Defense has been at cyberwar for the last half year,” Hamre told the audience. He didn’t name Moonlight Maze; the code name would only leak to the press months later. “Cyberspace isn’t just for geeks,” Hamre added in his JTF-CND speech. “It’s for warriors now.” 
■ 
What did Hamre mean by all this talk of warriors in cyberspace and that still-unfamiliar word, “cyberwar”? 

By the time of Hamre’s speech in 1999, the notion had already been tossed around in military studies circles for years. The term “cyberwar” had been coined in a 1987 Omni magazine article that defined it in terms of giant robots and autonomous weapon systems replacing and augmenting human soldiers. It described flying drones, self-guided tanks, and battlefields covered in the “carcasses of crippled machines.” 

But in 1993, another landmark paper scrapped that Terminator-style definition and gave cyberwar a far more influential meaning, expressing it in terms of military forces’ potential exploitation of information technology. That article by two analysts from the think tank Rand, John Arquilla and David Ronfeldt, appeared in the journal Comparative Strategy with the title “Cyberwar Is Coming!” (The exclamation point, Arquilla would later say, was intended “to show everybody how serious this was.”) 

The two Rand analysts defined cyberwar as any means of warfare that shifts the balance of knowledge in the attacker’s favor. Those tactics could include reconnaissance and espionage, but also, crucially, attacking the enemy’s command-and-control systems. “It means disrupting if not destroying the information and communications systems, broadly defined to include even military culture, on which an adversary relies in order to ‘know’ itself: who it is, where it is, what it can do when, why it is fighting, which threats to counter first, etc.,” Arquilla and Ronfeldt wrote. “As an innovation in warfare, we anticipate that cyberwar may be to the 21st century what blitzkrieg was to the 20th century.”* 

But by the time of Hamre’s ribbon-cutting speech half a decade later, a darker conception of cyberwar had slowly begun to take shape. Hamre had said in a 1997 congressional hearing that the United States must prepare for an “electronic Pearl Harbor”: a calamitous, surprise cyber attack designed not just to take out military command-and-control communications but to physically devastate American infrastructure. 

That more apocalyptic vision of cyberwar had been brewing in government and military analysis circles, too. What if, the war wonks had only just begun to wonder, hackers could reach out from the internet and into the physical systems that underpin civilization? 

Three years after Arquilla and Ronfeldt’s cyberwar article, in 1996, Rand’s own think tankers had run hacker war-game simulations around this exact question. In that exercise, dramatically titled “The Day After…in Cyberspace,” Rand’s analysts imagined catastrophic, lethal consequences from cyber attacks that affected militaries and civilians alike: the derailment of a train in Germany, the disruption of controls at the Saudi Arabian oil firm Aramco, cutting power to a U.S. air base, crashing an airliner in Chicago, or sparking panic on the New York and London stock exchanges. 

This vision of a digital Armageddon took a chilling leap beyond the picture of cyberwar that Arquilla and Ronfeldt had described. Instead of merely using a cyberattack to cut the communicative strings of a military’s soldiers and weapons, what if cyber war meant that hackers themselves would become the soldiers? What if cyber attacks became their weapons, as physically destructive as a bullet or a warhead? 

This notion of a physically debilitating attack by digital means, as Rand imagined it, raised troubling questions about the foundations of modern society. “If one quarter of the air traffic control systems were inoperable for 48 hours, could air transportation continue?” the analysts asked themselves in their final report on the exercises. “Would two thirds of banking systems suffice; if so, for how long?” 

As they wondered aloud about these unthinkable scenarios, the war gamers came to the consensus that most critical of all was the vulnerability of the electricity supply, upon which all other layers of modern society’s technological infrastructure depend. “If the power system is at risk,” they wrote, “everything is at risk.” 
■ 
In 1999, cyberwar was, more or less, science fiction. By almost any definition, John Hamre was getting ahead of himself in his foreboding speech. Moonlight Maze wasn’t cyberwar. It was straightforward cyberespionage. 

Even as the Russian hackers stole reams upon reams of data, they weren’t using their access to military networks to sabotage or corrupt those systems. There was no sign that they were seeking to disrupt or deceive U.S. command and control to gain the kind of tactical advantage Arquilla and Ronfeldt had described. And they certainly weren’t reaching out into the physical world to cause lethal mayhem and blackouts. 

But Moonlight Maze did demonstrate that state-sponsored hackers could gain far deeper and broader access than many in the U.S. government had thought possible. And next time, they might not use those abilities for mere spying. 

In January 2000, President Bill Clinton himself encapsulated the threat in an ominous speech on the White House’s South Lawn. The brief remarks were intended to unveil a plan to kick-start U.S. cybersecurity research and recruiting. Instead, they resonate as a warning from the past. “Today, our critical systems, from power structures to air traffic control, are connected and run by computers,” Clinton said. 

There has never been a time like this in which we have the power to create knowledge and the power to create havoc, and both those powers rest in the same hands. We live in an age when one person sitting at one computer can come up with an idea, travel through cyberspace, and take humanity to new heights. Yet, someone can sit at the same computer, hack into a computer system and potentially paralyze a company, a city, or a government. 

The day when hackers would inflict that scale of disruption hadn’t yet arrived. But Clinton’s imagination of that future wasn’t wrong. In fact, it was just beyond the horizon. 

* The authors suggested that the sort of cyberwar they described might actually be a less violent and lethal form of military combat, one in which an attacker might be able to quickly pierce to the command center of an enemy army rather than fight a grueling and bloody war of attrition. “It is hard to think of any kind of warfare as humane, but a fully articulated cyber war doctrine might allow the development of a capability to use force not only in ways that minimize the costs to oneself, but which also allow victory to be achieved without the need to maximize the destruction of the enemy,” they wrote. “If for no other reason, this potential of cyberwar to lessen war’s cruelty demands its careful study and elaboration.”

12 
FLASHBACK: ESTONIA 
Toomas Hendrik Ilves’s internet was down. 

Or so it seemed to the fifty-three-year-old president of Estonia when he woke up on his family farm one Saturday in late April 2007. At first, he assumed it must be a problem with the connection at his remote farmhouse, surrounded by acres of rolling hills. Ilves bristled at the latest annoyance. The day before, he’d grudgingly allowed the country’s security services to smuggle him out of the presidential palace in the capital of Tallinn and bring him 125 miles south to his family estate, named Ärma, where a perimeter of national guardsmen stood watch. 

The last-minute move was designed to protect Ilves from an increasingly volatile situation in Tallinn. Violence had shaken the city for days. Angry rioters, composed largely of the country’s Russian-speaking minority, had overturned cars and smashed storefronts, causing millions of dollars in damage. They’d brawled with police and called for the government’s resignation—a demand echoed by Russian government statements. 

All of that chaos had been triggered by a symbolic slight: Sixteen years after the fall of the Soviet Union, the Estonian government had finally made the decision to relocate from central Tallinn a statue of a Soviet soldier, surrounded by a small collection of graves of World War II casualties. To the country’s ethnic Russians, the graves and the six-and-a-half-foot-tall bronze monument served as a remembrance of the Soviet Union’s bloody sacrifices to defeat Estonia’s Nazi occupiers. To most Estonians, they were instead a reminder of the grim Soviet occupation that followed, marked by mass deportations to Siberia and decades of economic stagnation. 

The statue had served for years as a flash point for Estonia’s tensions with Russia and its own Russian-speaking population. When government workers exhumed the Soviet war graves and transferred them, along with the statue, to a military cemetery on the edge of town in late April 2007, pro-Russian Estonians flooded into central Tallinn in a seething mass of unrest. 

Ilves had left Tallinn reluctantly and remained anxious about the escalating riots. So the first thing he did upon waking up early that morning in his farmhouse’s second-floor bedroom was to open his MacBook Pro and visit the website for Estonia’s main newspaper, Postimees, looking for an update on the riots and Russia’s calls for his government’s ouster. But the news site mysteriously failed to load. His browser’s request timed out and left him with an error message. 

He tried other Estonian media sites, and they too were down. Was it his computer’s Wi-Fi card? Or his router? But no, he quickly discovered that the British Financial Times loaded just fine. Then Ilves tried a few Estonian government websites. They too were unreachable. 

Ilves called his IT administrator and asked what might be the problem with the Ärma connection. The confused presidential tech staffer told Ilves that it wasn’t unique to him. Estonian sites seemed to be down for everyone. Somehow a significant fraction of Estonia’s entire domestic web was crippled. 

Estonia had, over the prior decade, sprung out of its Soviet doldrums to become one of the most digitally vibrant countries in the world. The internet had become a pillar of Estonian life: 95 percent of its citizens’ banking took place on the web, close to 90 percent of income taxes were filed online, and the country had even become the first in the world to enable internet voting. Ilves himself took significant credit for pushing through many of those initiatives as a minister and later as president. Now it seemed that the uniquely web-friendly society he’d helped to build was experiencing a uniquely web-centric meltdown. 

As Ilves clicked through broken sites in his remote farmhouse, he sensed something far worse at play than simple broken technology. It seemed he’d stumbled into the fog of war, a feeling of strategic blindness and isolation in a critical moment of conflict. “Is this a prelude to something?” he asked himself. “What the hell is going on?” 
■ 
The attacks had started the night before. Hillar Aarelaid had been expecting them. 

The head of Estonia’s Computer Emergency Response Team, or CERT, had been watching hacker forums for days as pseudonymous figures planned out an operation to unleash a flood of junk traffic at Estonian websites in retaliation for the bronze soldier’s removal. They asked volunteers to aim a series of repeated pings from their computers at a long list of targets, massing into a brute-force distributed denial-of-service attack that would overwhelm the sites’ servers. 

When the first waves of that inundation hit, Aarelaid was at a pub in a small town in Ireland, drinking a Guinness after finishing two weeks of forensic training at a nearby police academy. His phone rang with a call from his co-worker at the Estonian CERT. “It’s started,” the man told him. The two Estonians agreed, with typical brevity, to monitor the attacks and respond with the countermeasures they had planned: work with the sites’ administrators to increase bandwidth, bring backup servers online, and filter out the malicious traffic. Aarelaid, a laconic former beat cop with close-cropped hair and perpetual stubble, hung up after no more than ten seconds and went back to drinking his Guinness. 

The next morning, as President Ilves was still puzzling over his farmhouse internet connection in the south of Estonia, Aarelaid’s CERT colleague picked him up at the Tallinn airport and briefed him on the attackers’ progress. The flood of malicious data was growing, and so was the target list. Most of the media and government sites from the Ministry of Defense to the parliament were under bombardment—a barrage big enough that many were now off-line. 

Aarelaid remained unmoved. These sorts of distributed denial-of-service attacks were low-hanging fruit for untalented hackers, mostly used for petty extortion. He still believed that the usual response to the annoyance would win out when the attackers got bored of the arms race. “We can handle this,” Aarelaid told his CERT co-worker. He considered the attack a mere “cyber riot,” the internet extension of the improvised chaos playing out on Tallinn’s streets. 

By the third day of the attacks, however, it was painfully clear to Aarelaid that these weren’t run-of-the-mill website takedowns, and the normal responses weren’t going to stop them. With every attempt to block the streams of malicious traffic, the attackers altered their techniques to evade filters and renew their pummeling. More and more computers were being enlisted to add their firepower to the toxic flood. Individual volunteer attackers had given way to massive botnets of tens of thousands of enslaved machines controlled by criminal hackers including the notorious Russian Business Network, a well-known cybercriminal operation responsible for a significant portion of the internet’s spam and credit-card-fraud campaigns. That meant malware-infected PCs all over the world, from Vietnam to the United States, were now training fire hoses of data at Estonia. 

The attackers’ goals shifted, evolving from mere denial-of-service attacks to defacements, replacing the content of websites with swastikas and pictures of the country’s prime minister with a Hitler mustache, all in a coordinated effort to paint Estonians as anti-Russian fascists. The target list, too, was growing to absurd proportions, hitting everything from banks to arbitrary e-commerce sites to the community forums of Tallinn’s apartment complexes. “Twenty, fifty, a hundred sites, it’s not possible anymore with those numbers to respond,” says Aarelaid. “By Sunday, we realized the normal response wasn’t going to work.” 

On Monday morning, Aarelaid held a meeting with administrators of key government and commercial target sites at the CERT office in central Tallinn. They agreed that it was time for a draconian new approach. Instead of trying to filter out known sources of malicious traffic, they would simply blacklist every web connection from outside Estonia. 

As Estonia’s web administrators put that blacklist into effect, one by one, the pressure on their servers was lifted: The small fraction of the attack traffic originating from inside Estonia itself was easily absorbed. But the strategy came at a certain cost: It severed the Estonian media from the rest of the world, preventing it from sharing its stories of riots and digital bombardment. The tiny country had successfully locked out its foreign attackers. But it had also locked itself in. 
■ 
Over the days that followed that lockdown, Estonia’s CERT began the slow process of relieving the country’s internet isolation. Aarelaid and his colleagues worked with internet service providers around the world to painstakingly identify and filter out the malicious machines hosted by each of those global sources of traffic. The attacks were still growing, mutating, and changing their origins—until finally, a week after the attacks had started, they suddenly stopped. 

In the eerie lull that followed, however, Estonia’s defenders knew that the attackers would return. On May 9, Russia celebrates Victory Day, a holiday commemorating the Soviet defeat of Hitler after four years of immeasurable losses and sacrifice. Chatter on hacker forums made clear that the next wave of attacks was being saved for that symbolic day, with posters rallying fellow digital protesters to the cause. “You do not agree with the policy of eSStonia???” asked one poster on a Russian forum, using the “SS” to emphasize Estonia’s supposed Nazi ties. “You may think you have no influence on the situation??? You CAN have it on the Internet!” 

“The action will be massive,” wrote another. “It’s planned to take Estonnet the fuck down:).” 

At almost exactly the stroke of midnight Moscow time on May 8, another barrage hit the Estonian web with close to a million computers conscripted into dozens of botnets, taking down fifty-eight sites simultaneously. 

All that night and through the days that followed, Aarelaid coordinated with the internet service providers he’d befriended to filter out new malicious traffic. But in the second wave of the attack, some of the hackers had also moved beyond mere brute-force flooding. He began to see more sophisticated attacks exploiting software vulnerabilities that allowed the hackers to briefly paralyze internet routers, taking down internet-reliant systems that included ATMs and credit card systems. “You go to the shop and want to pay for milk and bread,” Aarelaid says. “You cannot pay with a card in the shop. You cannot take cash from the ATM. So you go without milk and bread.” 

As the escalating attacks wore on, however, they also began to lose their shock-and-awe effect on Estonia’s webmasters and its population. As Aarelaid tells it, he and IT administrators around the country developed a typically Estonian stoicism about the attacks. They’d go to sleep each night, giving the attackers free rein to tear down their targets at will. Then the defenders would wake up the next morning and clean up the mess they found, filtering the new traffic and restarting routers to bring the country’s digital infrastructure back online before the start of the workday. Even the more sophisticated router attacks had only temporary effects, Aarelaid says, curable with a reboot. 

He compares this siege-defense routine to the Estonian ability to tolerate subzero temperatures in winters, with only a few hours of sun a day, collectively honed over thousands of years. “You go into work and it’s dark. You come home and it’s dark. For a long time, you don’t see any light at all, so you’re ready for these kinds of things,” Aarelaid says. “You prepare your firewood.” 
■ 
The attacks ebbed and flowed for the rest of that May until, by the end of the month, they had finally dwindled and then disappeared. They left behind questions that, even a decade later, haven’t been answered with certainty: Who was behind the attacks? And what did they intend to achieve? 

Estonians who found themselves in the epicenter of the events, like Aarelaid and Ilves, believed from the first that Russia’s government— not merely its patriotic hackers—had a hand in planning and executing Estonia’s bombardment. After the initial, weak smatterings of malicious traffic, the attacks had come to seem too polished, too professional in their timing and techniques to be the work of rogue hacktivists. Who, after all, was coordinating between dozens of botnets, seemingly controlled by disparate Russian crime syndicates? An analysis by the security firm Arbor Networks also found that a telling subset of the traffic sources overlapped with earlier distributed denial-of-service attacks aimed at the website of Garry Kasparov, an opposition party presidential candidate and outspoken critic of the Kremlin. 

“It was a very organized thing, and who can organize this? Criminals? Nope,” says Aarelaid. “It was a government. And the Russian government wanted this most.” 

Other Estonians in the thick of the attacks saw them as a kind of partnership between nongovernment hackers and their government handlers—or in the case of the gangs like the Russian Business Network, cybercriminals directed by Kremlin patrons, in exchange for the country’s law enforcement turning a blind eye to their business operations. “It’s like feudalism. You can do some kind of business because some boss in your area allows you to, and you pay him some tribute,” says Jaan Priisalu, who at the time of the attacks was the head of IT security at Estonia’s largest bank, Hansabank. “If your boss is going to war, you’re also going to war.” 

And in early 2007, Russia’s boss was indeed going to war, or at least setting the thermostat for a new cold one. Two months before the Estonian attacks, Putin had taken the stage at the Munich Security Conference and given a harsh, history-making speech that excoriated the United States and NATO for creating what he saw as a dangerous imbalance in global geopolitics. He railed against the notion of a post–Cold War “unipolar” world in which no competing force could check the power of the United States and its allies. 

Putin clearly felt the direct threat of that rising, singular superpower conglomerate. After all, Estonia had joined NATO’s alliance three years earlier, along with the other Baltic states of Lithuania and Latvia, bringing the group for the first time to Russia’s doorstep, less than a hundred miles from St. Petersburg. 

“NATO has put its frontline forces on our borders,” Putin said in his Munich speech. The alliance’s expansion, he continued, represents “a serious provocation that reduces the level of mutual trust. And we have the right to ask: against whom is this expansion intended?” Putin’s unspoken answer to that question was, of course, Russia—and himself. 

When the cyberattacks in Estonia peaked in intensity three months later, Putin didn’t hide his approval, even as his government denied responsibility. In a May 9 Victory Day speech, he gave his implicit blessing to the hackers. “Those who desecrate monuments to the heroes of the war are insulting their own people and sowing discord and new distrust,” he told a crowd in Moscow’s Red Square. 

Still, NATO never treated the Estonian cyberattacks as an overt act of aggression by the Russian state against one of NATO’s own. Under Article 5 of the Washington Treaty that lays out NATO’s rules, an attack against any NATO member is meant to be considered an attack against all of them, with a collective response in kind. But when President Ilves began speaking with his ambassadors in the first week of the cyberattacks, he was told that NATO members were unwilling to remotely consider an Article 5 response to the Russian provocations. This was, after all, a mere attack on the internet, not a life-threatening act of physical warfare. 

Ilves says he asked his diplomats to instead inquire about Article 4, which merely convenes NATO leaders for a “consultation” when a member’s security is threatened. The liaisons quickly brought back an answer: Even that milder step proved a nonstarter. How could they determine Russia was behind the provocations? After all, NATO’s diplomats and leaders hardly understood the mechanics of a distributed denial-of-service attack. The traffic’s source appeared to be Russian freelance hackers and criminals or, more confusing still to the lay observer, hijacked computers in countries around the world. 

Underlying all of that inaction, Ilves says, was another motivation: what he describes as a kind of fracture between western European NATO countries and eastern Europeans facing Russian threats. “There’s a sense that it’s ‘over there,’ that ‘they’re not like us,’ ” Ilves says, mocking what he describes as a “haughty, arrogant” tone of western European NATO members. “ ‘Oh, those eastern Europeans, they don’t like the Russians, so they have a failure and they blame it on Russia.’ ” 

In the end, NATO did essentially nothing to confront Russia in response to the Estonian attacks. Putin, it seemed, had tested a new method to bloody the nose of a NATO country with plausible deniability, using tools that were virtually impossible to trace to the Kremlin. And he’d correctly judged the lack of political will to defend NATO’s eastern European members from an innovative new form of mass sabotage. 

The events of those two months in Estonia would, in some circles, come to be seen as the first cyberwar, or, more creatively, “Web War I.” The cyberattacks were, in reality, hardly as catastrophic as any true war; the threat of an “electronic Pearl Harbor” still lay in the future. But the Russian government nonetheless appeared to have demonstrated an indiscriminate, unprecedented form of disruption of an adversary’s government and civil society alike. And it had gotten away with it.

13 
FLASHBACK: GEORGIA 
It was a few hours after nightfall when Khatuna Mshvidobadze learned Russian tanks were rolling toward her location. 

Mshvidobadze was, on the night of August 11, 2008, working late in her office at the NATO Information Center in central Tbilisi, the capital of the former Soviet republic of Georgia. She held a position at the time as the deputy director of that organization, a part of Georgia’s Ministry of Defense devoted to lobbying for the small Caucasus country to become part of NATO’s alliance. Much of the group’s work consisted of hosting events and persuading media to make the case for Georgia to join forces with its Western neighbors across the Black Sea. But in the summer of 2008, the NATO Information Center found itself with a new, far more urgent focus: combating the Kremlin’s attempts to dominate the media narrative surrounding a Russian invasion. 

War had broken out days earlier. Russia had moved troops and artillery into two separatist regions within Georgia’s borders, Abkhazia and South Ossetia. In response, the Georgian forces launched a preemptive strike. On August 7, they shelled military targets in the South Ossetian town of Tskhinvali, trying to gain the initiative in what they saw as an inevitable conflict fueled by the Kremlin’s aggression. But their plan, by all appearances, hadn’t accounted for the overwhelming force of the Russian response. 

Proclaiming that it was protecting Abkhazia and South Ossetia from Georgian oppression, Russia flooded the small country with more than twenty-five thousand troops, twelve hundred artillery vehicles, two hundred planes, and forty helicopters. Those numbers dwarfed Georgia’s army of fewer than fifteen thousand soldiers and its bare-bones air force of eight planes and twenty-five helicopters. By the second day of the war, the Kremlin had unleashed its Black Sea fleet of warships for the first time since World War II, sending an armada across the water to blockade Georgia’s coastline. The country had, in mere days, been outgunned and surrounded. 

By August 11, Russian forces were moving out of the separatist regions and into the heart of Georgian territory, taking the city of Gori and splitting the invaded country in two. By that evening, Russian tanks were poised to close the forty-mile stretch from Gori to the capital. 

For Mshvidobadze, working in her downtown Tbilisi office, that night of August 11 was the most chaotic of her life. To start, her building’s internet was inexplicably down, making her job of combating Russian military propaganda—including false claims that Georgians had been massacring civilians in South Ossetia and Abkhazia—nearly impossible. 

In the midst of that rising sense of helplessness, she received a phone call from her boss, the NATO Information Center’s director. Days prior, the director had traveled to the South Ossetian front to cover the unfolding conflict as a journalist, leaving her deputy, Mshvidobadze, to run the organization in her absence. Now Mshvidobadze’s boss wanted to warn her that the Russians were coming for Tbilisi. Everyone needed to evacuate. 

In the hour that followed, Mshvidobadze and her staff prepared for a potential occupation, deleting sensitive files and destroying documents that they feared might fall into Russian hands. Then, in a final injection of chaos, the power across the city suddenly went out—perhaps the result of physical sabotage by the invading forces. It was around midnight when the staff finally hurried out of the blacked-out building and parted ways. 

The NATO Information Center was, at the time, housed in a glass structure on a side street of Tbilisi’s Vake District, a trendy neighborhood known during Georgia’s Soviet era as the home of the city’s intelligentsia. Mshvidobadze walked a block to a busier street nearby and found a scene of utter societal breakdown. The power outage had left the streetlights dark, so that only the headlights of cars illuminated sidewalks. Drivers were frantic, ignoring all traffic laws and plowing through intersections with dead traffic signals—preventing her from even crossing the street. As she tried in vain to flag a taxi, other desperate pedestrians ran past her, some screaming in fear or crying. 

Mshvidobadze was determined to get home to her younger sister, who lived with her in an apartment across the city. But she couldn’t reach her or call a cabdriver to pick her up: Cell phones were working only intermittently, as desperate Tbilisians’ phones swamped telecom networks. 

It would be half an hour before she could finally get through to a driver who could find her amid the pandemonium. Until then, she remained frozen at the intersection, watching the city panic. “It was a terrible, crazy situation. You have to be in a war zone to understand the feeling,” she says. “All these thoughts were running through my head. I thought of my sister, my family, myself. I thought of the future of my country.” 
■ 
Jose Nazario had seen Georgia’s war coming, nearly a month earlier—not from the front lines of the Caucasus, but from his office in Michigan. Nazario, a security researcher for Arbor Networks, the cyberattack-tracking firm, had come into work that July morning at the company’s offices, a block from the south end of the University of Michigan’s campus in Ann Arbor, and started the day with his usual routine: checking the aftermath of the previous night’s botnet battles. 

To analyze the entire internet’s digital conflicts in real time, Arbor ran a system called Blade Runner, named for its bot-tracking purpose. It was part of a collection of millions of “honeypots”—virtual computers running on Arbor’s servers around the world, each of which was expressly designed to be hacked and conscripted into a botnet’s horde of enslaved PCs. Arbor used the computers as a kind of guinea-pig collective, harvesting them for malware samples and, more important for the company’s business model, to monitor the instructions the bots received from botnets’ command-and-control servers. Those instructions would allow them to determine whom the hackers were targeting and with what sort of firepower. 

That morning, the results of Nazario’s Blade Runner review turned up something strange. A major botnet was training its toxic torrent of pings at the website for the Georgian president, Mikheil Saakashvili. The site had apparently been punished with enough malicious traffic to knock it off-line. And the queries that had overwhelmed the site’s server had included a strange set of characters, what appeared to be a message for its administrators: “win+love+in+Rusia.” 

That strange and slightly misspelled string immediately hinted to Nazario that the attack wasn’t the usual criminal extortion takedown but something with a political bent. It looked more like the work of the botnets that had barraged Estonia the year prior, which he and the rest of Arbor’s staff had tracked with fascination. 

“It probably sounds better in the original Russian,” Nazario says of the message. “But it was pretty unambiguous.” He called up one of his favorite contacts for discussing the web’s geopolitical conflicts: John Hultquist, then a young analyst at the State Department with a focus on cybersecurity and eastern Europe. 

In the months since Hultquist had joined State, he and Nazario had been cultivating a mutually beneficial friendship. Hultquist, eager for access to Arbor Networks’ attack data, had initially called Nazario to offer him a ride to the airport during one of Nazario’s sales visits to D.C. Nazario was equally interested to hear Hultquist’s views on the foreign policy context for the attacks Arbor tracked. Since then, the two men had developed a routine: Hultquist would pick Nazario up at the end of his D.C. trips, and they’d drive to dinner at Jaleo, a tapas restaurant in Crystal City. There they’d talk over the latest attacks waged against targets ranging from Estonia to Ingushetia to Chechnya and then rush to the airport for Nazario’s flight home. 

After Nazario discovered the attack on Georgia’s president’s website, he and Hultquist quickly pieced together the larger picture: Tensions between Russia and Georgia were approaching a breaking point. Much like Ukraine, Georgia’s newly re-elected, pro-Western president was pushing the country toward NATO. If the country joined that alliance, it would represent NATO’s farthest expansion yet into Russia’s sphere of influence. That very idea, of course, infuriated the Kremlin. 

In response, Russia was slowly ramping up its military presence in Abkhazia and South Ossetia as part of a so-called peacekeeping force. When Georgia protested to NATO that Russia was quietly threatening its sovereignty, it was mostly dismissed, warned not to provoke a conflict with its massive, powerful neighbor. In the meantime, skirmishes and flashes of violence were breaking out in Georgia’s Russia-backed separatist regions, with bombings and intermittent firefights killing or wounding handfuls of separatists, as well as Georgian police and soldiers. 

Now it seemed to Nazario and Hultquist that the Russian government—or at least patriotic Russian hackers aligned with its goals—was using new tools to tighten the screws against Georgia, the same ones it had experimented with in its fracas with Estonia. Only this time the cyberattacks might be a prelude to an actual shooting war. 

That war arrived on August 7. A day later, a nearly simultaneous wave of distributed denial-of-service attacks hit thirty-eight websites, including the Ministry of Foreign Affairs, Georgia’s National Bank, its parliament, its supreme court, the U.S. and U.K. embassies in Tbilisi, and again, President Saakashvili’s website. As in Estonia, hackers defaced some sites to post pictures of Saakashvili alongside pictures of Hitler. And the attacks appeared to be centrally coordinated: They began within half an hour of one another and would continue unabated until shortly after noon on August 11, just as Russia was beginning to negotiate a cease-fire. 

As in Estonia, the attacks were impossible to tie directly to Moscow. They came, as all botnet attacks do, from all directions at once. But the security firm Secureworks and researchers at the nonprofit Shadowserver Foundation were able to connect the attacks with the Russian Business Network, the same cybercriminals whose botnets had contributed to the Estonian attacks, as well as more grassroots hackers organized through sites like StopGeorgia.ru. 

In some cases, the digital and physical attacks seemed uncannily coordinated. The hackers hit official sites and news outlets in the city of Gori, for instance, just before Russian planes began bombing it. 

“How did they know that they were going to drop bombs on Gori and not the capital?” asked Secureworks researcher Don Jackson. “From what I’ve seen firsthand, there was at some level actual coordination and/or direction.” 

Khatuna Mshvidobadze, who after the Georgian war went on to get a doctorate in political science and cybersecurity policy and now works as a security researcher and consultant, argues that there can be little doubt today that the Kremlin had a direct hand in the cyberattacks. “How many signs do you need?” she asks, her voice tinged with anger. “This is how the Russian government behaves. They use proxies, oligarchs, criminals to make attribution harder, to give Russia deniability. This kind of game doesn’t work anymore. We know who you are and what you’re up to.” 

For John Hultquist, there was a detail of the attacks that stayed with him, a clue that he would file away in his memory, only to have it resurface six years later as he was tracking Sandworm. Many of the hackers bombarding Georgia were using a certain piece of malware to control and direct their digital armies, one that was still in an earlier incarnation but would develop over time into a far more sophisticated tool of cyberwar: BlackEnergy. 
■ 
Russia and Georgia agreed to a cease-fire on August 12, 2008. In the days that followed, Russia’s tanks continued to advance into Georgian territory—a final provocation before they ultimately turned around and withdrew. They never entered the capital. The shelling ceased, and the Russian fleet dismantled its Black Sea blockade. 

Russia’s gains from its brief war with Georgia, however, were tangible. It had consolidated pro-Russian separatist control of Abkhazia and South Ossetia, granting Russia a permanent foothold on roughly 20 percent of Georgia’s territory. Just as in Ukraine in 2014, Russia hadn’t sought to conquer or occupy its smaller neighbor, but instead to lock it into a “frozen conflict,” a permanent state of low-level war on its own soil. The dream of many Georgians, like Mshvidobadze, that their country would become part of NATO, and thus protected from Russian aggression, had been put on indefinite hold. 

And what role did Russia’s cyber attacks play in that war? Practically none, Mshvidobadze says. “No one was even thinking about cyber back then, no one knew anything about it,” she says. At the time, after all, Georgia was hardly Estonia. Only seven in a hundred Georgians even used the internet. And they had much more immediate concerns than inaccessible websites—like the mortar shells exploding around their cities and villages and the tanks lumbering toward their homes. 

But the cyberattacks contributed to a broader confusion, both internally and internationally. They disabled a key avenue for Georgians to reach the West and share their own narrative about their war with Russia. Mshvidobadze still fumes at the commonly held idea, for instance, that the Georgian shelling of Tskhinvali sparked the war and not Russia’s quietly amassing troops and matériel inside Georgian territory for weeks prior. 

But perhaps more important than the cyberattacks’ practical effects was the historical precedent they set. No country had ever before so openly combined hacker disruption tactics with traditional warfare. The Russians had sought to dominate their enemy in every domain of war: land, sea, air, and now the internet. Georgia was the first crude experiment in a new flavor of hybrid warfare that bridged the digital and the physical. 

Reflecting on both the Georgian and the Estonian conflicts today, Hultquist sees primitive prototypes for what was to come. The Russian hackers behind them were nowhere near Sandworm in their skill or resources. But they hinted at an era of unrestricted, indiscriminate digital attacks, with little regard for the line between military and civilian. 

“Hackers turning off the power? We weren’t there yet,” says Hultquist. “But whatever cyberwar would become, there’s no doubt, this is where it began.”

14 
FLASHBACK: STUXNET 
In January 2009, just days before Barack Obama would be inaugurated, he met with President George W. Bush to discuss a subject shrouded under the highest echelon of executive secrecy. On most matters of national security, even on topics as sensitive as the command sequence to initiate nuclear missile launches, Bush had let his subordinates brief the incoming president. But on this, he felt the need to speak with Obama himself. Bush wanted his successor’s commitment to continue an unprecedented project. It was an operation the Bush-era NSA had developed for years but that was only just beginning to come to fruition: the deployment of a piece of code that would come to be known as Stuxnet, the most sophisticated cyberweapon in history. 

Stuxnet’s conception, more than two years earlier, had been the result of a desperate dilemma. When Iran’s hard-liner president Mahmoud Ahmadinejad had taken power in 2005, he’d publicly flaunted his intention to develop the country’s nuclear capabilities. That included enriching uranium to a grade that could be used for nuclear power. But international watchdog groups noted that Iran had only a single nuclear power plant, and it was already supplied with enriched uranium from Russia. They suspected a far less innocent motive: Ahmadinejad wanted nuclear weapons—a desire that Israel would likely consider an existential threat and a potential match that could ignite the entire Middle East. 

Iran’s government had sought to obtain nukes since as early as the 1980s, when it was locked in a brutal war with Iraq and suspected that the Iraqi leader, Saddam Hussein, was seeking to build nuclear bombs of his own. But neither country had actually succeeded in its atomic ambitions, and in the decades that followed, Iran had made only stuttering progress toward joining the world’s nuclear superpowers. 

Within two months of Ahmadinejad’s election in the summer of 2005, however, he had thrown out an agreement Iran had made with the International Atomic Energy Agency, or IAEA, suspending the country’s nuclear evolution. The country had, for years prior to that agreement, been building two 270,000-square-foot, largely subterranean facilities, twenty-five feet beneath the desert surface in Natanz, a central Iranian city. The purpose of those vast bunkers had been to enrich uranium to a weapons-grade purity. Under Ahmadinejad, Natanz was pitched back into high gear. 

In 2005, U.S. intelligence agencies had estimated it would take six to ten years for Iran to develop a nuclear bomb. Israeli intelligence had put their estimate closer to five. But after Iran restarted its nuclear enrichment program at Natanz, Israeli intelligence shrank that estimate to as little as two years. Privately, the Israelis told U.S. officials a bomb might be ready in six months. A crisis was looming. 

As that deadline grew ever closer, Bush’s national security team had laid out two options, neither remotely appealing. Either the United States could allow Iran’s unpredictable and highly aggressive government to obtain a devastating weapon, or it could launch a missile strike on Natanz—an act of war. In fact, war seemed inevitable on either horn of the dilemma. If Iran ventured too close to the cusp of fulfilling its nuclear ambitions, Israel’s hard-line government was poised to launch its own strike against the country. “I need a third option,” Bush had repeatedly told his advisers. 

That option would be Stuxnet. It was a tantalizing notion: a piece of code designed to kneecap Iran’s nuclear program as effectively as an act of physical sabotage, carried out deep in the heart of Natanz, and without the risks or collateral damage of a full-blown military attack. Together with the NSA’s elite offensive hacking team, then known as Tailored Access Operations, or TAO, and the Israeli cybersecurity team known as Unit 8200, the Pentagon’s Strategic Command began developing a piece of malware unlike any before. It would be capable of not simply disrupting critical equipment in Natanz but destroying it. 

By 2007, a collection of Department of Energy national labs had obtained the same P1 centrifuges the Iranians were using, gleaming cylinders as thick as a telephone pole and nearly six and a half feet tall. For months, the labs would quietly test the physical properties of those machines, experimenting with how they might be destroyed purely via digital commands. (Some of those tests occurred at Idaho National Laboratory, during roughly the same period the lab’s researchers were working on the Aurora hacking demonstration that showed they could destroy a massive diesel generator with a few lines of code. Mike Assante, who masterminded the Aurora work, declined to answer any questions about Stuxnet.) 

Not long after the tests began, Bush’s intelligence advisers laid out for him on a table the metal detritus of a centrifuge destroyed by code alone. The president was impressed. He green-lighted a plan to deploy that brilliant, malicious piece of software, an operation code-named Olympic Games. It would prove to be a tool of cyberwar so sophisticated that it made the cyberattacks in Estonia and Georgia look like medieval catapults by comparison. 

Olympic Games was still in its early stages when the Bush presidency came to a close in early 2009. Stuxnet had only just begun to demonstrate its potential to infiltrate and degrade Iran’s enrichment processes. So Bush held an urgent transition meeting with Obama, where the outgoing president explained firsthand to his successor the geopolitical importance and delicacy of their cyber warfare mission, the likes of which had never before been attempted. 

Obama was listening. He wouldn’t simply choose to continue the Stuxnet operation. He would vastly expand it. 
■ 
Fortunately for the continued existence of the human race, enriching uranium to the purity necessary to power the world’s most destructive weapon is an absurdly intricate process. Uranium ore, when it’s dug out of the earth, is mostly made up of an isotope called uranium-238. It contains less than 1 percent uranium-235, the slightly lighter form of the silvery metal that can be used for nuclear fission, unleashing the energy necessary to power or destroy entire cities. Nuclear power requires uranium that’s about 3 to 5 percent uranium-235, but nuclear weapons require a core of uranium that’s as much as 95 percent composed of that rarer isotope. 

This is where centrifuges come in. To enrich uranium into bomb-worthy material, it has to be turned into a gas and pumped into a centrifuge’s long, aluminum cylinder. A chamber inside the length of that cylinder is spun by a motor at one end, revolving at tens of thousands of rotations per minute, such that the outer edge of the chamber is moving beyond the speed of sound. The centrifugal force pushing from the center toward the walls of that spinning chamber reaches as much as a million times the force of gravity, separating out the heavier uranium-238 so that the uranium-235 can be siphoned off. To reach weapons-grade concentrations, the process has to be repeated again and again through a “cascade” of centrifuges. That’s why a nuclear enrichment facility such as the one hidden deep beneath Natanz requires a vast forest of thousands of those tall, fragile, and highly engineered whirling machines. 
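The arithmetic behind that “vast forest” can be sketched with a toy model. Each pass through a centrifuge multiplies the abundance ratio of uranium-235 to uranium-238 by a small separation factor; the per-stage factor used below (1.3) is an assumed, illustrative value, and real cascade design is vastly more complex, but the model shows why going from natural uranium to weapons-grade purity takes so many repeated stages.

```python
import math

def stages_needed(start_frac, target_frac, alpha=1.3):
    """Ideal number of enrichment stages to raise the U-235 fraction
    from start_frac to target_frac, where each stage multiplies the
    abundance ratio R = x / (1 - x) by an assumed factor alpha."""
    r0 = start_frac / (1 - start_frac)
    rt = target_frac / (1 - target_frac)
    return math.ceil(math.log(rt / r0) / math.log(alpha))

natural = 0.0072   # natural uranium: less than 1 percent U-235
reactor = 0.04     # nuclear power: roughly 3 to 5 percent
weapons = 0.90     # a bomb core: up to ~95 percent

print(stages_needed(natural, reactor))   # a handful of stages
print(stages_needed(natural, weapons))   # several times more
```

Under these toy assumptions, reactor fuel takes only a few passes while weapons-grade material takes roughly four times as many; the real driver of Natanz’s scale is throughput, which is why stages are arrayed into parallel cascades of many machines each.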

Stuxnet was designed to be the perfect, invisible wrench thrown into those works. 

Sometime in 2008, Natanz’s engineers began to face a mysterious problem: At seemingly random times, one of their centrifuges would begin to spin out of control, its internal chamber moving faster than even its carefully crafted bearings were designed to handle. In other cases, pressure inside the chamber would increase until it was pushed out of its orbit. The spinning cylinder would then crash into its housing at supersonic speed, tearing the machine apart from the inside—just as Idaho National Laboratory’s diesel generator had eviscerated itself in the Aurora test a year earlier. 

Natanz’s operators could see no sign or warning in their digital monitoring of the centrifuges to explain the machines’ sudden suicides. Yet they kept happening. Eventually, the plant’s administrators would assign staff to sit and physically watch the centrifuges for any indication that might explain the mystery. They resorted to decommissioning entire cascades of 164 centrifuges in an attempt to isolate the problem. Nothing worked. 

“The intent was that the failures should make them feel they were stupid, which is what happened,” one of the participants in the secret Olympic Games operation would later tell the New York Times reporter David Sanger. U.S. and Israeli intelligence saw signs of internal disputes among Iran’s scientists as they sought to place the blame for the repeated disasters. Some were fired. 

As time wore on—and as the Obama administration began to shepherd the operation—Natanz’s centrifuge problems only grew more acute. In late 2009 and early 2010, officials at the International Atomic Energy Agency who were tensely monitoring Iran’s nuclear progress saw evidence that the Iranians were carting decommissioned centrifuges out of their enrichment facility at a pace well beyond the usual failure rate. Out of the 8,700 centrifuges in Natanz at the time, as many as 2,000 were damaged, according to one IAEA official. 

Olympic Games, in other words, was working. American and Israeli hackers had planted their digital sabotage code into the exact heart of the mechanical process that had brought the Middle East to the brink of war, and they were disrupting it with uncanny precision. Stuxnet had allowed them to pull off that coup without even tipping off their targets that they were under attack. Everything was going according to plan—until the summer of 2010, when the hackers behind Stuxnet would lose control of their creation, exposing it to the world. 
■ 
The discovery of Stuxnet began the same way as the discovery of Sandworm would years later: a zero day. 

In June 2010, VirusBlokAda, an obscure antivirus firm based in Minsk, Belarus, found that a computer of one of its customers in Iran had been stuck in a loop of repeated crashes and restarts. The company’s researchers investigated the source of those crashes and found something far more sophisticated than they had imagined. An ultra-stealthy form of malware known as a “rootkit” had buried itself deep within the computer’s operating system. And as they analyzed that rootkit, they found something far more shocking: It had infected the machine via a powerful zero day that took advantage of the way Windows displays the contents of a USB drive. As soon as an infected USB stick was inserted into the computer’s port, the malware had sprung out to install itself on the machine with no indication to the user whatsoever. 

After VirusBlokAda published an announcement about the malware on a security forum, researchers at the security giant Symantec picked up the thread. They would pull on it for months to come, a detective story detailed in Kim Zetter’s definitive book on Stuxnet, Countdown to Zero Day. The malware’s size and complexity alone were remarkable: It consisted of five hundred kilobytes of code, twenty to fifty times as large as the typical malware they dealt with on a daily basis. And as the researchers reverse engineered that code’s contents, they found it contained three more zero days, allowing it to effortlessly spread among Windows machines—an entire built-in, automated arsenal of masterful hacker tricks. 

No one in the security community could remember seeing a piece of malware that used four zero days in a single attack. Stuxnet, as Microsoft eventually dubbed the malware based on file names in its code, was easily the most sophisticated cyberattack ever seen in the wild. 

By the end of that summer, Symantec’s researchers had assembled more pieces of the puzzle: They’d found that the malware had spread to thirty-eight thousand computers around the world but that twenty-two thousand of those infections were in Iran. And they’d determined that the malware interacted with Siemens’s STEP 7 software. That application was one form of the software that allows industrial control system operators to monitor and send commands to equipment. Somehow, the analysts determined, Stuxnet’s goal seemed to be linked to physical machines—and probably in Iran. It was only in September 2010 that the German researcher Ralph Langner dove into the minutiae of that Siemens-targeted code and came to the conclusion that Stuxnet’s goal was to destroy a very specific piece of equipment: nuclear enrichment centrifuges. 

With that final discovery, the researchers could put together all of the links in Stuxnet’s intricate kill chain. First, the malware had been designed to jump across air gaps: Iran’s engineers had been careful enough to cut off Natanz’s network entirely from the internet. So, like a highly evolved parasite, the malware instead piggybacked on human connections, infecting and traveling on USB sticks. There it would lie dormant and unnoticed until one of the drives happened to be plugged into the enrichment facility’s isolated systems. (Siemens software engineers might have been the carriers for that malware, or the USB malware might have been more purposefully planted by a human spy working in Natanz.) 

Once it had penetrated that air-gapped network, Stuxnet would unfold like a ship in a bottle, requiring no interaction with its creators. It would silently spread via its panoply of zero-day techniques, hunting for a computer running Siemens STEP 7 software. When it found one, it would lie in wait, then unleash its payload. Stuxnet would inject its commands into so-called programmable logic controllers, or PLCs— the small computers that attach to equipment and serve as the interfaces between physical machines and digital signals. Once infected, the centrifuge that a PLC controlled would violently tear itself apart. In a final touch of brilliance, the malware would, before its attack, pre-record feedback from the equipment. It would then play that recording to the plant’s operators while it committed its violence so that to an operator observing the Siemens display, nothing would appear amiss until it was far too late. 
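That final record-and-replay trick can be sketched in miniature. Everything below is hypothetical — the function names and the fake “sensor” are stand-ins, not Stuxnet’s actual code — but it shows the shape of the deception: capture a window of normal readings, then loop them back to the operator’s display while the real values diverge.

```python
import itertools

# Hypothetical sketch of a record-and-replay sensor spoof.
# "read_rpm" stands in for a real equipment reading; none of
# these names come from Stuxnet itself.

def record(read_rpm, n_samples):
    """Phase 1: passively capture a window of normal readings."""
    return [read_rpm() for _ in range(n_samples)]

def replay(recording):
    """Phase 2: feed the operator's display the old readings,
    looped indefinitely, regardless of the machine's real state."""
    return itertools.cycle(recording)

# Demo with a fake sensor: normal speed, then a runaway spin-up.
raw = iter([1000, 1001, 999, 1000] + [90000] * 10)
normal = record(lambda: next(raw), 4)
display = replay(normal)
shown = [next(display) for _ in range(8)]
print(shown)  # the display shows only the recorded, normal values
```

The point of the sketch is the asymmetry: once the replay begins, the monitoring channel carries no information at all about the equipment, which is why Natanz’s operators saw nothing amiss on their Siemens displays until the centrifuges had already destroyed themselves.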

Stuxnet’s only flaw was that it was too effective. Among computer security researchers, it’s practically a maxim that worms spread beyond their creators’ control. This one was no exception. Stuxnet had propagated far beyond its Natanz target to infect computers in more than a hundred countries across the world. Other than in the centrifuge caverns of Natanz, those collateral infections hadn’t caused physical destruction. But they had blown the ultra-secret malware’s cover, along with an operation that had been millions of dollars and years in the making. 

Once Stuxnet’s purpose became clear, the United States and Israel quickly became the prime suspects for its creation. (It would be two more years, however, before a front-page story in The New York Times confirmed the two countries’ involvement.) 

When Stuxnet’s existence went public, the Obama administration held a series of tense meetings to decide how to proceed. Should they pull the plug on the program before it was definitively tied back to the United States? It was only a matter of time, they figured, before Iran’s engineers would learn the true source of their problems and patch their software vulnerabilities, shutting Stuxnet out for good. 

Instead, the Americans and Israelis behind the worm decided they had nothing to lose. So in a go-for-broke initiative, they released another, final series of Stuxnet versions that were designed to be even more aggressive than the original. Before Iran’s engineers had repaired their vulnerabilities, the malware destroyed nearly a thousand more of their centrifuges, offering one last master class in cyber sabotage. 
■ 
Stuxnet would change the way the world saw state-sponsored hacking forever. Inside Natanz’s haywire centrifuges, the leading edge of cyber warfare had taken a giant leap forward, from Russia’s now primitive-looking web disruptions of 2007 and 2008 to virtuosic, automated physical destruction. 

Today, history is still weighing whether Bush’s and Obama’s executive decisions to carry out that cyberattack were worth their cost. According to some U.S. intelligence analysts, Stuxnet set back the Iranian nuclear program by a year or even two, giving the Obama administration crucial time to bring Iran to the bargaining table, culminating in a nuclear deal in 2015. 

But in fact, those long-term wins against Natanz’s operation weren’t so definitive. Despite its confusion and mangled centrifuges, the facility actually increased its rate of uranium enrichment over the course of 2010, at times progressing toward bomb-worthy material at a rate 50 percent faster than it had in 2008. Stuxnet might have, if anything, only slowed the acceleration of Ahmadinejad’s program. 

And what was Stuxnet’s price? Most notably, it exposed to the world for the first time the full prowess and aggression of America’s—and to a lesser extent Israel’s—most elite state hackers. It also revealed to the American people something new about their government and its cybersecurity priorities. After all, the hackers who had dug up the four zero-day vulnerabilities used in Stuxnet hadn’t reported them to Microsoft so that they could be patched for other users. Instead, they had exploited them in secret and left Windows machines around the world vulnerable to the same techniques that had allowed them to infiltrate Natanz. When the NSA chose to let its Tailored Access Operations hackers abuse those software flaws, it prioritized military offense over civilian defense. 

Who can say how many equally powerful zero days the U.S. government has squirreled away in its secret collection? Despite assurances from both the Obama and the Trump administrations that the U.S. government helps to patch more vulnerabilities than it hoards in secret, the specter of its hidden digital weapons cache has nonetheless haunted defenders in the cybersecurity community for years. (Just a few years later, in fact, that collection of zero days would backfire in an absurd, self-destructive fiasco.) 

But in a broader and more abstract sense, Stuxnet also allowed the world to better imagine malware’s potential to wreak havoc. In darkened rooms all over the globe, state-sponsored hackers took notice of America’s creation, looked back at their own lackluster work, and determined that they would someday meet the new bar Stuxnet had set. 

At the same time, political leaders and diplomats around the world recognized in Stuxnet the creation of a new norm, not only in its technical advancements, but in geopolitics. America had dared to use a form of weaponry no country had before. If that weapon were later turned on the United States or its allies, how could it object on principle? 

Had physical destruction via code become an acceptable rule of the global game? Even the former NSA and CIA director Michael Hayden seemed shaken by the new precedent. “Somebody crossed the Rubicon,” Hayden said in an interview with The New York Times. The attack that the West’s prophets of cyberwar had always feared, one capable of shutting down or destroying physical equipment from anywhere in the world, had come to pass. And Americans had been the first to do it. “No matter what you think of the effects—and I think destroying a cascade of Iranian centrifuges is an unalloyed good—you can’t help but describe it as an attack on critical infrastructure,” Hayden concluded. 

Stuxnet was no “cyber 9/11” or “electronic Pearl Harbor.” It was a highly targeted operation whose damage was precisely limited to its pinpoint victim even when the worm spread out of its creators’ control. But the fact remained: In an attempt to prevent Iran from joining the nuclear arms race America had itself started with the bombings of Hiroshima and Nagasaki sixty-five years earlier, it had sparked another form of arms race—one with severe, unforeseeable consequences. 

“This has a whiff of August 1945,” Hayden would say later in a speech. “Somebody just used a new weapon, and this weapon will not be put back in the box.”


