Sunday, February 5, 2023

Part 1 of Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers. Zero Day ... BlackEnergy ... Arrakis02 ... Force Multiplier ...

Sandworm
A New Era of Cyberwar
and the Hunt for the
Kremlin's Most Dangerous Hackers
by Andy Greenberg

 INTRODUCTION 
On June 27, 2017, something strange and terrible began to ripple out across the infrastructure of the world. 

A group of hospitals in Pennsylvania began delaying surgeries and turning away patients. A Cadbury factory in Tasmania stopped churning out chocolates. The pharmaceutical giant Merck ceased manufacturing vaccines for human papillomavirus. 

Soon, seventeen terminals at ports across the globe, all owned by the world’s largest shipping firm, Maersk, found themselves paralyzed. Tens of thousands of eighteen-wheeler trucks carrying shipping containers began to line up outside those ports’ gates. Massive ships arrived from journeys across oceans, each carrying hundreds of thousands of tons of cargo, only to find that no one could unload them. Like victims of a global outbreak of some brain-eating bacteria, major components in the intertwined, automated systems of the world seemed to have spontaneously forgotten how to function. 

At the attack’s epicenter, in Ukraine, the effects of the technological doomsday were more concentrated. ATMs and credit card payment systems inexplicably dropped off-line. Mass transit in the country’s capital of Kiev was crippled. Government agencies, airports, hospitals, the postal service, even scientists monitoring radioactivity levels at the ruins of the Chernobyl nuclear power plant, all watched helplessly as practically every computer in their networks was infected and wiped by a mysterious piece of malicious code. 

This is what cyberwar looks like: an invisible force capable of striking out from an unknown origin to sabotage, on a massive scale, the technologies that underpin civilization. 

For decades, the Cassandras of internet security warned us this was coming. They cautioned that hackers would soon make the leap beyond mere crime or even state-sponsored espionage and begin to exploit vulnerabilities in the digitized, critical infrastructure of the modern world. In 2007, when Russian hackers bombarded Estonia with cyberattacks that tore practically every website in the country offline, that blitz hinted at the potential scale of geopolitically motivated hacking. Two years later, when the NSA’s malicious software called Stuxnet silently accelerated Iran’s nuclear enrichment centrifuges until they destroyed themselves, the operation demonstrated another preview of what was in store: It showed that tools of cyberwar could reach out beyond the merely digital, into even the most closely guarded and sensitive components of the physical world. 

But for anyone watching Russia’s war in Ukraine since it began in early 2014, there were clearer, more direct harbingers. Starting in 2015, waves of vicious cyberattacks had begun to strike Ukraine’s government, media, and transportation. They culminated in the first known blackouts ever caused by hackers, attacks that turned off power for hundreds of thousands of civilians. 

A small group of researchers would begin to sound the alarm— largely in vain—that Russia was turning Ukraine into a test lab for cyberwar innovations. They cautioned that those advancements might soon be deployed against the United States, NATO, and a larger world that remained blithely unprepared for this new dimension of war. And they pointed to a single force of Kremlin-backed hackers that seemed to be launching these unprecedented weapons of mass disruption: a group known as Sandworm. 

Over the next two years, Sandworm would ramp up its aggression, distinguishing itself as the most dangerous collection of hackers in the world and redefining cyberwar. Finally, on that fateful day in late June 2017, the group would unleash the world-shaking worm known as NotPetya, now considered the most devastating and costly malware in history. In the process, Sandworm would demonstrate as never before that highly sophisticated, state-sponsored hackers with the motivations of a military sabotage unit can attack across any distance to undermine the foundations of human life, hitting interlocked, interdependent systems with unpredictable, disastrous consequences. 

Today, the full scale of the threat Sandworm and its ilk present looms over the future. If cyberwar escalation continues unchecked, the victims of state-sponsored hacking could be on a trajectory for even more virulent and destructive worms. The digital attacks first demonstrated in Ukraine hint at a dystopia on the horizon, one where hackers induce blackouts that last days, weeks, or even longer—intentionally inflicted deprivations of electricity that could mirror the American tragedy of Puerto Rico after Hurricane Maria, causing vast economic harm and even loss of life. Or one where hackers destroy physical equipment at industrial sites to cause lethal mayhem. Or, as in the case of NotPetya, where they simply wipe hundreds of thousands of computers at a strategic moment to render brain-dead the digital systems of an enemy's economy or critical infrastructure. 

This book tells the story of Sandworm, the clearest example yet of the rogue actors advancing that cyberwar dystopia. It follows the years-long work of the detectives tracking those hackers—as Sandworm’s fingerprints appeared on one digital disaster scene after another—to identify and locate them, and to call attention to the danger the group represented in the desperate hope that it could be stopped. 

But Sandworm is not just the story of a single hacker group, or even of the wider threat of Russia’s reckless willingness to wage this new form of cyberwar around the world. It’s the story of a larger, global arms race that continues today. That race is one that the United States and the West have not only failed to stop but directly accelerated with our own headlong embrace of digital attack tools. And in doing so, we’ve invited a new, unchecked force of chaos into the world. 

PROLOGUE 
The clocks read zero when the lights went out. 

It was a Saturday night in December 2016, and Oleksii Yasinsky was sitting on the couch with his wife and teenage son in the living room of their Kiev apartment. The forty-year-old Ukrainian cybersecurity researcher and his family were an hour into Oliver Stone’s film Snowden when their building abruptly lost power. 

“The hackers don’t want us to finish the movie,” Yasinsky’s wife joked. She was referring to an event that had occurred a year earlier, a cyberattack that had cut electricity to nearly a quarter-million Ukrainians two days before Christmas in 2015. 

Yasinsky, a chief forensic analyst at a Kiev cybersecurity firm, didn’t laugh. He looked over at a portable clock on his desk: The time was 00:00. Precisely midnight. 

Yasinsky’s television was plugged into a surge protector with a battery backup, so only the flicker of images on-screen lit the room now. The power strip started beeping plaintively. Yasinsky got up and switched it off to save its charge, leaving the room suddenly silent. 

He went to the kitchen, pulled out a handful of candles, and lit them. Then he stepped to the kitchen window. The thin, sandy-blond engineer looked out on a view of the city as he’d never seen it before: The entire skyline around his apartment building was dark. Only the gray glow of distant lights reflected off the clouded sky, outlining blackened hulks of modern condos and Soviet high-rises. 

Noting the precise time and the date, almost exactly a year since the December 2015 grid attack, Yasinsky felt sure that this was no normal blackout. He thought of the cold outside—close to zero degrees Fahrenheit—the slowly sinking temperatures in thousands of homes, and the countdown until dead water pumps led to frozen pipes. 

That’s when another paranoid thought began to work its way through Yasinsky’s mind: For the past fourteen months, he had found himself at the center of an enveloping crisis. A growing list of Ukrainian companies and government agencies had come to him to analyze a plague of cyberattacks that were hitting them in rapid, remorseless succession. A single group of hackers seemed to be behind all of it. Now he couldn’t suppress the sense that those same phantoms, whose fingerprints he had traced for more than a year, had reached back, out through the internet’s ether, into his home. 

PART I 
EMERGENCE 
Use the first moments in study. You may miss many an opportunity for quick victory this way, but the moments of study are insurance of success. Take your time and be sure. 
Frank Herbert, Dune

THE ZERO DAY 
Beyond the Beltway, where the D.C. intelligence-industrial complex flattens out to an endless sea of parking lots and gray office buildings marked with logos and corporate names designed to be forgotten, there’s a building in Chantilly, Virginia, whose fourth floor houses a windowless internal room. The room’s walls are painted matte black, as if to carve out a negative space where no outside light penetrates.

In 2014, just over a year before the outbreak of Ukraine’s cyberwar, this was what the small, private intelligence firm iSight Partners called the black room. Inside worked the company’s two-man team tasked with software vulnerability research, a job that required focus intense enough that its practitioners had insisted on the closest possible office layout to a sensory-deprivation chamber. 

It was this pair of highly skilled cave dwellers that John Hultquist first turned to one Wednesday morning that September with a rare request. When Hultquist had arrived at his desk earlier that day in a far-better-lit office, one with actual windows on the opposite side of the iSight building, he’d opened an email from one of his iSight colleagues in the company’s Ukraine satellite operation. Inside, he found a gift: The Kiev-based staff believed they might have gotten their hands on a zero-day vulnerability. 

A zero day, in hacker jargon, is a secret security flaw in software, one that the company that created and maintains the software's code doesn't know about. The name comes from the fact that the company has had "zero days" to respond and push out a patch to protect users. 

A powerful zero day, particularly one that allows a hacker to break out of the confines of the software application where the bug is found and begin to execute their own code on a target computer, can serve as a kind of global skeleton key—a free pass to gain entrance to any machine that runs that vulnerable software, anywhere in the world where the victim is connected to the internet. 

The file Hultquist had been passed from iSight’s Ukraine office was a PowerPoint attachment. It seemed to silently pull off exactly that sort of code execution, and in Microsoft Office, one of the world’s most ubiquitous pieces of software. 

As he read the email, klaxons sounded in Hultquist's mind. If the discovery was what the Ukrainians believed it might be, it meant some unknown hackers possessed—and had used—a dangerous capability that would allow them to hijack any of millions of computers. Microsoft needed to be warned of its flaw immediately. But in a more self-interested sense, discovering a zero day represented a milestone for a small firm like iSight hoping to win glory and woo customers in the budding security subindustry of "threat intelligence." The company turned up only two or three of those secret flaws a year. Each one was a kind of abstract, highly dangerous curiosity and a significant research coup. "For a small company, finding a nugget like this was very, very gratifying," Hultquist says. "It was a huge deal for us." 

Hultquist, a loud and bearish army veteran from eastern Tennessee with a thick black beard and a perpetual smile, made a point of periodically shouting from his desk into a room next door known as the bull pen. One side of that space was lined with malware experts, and the other with threat analysts focused on understanding the geopolitical motives behind digital attacks. As soon as Hultquist read the email from iSight’s Ukrainian staff, he burst out of his office and into the bull pen, briefing the room and assigning tasks to triage what would become, unbeknownst then to any of them, one of the biggest finds in the small company’s history. 

But it was down the hall, in the black room, that the hacker monks within would start to grapple with the significance of iSight’s discovery: a small, hidden marvel of malicious engineering. 

Working on computers whose glowing monitors were the room’s only light source, the reverse engineers began by running the Ukrainians’ malware-infected PowerPoint attachment again and again inside a series of virtual machines—ephemeral simulations of a computer housed within a real, physical one, each one of them as sealed off from the rest of the computer as the black room was from the rest of the iSight offices. 

In those sealed containers, the code could be studied like a scorpion under an aquarium’s glass. They’d allow it to infect its virtual victims repeatedly, as the reverse engineers spun up simulations of different digital machines, running varied versions of Windows and Microsoft Office, to study the dimensions and flexibility of the attack. When they’d determined that the code could extract itself from the PowerPoint file and gain full control of even the latest, fully patched versions of the software, they had their confirmation: It was indeed a zero day, as rare and powerful as the Ukrainians and Hultquist had suspected. By late in the evening—a passage of time that went almost entirely unmarked within their work space—they’d produced a detailed report to share with Microsoft and their customers and coded their own version of it, a proof-of-concept rewrite that demonstrated its attack, like a pathogen in a test tube. 
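
As a minimal sketch of how that kind of repeated, disposable detonation can be automated, the snippet below drives VirtualBox guests from the command line. The VM names, the "clean" snapshot label, and the two-minute wait are placeholder assumptions, and the steps that copy the sample into the guest and record its behavior are omitted.

```python
import subprocess
import time

# Hypothetical guest VMs with different Windows/Office combinations; each is
# assumed to have a snapshot named "clean" taken before any infection.
TEST_VMS = ["win7-office2010", "win81-office2013"]

def detonate_in(vm: str) -> None:
    # Roll the guest back to its known-clean state before every run.
    subprocess.run(["VBoxManage", "snapshot", vm, "restore", "clean"], check=True)
    subprocess.run(["VBoxManage", "startvm", vm, "--type", "headless"], check=True)
    # Placeholder dwell time while in-guest monitoring tools (not shown)
    # record what the opened attachment does.
    time.sleep(120)
    subprocess.run(["VBoxManage", "controlvm", vm, "poweroff"], check=True)

for vm in TEST_VMS:
    detonate_in(vm)
```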

PowerPoint possesses “amazing powers,” as one of the black room’s two reverse engineers, Jon Erickson, explained to me. Over years of evolution, it’s become a Rube Goldberg machine packed with largely unnecessary features, so intricate that it practically serves as its own programming language. And whoever had exploited this zero day had deeply studied one feature that allowed anyone to place an information “object” inside a presentation, like a chart or video pulled from elsewhere in the PowerPoint file’s own bundle of data, or even from a remote computer over the internet. 

In this case, the hackers had used the feature to carefully plant two chunks of data within the presentation. The first it loaded into a temporary folder on the target computer. The second took advantage of PowerPoint's animation feature: PowerPoint's animations don't merely allow speakers to bore audiences with moving text and cartoons but actually execute commands on the computer on which the presentation is running. In this case, when the presentation loaded that animation file, it would run an automated script that right-clicked on the first file the presentation had planted on the machine and clicked "install" on the resulting drop-down menu, giving that code a foothold on the computer without tipping off its user. The result was something like a harmless-looking package left on your doorstep that, after you bring it inside, sprouts an arm, cuts itself open, and releases tiny robots into your foyer. All of this would happen immediately and invisibly, the instant the victim double-clicked the attachment to open it. 

Erickson, the reverse engineer who first handled the zero day in iSight’s black room, remembers his work disassembling and defusing the attack as a somewhat rare, fascinating, but utterly impersonal event. In his career, he’d dealt with only a handful of real zero days found in the wild. But he’d analyzed thousands upon thousands of other malware samples and had learned to think of them as specimens for study without considering the author behind them—the human who had rigged together their devious machinery. “It was just some unknown guy and some unknown thing I hadn’t seen before,” he said. 

But zero days do have authors. And when Erickson had first begun to pull apart this one in his blacked-out workshop that morning, he hadn’t simply been studying some naturally occurring, inanimate puzzle. He was admiring the first hints of a remote, malevolent intelligence. 

BLACK ENERGY 
Once iSight’s initial frenzy surrounding its zero-day discovery had subsided, the questions remained: Who had written the attack code? Whom were they targeting with it, and why? 

Those questions fell to Drew Robinson, a malware analyst at iSight whom John Hultquist described as a "daywalker": Robinson possessed most of the same reverse-engineering skills as the black room's vampire crew but sat in the sunlit bull pen next to Hultquist's office, responsible for a far wider-angle analysis of hacking campaigns, from the personnel who carried them out to their political motives. It would be Robinson's job to follow the technical clues within that PowerPoint to solve the larger mysteries of the hidden operation it represented. 

Minutes after Hultquist had walked into the bull pen to announce the all-hands-on-deck discovery of the PowerPoint zero day that Wednesday morning, Robinson was poring over the contents of the booby-trapped attachment. The actual presentation itself seemed to be a list of names written in Cyrillic characters over a blue-and-yellow Ukrainian flag, with a watermark of the Ukrainian coat of arms, a pale blue trident over a yellow shield. Those names, Robinson found after using Google Translate, were a list of supposed “terrorists”—those who sided with Russia in the Ukrainian conflict that had begun earlier that year when Russian troops invaded the east of the country and its Crimean peninsula, igniting separatist movements there and sparking an ongoing war. 

That the hackers had chosen an anti-Russian message to carry their zero-day infection was Robinson’s first clue that the email was likely a Russian operation with Ukrainian targets, playing on the country’s patriotism and fears of internal Kremlin sympathizers. But as he searched for clues about the hackers behind that ploy, he quickly found another loose thread to pull. When the PowerPoint zero day executed, the file it dropped on a victim’s system turned out to be a variant of a piece of notorious malware, soon to become far more notorious still. It was called BlackEnergy. 

BlackEnergy’s short history up to that point already contained, in some sense, its own primer on the taxonomy of common hacking operations, from the lowliest “script kiddies”—hackers so unskilled that they could generally only use tools written by someone more knowledgeable—to professional cybercriminals. The tool had originally been created by a Russian hacker named Dmytro Oleksiuk, also known by his handle, Cr4sh. Around 2007, Oleksiuk had sold BlackEnergy on Russian-language hacker forums, priced at around $40, with his handle emblazoned like a graffiti tag in a corner of its control panel. 

The tool was designed for one express purpose: so-called distributed denial-of-service, or DDoS, attacks designed to flood websites with fraudulent requests for information from hundreds or thousands of computers simultaneously, knocking them off-line. Infect a victim machine with BlackEnergy, and it became a member of a so-called botnet, a collection of hijacked computers, or bots. A botnet operator could configure Oleksiuk's user-friendly software to control which web target its enslaved machines would pummel with spoofed requests as well as the type and rate of that digital bombardment. 

By late 2007, the security firm Arbor Networks counted more than thirty botnets built with BlackEnergy, mostly aiming their attacks at Russian websites. But on the spectrum of cyberattack sophistication, distributed denial-of-service attacks were largely crude and blunt. After all, they could cause costly downtime but not the serious data breaches inflicted by more penetrating hacking techniques. 

In the years that followed, however, BlackEnergy had evolved. Security firms began to detect a new version of the software, now equipped with an arsenal of interchangeable features. This revamped version of the tool could still hit websites with junk traffic, but it could also be programmed to send spam email, destroy files on the computers it had infested, and steal banking usernames and passwords.*1 
*1 As that more sophisticated cyber criminal use of BlackEnergy spread, its original creator, Oleksiuk, had been careful to distance himself from it—particularly after BlackEnergy was connected to financial fraud against Russian banks, a dangerous move in a country otherwise known to look the other way when cybercriminals focused on Western victims. “The fact that its source code was available to many people in all sorts of (semi) private parties, can mean that someone took it for their own needs,” Oleksiuk tried to explain in a post—titled “Fuck me I’m famous”—on the blogging site LiveJournal in 2009. “To suspect that the author of this bot software, whose autograph was written on publicly accessible versions of it 3 years ago, is involved in criminal machinations, you’d have to be a complete idiot.”

Now, before Robinson’s eyes, BlackEnergy had resurfaced in yet another form. The version he was looking at from his seat in iSight’s bull pen seemed different from any he’d read about before—certainly not a simple website attack tool, and likely not a tool of financial fraud, either. After all, why would a fraud-focused cybercrime scheme be using a list of pro-Russian terrorists as its bait? The ruse seemed politically targeted. From his first look at the Ukrainian BlackEnergy sample, he began to suspect he was looking at a variant of the code with a new goal: not mere crime, but espionage.*2 
*2 In fact, security analysts at the Russian security firm Kaspersky had quietly suspected someone had been using BlackEnergy for sophisticated spying since early 2013. Versions of the tool had begun appearing that were no longer offered for sale on hacker forums, and some were designed to infect machines that run Linux—an operating system rare enough that the hackers must have been using it for precision spy operations, not indiscriminate theft. “The crimeware use was gone,” the Kaspersky analyst Maria Garnaeva told me. “That was when the hackers using this became a unique targeted attack group.” 

Soon after, Robinson made a lucky find that revealed something further about the malware's purpose. When he ran this new BlackEnergy sample on a virtual machine, it tried to connect out over the internet to an IP address somewhere in Europe. That connection, he could immediately see, was the so-called command-and-control server that functioned as the program's remote puppet master. And when Robinson reached out himself via his web browser to that faraway machine, he was pleasantly shocked. The command-and-control computer had been left entirely unsecured, allowing anyone to browse its files at will. 
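
The snippet below is a rough illustration of that kind of check: probe a web server and look for an open directory listing. The address is a documentation-range placeholder rather than the real command-and-control IP, which is long gone.

```python
import requests

# Placeholder address in the TEST-NET documentation range; the real Sandworm
# command-and-control server has long since been taken off-line.
C2_URL = "http://203.0.113.10/"

resp = requests.get(C2_URL, timeout=10)
# A bare web-server index page is the telltale sign of an unsecured host
# whose files can be browsed at will.
if resp.ok and "Index of /" in resp.text:
    print("Open directory listing: server contents are browsable")
else:
    print(f"HTTP {resp.status_code}: no obvious open listing")
```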

The files included, amazingly, a kind of help document for this unique version of BlackEnergy that conveniently listed its commands. It confirmed Robinson's suspicion: The zero-day-delivered version of BlackEnergy had a far broader array of data-collection abilities than the usual sample of the malware found in cybercrime investigations. The program could take screenshots, extract files and encryption keys from victim machines, and record keystrokes, all hallmarks of targeted, thorough cyberspying rather than some profit-focused bank-fraud racket. 

But even more important than the contents of that how-to file was the language it was written in: Russian. 

ARRAKIS02 
The cybersecurity industry constantly warns of the “attribution problem”—that the faraway hackers behind any operation, especially a sophisticated one, are very often impossible to pinpoint. The internet offers too many opportunities for proxies, misdirection, and sheer overwhelming geographic uncertainty. But by identifying the unsecured command-and-control server, Robinson had broken through iSight’s BlackEnergy mystery with a rare identifying detail. Despite all the care they’d displayed in their PowerPoint hacking, the hackers seemed to have let slip a strong clue of their nationality. 

After that windfall, however, Robinson still faced the task of actually delving into the innards of the malware’s code in an effort to find more clues and create a “signature” that security firms and iSight’s customers could use to detect if other networks had been infected with the same program. Deciphering the functionality of the malware’s code wasn’t going to be nearly as easy as tracing its command-and-control server. As Robinson would painstakingly learn over the next days of solid, brain-numbing work, it had been thoroughly scrambled with three alternating layers of compression and encryption. 

In other words, getting to the malware's secrets was something like a scavenger hunt. Although Robinson knew that the malware was self-contained and therefore had to include all the encryption keys necessary to unscramble itself and run its code, the key to each layer of that scrambling could only be found after decoding the layer on top of it. And even after guessing the compression algorithm the hackers had used by scanning the random-looking noise for recognizable patterns, Robinson spent days longer working to identify the encryption scheme they'd used, a unique modification of an existing system. As he fell deeper and deeper into that puzzle, he'd look up from his desk and find that hours had seemingly jumped forward. Even at home, he'd find himself standing fixated in the shower, turning the cipher over and over in his mind. 
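
The nested structure is easier to see in a toy version. The sketch below is emphatically not BlackEnergy's real scheme, which used a custom modification of an existing cipher that Robinson had to identify by hand; it simply stacks three zlib-plus-XOR layers with made-up keys to show the shape of the peeling process.

```python
import zlib

# Made-up one-byte keys for the three layers, outermost first. In the real
# sample, each inner key could only be recovered after the layer above it
# had been decoded.
KEYS = [0x3C, 0x91, 0x5E]

def pack(payload: bytes) -> bytes:
    blob = payload
    for key in reversed(KEYS):          # apply the innermost layer first
        blob = zlib.compress(bytes(b ^ key for b in blob))
    return blob

def unpack(blob: bytes) -> bytes:
    for key in KEYS:                    # peel from the outside in
        blob = bytes(b ^ key for b in zlib.decompress(blob))
    return blob

assert unpack(pack(b"campaign=arrakis02")) == b"campaign=arrakis02"
```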

When Robinson finally cracked those layers of obfuscation after a week of trial and error, he was rewarded with a view of the BlackEnergy sample's millions of ones and zeros—a collection of data that was, at a glance, still entirely meaningless. This was, after all, the program in its compiled form, translated into machine-readable binary rather than any human-readable programming language. To understand the binary, Robinson would have to watch it execute step-by-step on his computer, unraveling it in real time with a common reverse-engineering tool called IDA Pro that translated the function of its commands into code as they ran. "It's almost like you're trying to determine what someone might look like solely by looking at their DNA," Robinson said. "And the god that created that person was trying to make the process as hard as possible." 

By the second week, however, that microscopic step-by-step analysis of the binary finally began to pay off. When he managed to decipher the malware's configuration settings, they contained a so-called campaign code—essentially a tag associated with that version of the malware that the hackers could use to sort and track any victims it infected. And for the BlackEnergy sample dropped by their Ukrainian PowerPoint, that campaign code was one that he immediately recognized, not from his career as a malware analyst, but from his private life as a science fiction nerd: "arrakis02." 

In fact, for Robinson, or virtually any other sci-fi-literate geek, the word “Arrakis” is more than recognizable: It’s as familiar as Tatooine or Middle-earth, the setting of a central pillar of the cultural canon. Arrakis is the desert planet where the novel Dune, the 1965 epic by Frank Herbert, takes place. 

The story of Dune is set in a world where Earth has long ago been ravaged by a global nuclear war against artificially intelligent machines. It follows the fate of the noble Atreides family after they’ve been installed as the rulers of Arrakis—also known as Dune—and then politically sabotaged and purged from power by their evil rivals, the Harkonnens. 

After the Atreides are overthrown, the book's adolescent hero Paul Atreides takes refuge in the planet's vast desert, where thousand-foot-long sandworms roam underground, occasionally rising to the surface to consume everything in their path. As he grows up, Atreides learns the ways of Arrakis's natives, known as the Fremen, including the ability to harness and ride the sandworms. Eventually, he leads a spartan guerrilla uprising, and riding on the backs of sandworms into a devastating battle, he and the native Fremen take the capital city back from the Harkonnens, their insurgency ultimately seizing control of the entire global empire that had backed the Harkonnens' coup. 

“Whoever these hackers were,” Robinson remembers thinking, “it seems like they’re Frank Herbert fans.” 

When he found that arrakis02 campaign code, Robinson could sense he’d stumbled onto something more than a singular clue about the hackers who had chosen that name. He felt for the first time that he was seeing into their minds and imaginations. In fact, he began to wonder if it might serve as a kind of fingerprint. Perhaps he could match it to other crime scenes. 

Over the next days, Robinson set the Ukrainian PowerPoint version of BlackEnergy aside and went digging, both in iSight’s archives of older malware samples and in a database called VirusTotal. Owned by Google’s parent company, Alphabet, VirusTotal allows any security researcher who’s testing a piece of malware to upload it and check it against dozens of commercial antivirus products—a quick and rough method to see if other security firms have detected the code elsewhere and what they might know about it. As a result, VirusTotal has assembled a massive collection of in-the-wild code samples amassed over more than a decade that researchers can pay to access. Robinson began to run a series of scans of those malware records, searching for similar snippets of code in what he’d unpacked from his BlackEnergy sample to match earlier code samples in iSight’s or VirusTotal’s catalog. 
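
For a concrete picture of that kind of lookup, here is a minimal sketch against VirusTotal's current v3 REST API; the interface available to researchers in 2014 differed, and the API key and file hash below are placeholders.

```python
import requests

API_KEY = "YOUR_VT_API_KEY"   # placeholder credential
SHA256 = "0" * 64             # placeholder hash of a suspicious sample

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SHA256}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
if resp.status_code == 200:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"Flagged malicious by {stats['malicious']} engines")
elif resp.status_code == 404:
    print("No engine in the collection has seen this file")
else:
    print(f"Lookup failed: HTTP {resp.status_code}")
```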

Soon he had a hit. Another BlackEnergy sample from four months earlier, in May 2014, was a rough duplicate of the one dropped by the Ukrainian PowerPoint. When Robinson dug up its campaign code, he found what he was looking for: houseatreides94, another unmistakable Dune reference. This time the BlackEnergy sample had been hidden in a Word document, a discussion of oil and gas prices apparently designed as a lure for a Polish energy company. 

For the next few weeks, Robinson continued to scour his archive of malicious programs. He eventually wrote his own tools that could scan for the malware matches, automate the process of unlocking the files’ layers of obfuscating encryption, and then pull out the campaign code. His collection of samples slowly began to grow: BasharoftheSardaukars, SalusaSecundus2, epsiloneridani0, as if the hackers were trying to impress him with their increasingly obscure knowledge of Dune’s minutiae. 
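
A sketch of what such a tool might look like, assuming the samples have already been unpacked (for instance with something like the toy routine above): scan each file for the Dune-derived campaign codes iSight recovered. The directory name and file extension are placeholders.

```python
import re
from pathlib import Path

# Campaign codes named in iSight's research; the \w* tail also catches numeric
# suffixes such as "arrakis02" or "houseatreides94".
DUNE_RE = re.compile(
    rb"(arrakis|houseatreides|salusasecundus|epsiloneridani|basharofthesardaukars)\w*",
    re.IGNORECASE,
)

def scan(sample_dir: str) -> None:
    for path in sorted(Path(sample_dir).glob("*.unpacked")):
        hits = {m.group().decode(errors="replace")
                for m in DUNE_RE.finditer(path.read_bytes())}
        if hits:
            print(path.name, ", ".join(sorted(hits)))

scan("./blackenergy_samples")
```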

Each of those Dune references was tied, like the first two he’d found, to a lure document that revealed something about the malware’s intended victims. One was a diplomatic document discussing Europe’s “tug-of-war” with Russia over Ukraine as the country struggled between a popular movement pulling it toward the West and Russia’s lingering influence. Another seemed to be designed as bait for visitors attending a Ukraine-focused summit in Wales and a NATO-related event in Slovakia that focused in part on Russian espionage. One even seemed to specifically target an American academic researcher focused on Russian foreign policy, whose identity iSight decided not to reveal publicly. Thanks to the hackers’ helpful Dune references, all of those disparate attacks could be definitively tied together. 

But some of the victims didn’t look quite like the usual subjects of Russian geopolitical espionage. Why exactly, for instance, were the hackers focused on a Polish energy company? Another lure, iSight would later find, targeted Ukraine’s railway agency, Ukrzaliznytsia. 

But as Robinson dug deeper and deeper into the trash heap of the security industry, hunting for those Dune references, he was most struck by another realization: While the PowerPoint zero day they’d discovered was relatively new, the hackers’ broader attack campaign stretched back not just months but years. The earliest appearance of the Dune-linked hackers’ lures had come in 2009. Until Robinson had managed to piece together the bread crumbs of their operations, they’d been penetrating organizations in secret for half a decade. 

After six weeks of analysis, iSight was ready to go public with its findings: It had discovered what appeared to be a vast, highly sophisticated espionage campaign with every indication of being a Russian government operation targeting NATO and Ukraine. 

As Robinson painstakingly unraveled that operation, his boss, John Hultquist, became almost as fixated on the Russian hackers' work as the malware analysts scrutinizing their code were. Robinson sat on the side of the bull pen closest to Hultquist's office, and Hultquist would shout questions to him, his Tennessee-accented bellow easily penetrating the wall. By the middle of October, Hultquist was invading the bull pen on an almost daily basis to ask Robinson for updates as the mystery spun out from that first PowerPoint zero day. 

For all the hackers’ clever tricks, Hultquist knew that getting any attention for their discovery would still require media savvy. At the time, Chinese cyberspies, not Russian ones, were public enemy number one for the American media and security industry. Companies from Northrop Grumman to Dow Chemical to Google had all been breached by Chinese hackers in a series of shocking campaigns of data theft—mostly focused on intellectual property and trade secrets—that the then NSA director, Keith Alexander, called the “greatest transfer of wealth in history.” A Russian espionage operation with unsurprising eastern European targets like this one, despite all its insidious skill and longevity, nonetheless risked getting lost in the noise. 

The hackers would need a catchy, attention-grabbing name. Choosing it, as was the custom in the cybersecurity industry, was iSight's prerogative as the firm that had uncovered the group.* And clearly that name should reference the cyberspies' apparent obsession with Dune. 
* In fact, iSight wasn’t necessarily the first to piece together this hacker group’s fingerprints. The Slovakian firm ESET was, around the same time, making the same discoveries, including even the Dune-themed campaign codes in the group’s malware. ESET even presented its findings at the Virus Bulletin conference in Seattle in September 2014. But because ESET didn’t publish its findings online, iSight’s analysts told me they weren’t aware of its parallel research, and iSight has been widely credited—perhaps mistakenly—with discovering Sandworm first.

Robinson, a Dune fan since he was a teenager, suggested they label the hacking operation “Bene Gesserit,” a reference to a mystical order of women in the book who possess near-magical powers of psychological manipulation. Hultquist, who had never actually read Frank Herbert’s book, vetoed the idea as too abstruse and difficult to pronounce. 

Instead, Hultquist chose a more straightforward name, one he hoped would evoke a hidden monster, moving just beneath the surface, occasionally emerging to wield terrible power—a name more fitting than Hultquist himself could have known at the time. He called the group Sandworm.

FORCE MULTIPLIER 
Six weeks after they’d first discovered Sandworm, iSight’s staff held a round of celebratory drinks in the office, gathering at a bar the company kept fully stocked down the hall from the analysts’ bull pen. Sandworm’s debut onto the world stage had been everything Hultquist had hoped for. When the company went public with its discovery of a five-years-running, zero-day-equipped, Dune-themed Russian espionage campaign, the news had rippled across the industry and the media, with stories appearing in The Washington Post, Wired, and countless tech and security industry trade publications. Robinson remembers toasting Hultquist with a glass of vodka, in honor of the new species of Russian hacker they’d unearthed. 

But that same evening, 2,500 miles to the west, another security researcher was still digging. Kyle Wilhoit, a malware analyst for the Japanese security firm Trend Micro, had spotted iSight’s Sandworm report online that afternoon, in the midst of the endless meetings of the corporate conference he was attending at a hotel in Cupertino, California. Wilhoit knew iSight by reputation and John Hultquist in particular and made a note to take a closer look at the end of the day. He sensed that discoveries as significant as iSight’s tended to cascade. Perhaps it would shake loose new findings for him and Trend Micro. 

That night, sitting outside at the hotel bar, Wilhoit and another Trend Micro researcher, Jim Gogolinski, pulled out their laptops and downloaded everything that iSight had made public—the so-called indicators of compromise it had published in the hopes of helping other potential victims of Sandworm detect and block their attackers. 

Among those bits of evidence, like the plastic-bagged exhibits from a crime scene, were the IP addresses of the command-and-control servers the BlackEnergy samples had communicated back to. As the night wore on and the bar emptied out, Wilhoit and Gogolinski began to check those IP addresses against Trend Micro’s own archive of malware and VirusTotal, to see if they could find any new matches. 

After the hotel’s bar closed, leaving the two researchers alone on the dark patio, Wilhoit found a match for one of those IP addresses, pointing to a server Sandworm had used in Stockholm. The file he’d found, config.bak, also connected to that Swedish machine. And while it would have looked entirely unremarkable to the average person in the security industry, it immediately snapped Wilhoit’s mind to attention. 

Wilhoit had an unusual background for a security researcher. Just two years earlier, he’d left a job in St. Louis as manager of IT security for Peabody Energy, America’s largest coal company. So he knew his way around so-called industrial control systems, or ICS—also known in some cases as supervisory control and data acquisition, or SCADA, systems. That software doesn’t just push bits around, but instead sends commands to and takes in feedback from industrial equipment, a point where the digital and physical worlds meet. 

ICS software is used for everything from the ventilators that circulate air in Peabody's mines to the massive washing basins that scrub its coal, to the generators that burn coal in power plants, to the circuit breakers at the substations that feed electrical power to consumers. ICS applications run factories, water plants, oil and gas refineries, and transportation systems—in other words, all of the gargantuan, highly complex machinery that forms the backbone of modern civilization and that most of us take for granted. 

One common piece of ICS software sold by General Electric is Cimplicity, which includes a kind of application known as a human-machine interface, essentially the control panel for those digital-to-physical command systems. The config.bak file Wilhoit had found was in fact a .cim file, designed to be opened in Cimplicity. Typically, a .cim file loads up an entire custom control panel in Cimplicity's software, like an infinitely reconfigurable dashboard for industrial equipment. 

This Cimplicity file didn’t do much of anything—except connect back to the Stockholm server iSight had identified as Sandworm’s. But for anyone who had dealt with industrial control systems, the notion of that connection alone was deeply troubling. The infrastructure that runs those sensitive systems is meant to be entirely cut off from the internet, to protect it from hackers who might sabotage it and carry out catastrophic attacks. 

The companies that run such equipment, particularly the electric utilities that serve as the most fundamental layer on which the rest of the industrialized world is built, constantly offer the public assurances that they have a strict “air gap” between their normal IT network and their industrial control network. But in a disturbing fraction of cases, those industrial control systems still maintain thin connections to the rest of their systems—or even the public internet—allowing engineers to access them remotely, for instance, or update their software. 

The link between Sandworm and a Cimplicity file that phoned home to a server in Sweden was enough for Wilhoit to come to a startling conclusion: Sandworm wasn't merely focused on espionage. Intelligence-gathering operations don't break into industrial control systems. Sandworm seemed to be going further, trying to reach into victims' systems from which it could potentially hijack physical machinery, with physical consequences. 

“They’re gathering information in preparation to move to a second stage,” Wilhoit realized as he sat in the cool night air outside his Cupertino hotel. “They’re possibly trying to bridge the gap between digital and kinetic.” The hackers’ goals seemed to extend beyond spying to industrial sabotage. 

Wilhoit and Gogolinski didn’t sleep that night. Instead, they settled in at the hotel’s outdoor table and started scouring for more clues of what Sandworm might be doing in ICS systems. How was it gaining control of those interfaces? Who were its targets? The answers continued to elude them. 

They skipped all their meetings the next day, writing up their findings and posting them on Trend Micro’s blog. Wilhoit also shared them with a contact at the FBI who—in typically tight-lipped G-man fashion—accepted the information without offering any in return. 

Back in his Chantilly office, John Hultquist read Trend Micro’s blog post on the Cimplicity file. He was so excited that he didn’t even think to be annoyed that Trend Micro had found an unturned stone in the middle of iSight’s major discovery. “It totally opened up a new game,” Hultquist said. 

Suddenly those misfit infrastructure targets among Sandworm’s victims, like the Polish energy firm, made sense. Six weeks earlier, iSight had found the clues that shifted its mental model of the hackers’ mission from mere cybercrime to nation-state-level intelligence gathering. Now Hultquist’s idea of the threat was shifting again: beyond cyberspying to cyberwar. “This didn’t look like classic espionage anymore,” Hultquist thought. “We were looking at reconnaissance for attack.”

Hultquist had, in some sense, been searching for something like Sandworm his entire career, long before iSight stumbled into it, before he even knew what form it would take. Like many others in the cybersecurity industry, and particularly those with a military background, he’d been expecting cyberwar’s arrival: a new era that would finally apply hackers’ digital abilities to the older, more familiar worlds of war and terrorism. For Hultquist, it would be a return to form. Since his army days a decade and a half earlier, he’d learned to think of adversaries as ruthless people willing to blow things up, to disrupt infrastructure, and to kill him, his friends, and innocent civilians he’d been tasked to protect. 

An army reservist from the tiny town of Alcoa in eastern Tennessee, Hultquist had been called up in the midst of college to serve in Afghanistan after September 11. Soon the twenty-year-old found himself in Kandahar province in a Civil Affairs unit. Their job was to roll around the countryside in a six-man team, meeting with the heads of local villages in an effort to win hearts and minds. “We were still armed to the teeth, of course,” Hultquist told me, followed by a kind of cackle that punctuates many of his stories. “It was high adventure.” He let his black beard grow wild and came to be known within the unit as Teen Wolf. 

His Civil Affairs unit’s motto, printed across a badge on their uniforms’ shoulder, was vis amplificans vim, a phrase his superiors had told him roughly translated to “force multiplier.” The idea was to build relationships with local civilians that would aid in and expand on the less subtle work of expelling and killing the Taliban; they were the carrot to the infantry and Special Forces’ stick. They’d have lunch with a group of village elders, ask them what they needed over a meal of goat and flatbread, and then, say, dig them a well. “Sometimes we’d come back a couple weeks later and they’d tell us where an ammo cache was hidden,” Hultquist says. 

In those early days of the war, the Taliban had already mostly fled the country, evaporating away from the initial U.S. invasion into the mountains of Pakistan. As they slowly began to slip back into Afghanistan in the months that followed, however, the violence ramped up again. One night, a Taliban guerrilla shot two rockets at the building where Hultquist and his unit were sleeping. One missed, banking skyward. The other, by a stroke of luck, failed to explode and was defused by their explosive-ordnance unit. Just days later, those same bomb technicians were killed when explosives they were defusing in a hidden Taliban rocket cache suddenly detonated. Hultquist and his unit were the first to the scene and spent hours collecting their dismembered body parts. 

After the invasion of Iraq in 2003, Hultquist was transferred there, a deployment that was immediately as intense and bloody as Afghanistan had grown to be. In Iraq, the war quickly shifted to a hunt for a largely invisible force of saboteurs planting hidden makeshift bombs, a highly asymmetric guerrilla conflict. Hultquist learned how psychologically devastating those repeated, unpredictable, and lethal explosions could be. He’d eventually earn an army commendation for valor for his quick response when a team of fellow soldiers’ Humvee was hit with a roadside bomb, administering first aid and an IV to two men who survived the attack. 

The gunner on top of the vehicle, however, had died instantly in the blast. When the bomb had gone off, he’d had grenades strapped to his chest so that he could quickly feed them into the launcher. Hultquist still remembers the sound of those grenades exploding one by one as the man’s body burned. 

Hultquist completed his tour of duty, returned to the United States, and finished college. After graduating, he got a job teaching a course on psychological operations at Fort Dix in New Jersey and then moved to one of the Information Sharing and Analysis Centers, or ISACs, that had been created around the country in the years after 9/11 to address possible terrorism threats. He was assigned to focus on the problem of highway safety and later the security of water systems and railways, thinking up countermeasures to grim scenarios like attackers plowing large vehicles into crowds or planting bombs in vehicles’ cargo holds, as terrorists had done in Sri Lanka in cases he studied. 

He was introduced to the digital side of those security threats only in 2006, when he joined the State Department as a junior intelligence analyst contractor, tasked mostly with helping to protect the agency’s own networks from hackers. At the time, China’s state-sponsored cyberspying campaigns were just coming into focus as a serious problem for America’s national security and even its commercial dominance. In the mid-2000s, a series of intrusions known as Titan Rain, believed to be carried out by cyberspies working for China’s People’s Liberation Army, had broken into Lockheed Martin, Sandia National Labs, and NASA. By the time Hultquist started his job at State, reports were surfacing on an almost weekly basis of Chinese espionage that had breached the networks of targets from defense contractors to tech companies. “They were stealing all of our intellectual property, and all of our attention,” Hultquist says of the Chinese hackers. 

But from his first years tracking state-sponsored cyberspies in the U.S. government, Hultquist gravitated to a different, less considered form of digital attack. After his experience trying to outthink insurgents and terrorists in the army and then at the ISACs, he naturally focused not on espionage but on the threats capable of inflicting psychological disruption on an enemy, shutting down civilian resources and creating chaos. 

In 2007, for instance, Estonia had come under a punishing, unprecedented barrage of DDoS attacks that all seemed to originate in Russia. When Estonian police cracked down on riots incited by the country’s Russian-speaking minority, targeted floods of junk traffic knocked Estonia’s government, media, and banking sites off-line for days in a networked blitzkrieg like nothing the world had ever seen before. The next year, when war broke out between Russia and Georgia, another of its post-Soviet neighbors, crude cyberattacks pummeled that country’s government and media, too. Russia, it seemed to Hultquist, was trying out basic methods of pairing traditional physical attacks with digital weapons of mass disruption. 

Back then, Hultquist had mostly watched from the sidelines. He’d studied the Estonian and Georgian attacks, met with researchers who tracked them, and briefed senior officials. But he’d rarely been able to pull their attention away from the massive siphoning of state secrets and intellectual property being carried out by China’s hackers, a threat that seemed far more immediate to American interests. 

Now, years later, iSight's Sandworm discovery had put Hultquist at the vanguard of what seemed to be a new, far more advanced form of Russian cyberwar. In the midst of Russia's invasion of Ukraine, a team of Russian hackers was using sophisticated penetration tools to gain access to its adversaries' infrastructure, potentially laying the groundwork to attack the underpinnings of civilian society, hundreds of miles beyond the front lines: He imagined sabotaged manufacturing, paralyzed transportation, blackouts. 

As Sandworm’s mission crystallized in his mind, a phrase from his time in the army’s Civil Affairs unit came to him from more than a decade earlier: vis amplificans vim.

After he read Trend Micro's report, Hultquist's fascination grew: Sandworm had transformed in his mind from a vexing puzzle to a rare and dangerous geopolitical phenomenon. He began to bring it up constantly with iSight's analysts, with any reporter he spoke to, with other members of the security industry, and with the D.C. intelligence community. For iSight's office Halloween party, he even made himself a Sandworm costume out of a green children's play tunnel, an expression of his pet preoccupation that was perhaps only partly a self-mocking joke. "Sandworm was my favorite thing," Hultquist said simply. 

He was nonetheless frustrated to find that after the initial hype around iSight’s discovery, his Sandworm-watchers club didn’t have many other members. The mainstream media seemed to have, for the moment, largely exhausted its interest in the group. Vague hints of a technically convoluted connection to infrastructure attacks weren’t enough, it seemed, to attract even a fraction of the attention that iSight had initially brought to Sandworm’s zero day and secret Dune clues. 

But Hultquist didn’t know that someone else had been tracking the group’s campaign of intrusions, too, and had quietly assembled by far the most disturbing portrait of the group yet. 

Thirteen days after Trend Micro had released its findings on Sandworm's connection to industrial control system attacks, the division of the Department of Homeland Security known as the Industrial Control Systems Cyber Emergency Response Team, or ICS-CERT, released its own report. ICS-CERT acts as a specialized infrastructure-focused government cybersecurity watchdog tasked with warning Americans about impending digital security threats. It had deep ties with U.S. utilities like power and water suppliers. And now, perhaps triggered by iSight and Trend Micro's research, it was confirming Hultquist's worst fears about Sandworm's reach. 

Sandworm, according to the ICS-CERT report, had built tools for hacking not only the GE Cimplicity human-machine interfaces Trend Micro had noted but also similar software sold by two other major vendors, Siemens and Advantech/Broadwin. The report stated that the intrusions of industrial control system targets had begun as early as 2011 and continued until as recently as September 2014, the month iSight detected Sandworm. And the hackers had successfully penetrated multiple critical infrastructure targets, though none were named in the document. As far as ICS-CERT could tell, the operations had only reached the stage of reconnaissance, not actual sabotage. 

iSight’s analysts began discreetly following up on the DHS report with their sources in the security industry and quickly confirmed what they’d read between the lines: Some of Sandworm’s intrusions had occurred at infrastructure targets that weren’t just Ukrainian or Polish. They were American. 

Less than two months after iSight had found its first fingerprints, Hultquist’s idea of Sandworm had shifted yet again. “This was a foreign actor who had access to zero days making a deliberate attempt on our critical infrastructure,” Hultquist says. “We’d detected a group on the other side of the world carrying out espionage. We’d pored over its artifacts. And we’d found it was a threat to the United States.” 

Even the revelation that Sandworm was a fully equipped infrastructure-hacking team with ties to Russia and global attack ambitions never received the attention Hultquist thought it deserved. It was accompanied by no statement from White House officials. The security and utility industry trade press briefly buzzed with the news and then moved on. “It was a sideshow, and no one gave a shit,” Hultquist said with a rare hint of bitterness. 

But all the attention seemed to have finally reached one audience: Sandworm itself. When iSight looked for the servers connected with the malware again after all of the public reports, the computers had been pulled off-line. The company would find one more BlackEnergy sample in early 2015 that seemed to have been created by the same authors, this time without any Dune references in its campaign codes. It would never find that sort of obvious, human fingerprint again; the group had learned from the mistake of revealing its sci-fi preferences. 

Sandworm had gone back underground. It wouldn’t surface again for another year. When it did, it would no longer be focused on reconnaissance. It would be primed to strike.  

STARLIGHTMEDIA 
On a calm Sunday morning in October 2015, more than a year before Yasinsky would look out of his kitchen window at a blacked-out skyline, he sat near that same window in his family’s high-rise apartment in Kiev, sipping tea and eating a bowl of cornflakes. Suddenly his phone buzzed with a call from an IT administrator at work. 

Yasinsky was, at the time, employed as the director of information security at StarLightMedia, Ukraine’s largest TV broadcasting conglomerate. The night before, his colleague on the phone told him, two of StarLight’s servers had inexplicably gone off-line. The admin assured Yasinsky that it wasn’t an emergency. The machines had already been restored from backups. 

But as Yasinsky quizzed his colleague further about the server outage, one fact immediately made him feel uneasy. The two machines had gone dark at almost the same minute. “One server going down, it happens,” Yasinsky thought. “But two servers at the same time? That’s suspicious.” 

Resigned to a lost weekend, he left his apartment and began his commute to StarLight’s offices, descending the endless escalator that leads into Kiev’s metro, one of the deepest in the world and designed during the Cold War to serve as a series of potential bomb shelter tunnels. After forty minutes underground, Yasinsky emerged into the cool autumn air of central Kiev. He took the scenic route to the office, walking through Taras Shevchenko Park and the university campus next to it. As he passed street musicians, college students on dates, then the botanical gardens, whose leaves were beginning to turn, the dismal war that had broken out in the east of the country felt far away. 

Yasinsky arrived at StarLightMedia's office, a five-story building on a quiet street. Inside, he and the company's IT administrators began examining the image they'd kept of one of the corrupted servers, a digital replica of all its data. Yasinsky's hunch that the outage was no accident was immediately confirmed. The server's master boot record—the deep-seated, reptile-brain portion of a computer's hard drive that tells the machine where to find its own operating system—had been precisely overwritten with zeros. And the two victim servers that had suffered that lobotomy weren't randomly chosen. They were domain controllers, computers with powerful privileges that could be used to reach into hundreds of other machines on the corporate network. 
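
A responder can verify that kind of wipe with a few lines, sketched below: the master boot record occupies the first 512-byte sector of a drive and normally ends in the 0x55 0xAA boot signature, so a wiped record reads back as zeros. The device paths are examples, and reading them requires administrator privileges.

```python
# Example device path; on Windows, something like r"\\.\PhysicalDrive0".
DISK = "/dev/sda"
SECTOR_SIZE = 512

with open(DISK, "rb") as disk:
    mbr = disk.read(SECTOR_SIZE)

if mbr == b"\x00" * SECTOR_SIZE:
    print("Master boot record is all zeros: it has been wiped")
elif mbr[510:512] != b"\x55\xaa":
    print("Boot signature missing: master boot record is damaged")
else:
    print("Boot signature intact")
```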

Yasinsky quickly discovered the attack had indeed gone far beyond just those two machines. Before they had been wiped, the pair of corrupted servers had themselves planted malware on the laptops of thirteen StarLight employees. The staffers had been preparing a morning TV news bulletin ahead of Kiev’s local elections when they suddenly found that their computers had been turned into black-screened, useless bricks. The infection had triggered the same boot record overwrite technique on each of their hard drives.

Nonetheless, Yasinsky could see that his company had been lucky. When he looked at StarLightMedia’s network logs, it appeared the domain controllers had committed suicide prematurely. They’d actually been set to infect and destroy two hundred more of the company’s PCs. Someone had carefully planted a logic bomb at the heart of the media firm’s network, designed to cause it as much disruption as possible. 

Yasinsky managed to pull a copy of the destructive program from the backups, and that night, back at home in the north of the city, he scrutinized its code. He was struck by the layers of obfuscation; the malware had evaded all antivirus scans. It had even impersonated an antivirus scanner itself, Microsoft’s Windows Defender. After his family had gone to sleep, Yasinsky printed the code and laid the papers across his kitchen table and floor, crossing out lines of camouflaging characters and highlighting commands to see the malware’s true form. 

Yasinsky had been working in information security for twenty years. After a stint in the army, he’d spent thirteen years as an IT security analyst for Kyivstar, Ukraine’s largest telecommunications firm. He’d managed massive networks and fought off crews of sophisticated cybercriminal hackers. But he’d never analyzed such a well-concealed and highly targeted digital weapon.

As a security researcher, Yasinsky had long prided himself on a dispassionate and scientific approach to the problems of information security, drilling into the practical details of digital defense rather than obsessing over the psychology of his adversary. But as he followed Sandworm’s tracks through StarLightMedia’s network, he nonetheless could sense he was facing an enemy more sophisticated than he’d ever seen before. 

Oleksii Yasinsky had understood intuitively from childhood that the digital was no less real than the physical—that life and death could depend as easily on one as on the other. 

As a nine-year-old growing up in Soviet Kiev in 1985, he’d sneak a copy of the state-issued magazine Tekhnika Molodezhi, or “Technology for the Youth,” under his blanket, along with a flashlight and his treasured MK-61 calculator. He’d flip to the pages devoted to the continuing adventures of the two fictional cosmonauts Korshunov and Perepyolkin. The pair, through the vagaries of fate, had found themselves stuck on the moon with only a lunar transport vehicle designed for short trips. Even worse, they were low on fuel, with no electronic guidance system. It was Yasinsky’s secret responsibility, in his illicit post-bedtime cocoon, to get those two men home by copying commands from the magazine into his programmable calculator. 

“The life of two people helplessly dangling in space depended on this little boy,” Yasinsky would later write in a journal, describing the intensity of that first programming experience. 

Back then I did not yet understand the meaning hidden in the neat columns of mysterious characters printed on yellowed pages of the magazine. Pages seemed to be torn from some sort of wizard manuscript, and I was clicking on the soft gray keys of the calculator anticipating a new adventure. But even at the time I knew: this was the key to a completely different world, or, more precisely, the myriad of other worlds I could create myself. 

Yasinsky grew up in a two-room home in a typical five-story, Khrushchev-era Soviet apartment complex in Kiev. He was a child of engineers: His father worked in a record-player factory, and his mother was a university researcher in aerospace metals. He had, as he describes it, a very typical Soviet childhood. He proudly wore the red-scarfed uniform of Lenin’s Young Pioneers to school every day, played in the building’s courtyard with his friends, and occasionally broke neighbors’ windows with a soccer ball. He remembers no politics ever being discussed at home, with the exception of a few whispers from his parents in the kitchen about a visit his great-grandparents had received from the secret police, a conversation quickly cut short for fear of eavesdropping neighbors.

School never interested Yasinsky as much as the adventures he unlocked with his MK-61 calculator. It was, after all, his first computer, at a time when the Apple IIs and Nintendo consoles of the West had yet to penetrate the Iron Curtain. But when Yasinsky was around twelve, his father managed to collect and then assemble the components of a Sinclair Spectrum PC. It was, for Yasinsky, a mind-blowing upgrade. He spent hours painstakingly reading manuals he found photocopied at the local radio market, writing code in BASIC and later assembly, filling the screen with pixel art depictions of wireframe spaceships.

The moment he believes turned his obsession with computers from a hobby to a career, however, was an act not of programming but of reverse engineering. Simply by changing a few bytes in the code of a primitive shooter video game, he discovered he could endow his character with unlimited lives and ammunition. That basic act of hacking, for Yasinsky, wasn’t merely a way to cheat in a meaningless game. It was instead as if he’d gained new powers to reshape reality itself. “I had turned the world upside down. I’d gone into the other side of the screen,” Yasinsky remembers. 

It followed intuitively, for him, that if this power could change the digital world, it could control the physical universe, too. “I realized the world is not what we see,” he says. “It wasn’t about getting extra lives; it was about changing the world I’d found myself in.” 

In the late 1980s, however, came Gorbachev’s policy of glasnost, or “openness,” and with it a flood of Western distractions. For Yasinsky and his young teenage friends, the influx of global media took the form of Jean-Claude Van Damme and Bruce Lee kung fu films. A karate and judo obsession briefly superseded his love for computers. Yasinsky was a talented enough fighter that in 1993 he was selected for the Ukrainian national karate championships. But in one of those tournament matches, he says, an opponent struck him with an illegal kick just below the knee, tearing the ligaments in the back of his leg and ending his brief martial arts career. “Fortunately, I still had Assembly,” Yasinsky wrote in his journal. 

After two years studying computer science at the Kiev Polytechnic Institute, Yasinsky was drafted into the army. He describes the next year and a half as a long lesson in discipline, organization, self-confidence, and intensely rigorous drudgery. “A soldier’s best friend is a shovel, and it’s good to be a soldier,” he remembers his superiors drumming into him. Aside from that bit of character building, he says that he learned nothing except how to properly make a bed.

When he was discharged and resumed his university education, he returned at last to computer science. He found that there was an emerging field within the discipline that appealed to his sense of the hidden structure of the world and the levers that moved it: cybersecurity.

Yasinsky learned only its barest basics in his studies. But when he graduated, he landed a job at Kyivstar, then Ukraine’s largest telecom provider. That job, he says, gave him his real education. Though most of his career there is protected by a nondisclosure agreement, he hints that he worked on the company’s team that fights fraud and crime and served as a consultant to law enforcement. He also says that the job was his first experience learning to sift through massive data sets to fight intelligent, malicious adversaries. “It was like the Matrix,” he says. “You look at all these numbers and you can see real human behavior.” 

After six years, Yasinsky moved on to a purely digital version of the same cat-and-mouse game: Rather than physical-world criminals, he was tasked with tracking the hackers who sought to exploit Kyivstar’s systems. In the late 2000s, those hackers were transitioning from opportunistic criminal schemes to highly organized fraud operations. Yasinsky found himself engaged in the same sort of reverse engineering that had captivated him as a teenager. But instead of taking apart the code of a mere video game, he was dissecting elaborate criminal intrusions, deconstructing malware to see the intentions of the devious parasites within Kyivstar’s network. 

Even as the stakes of that cat-and-mouse game escalated, it had seemed like a fair fight. In cybersecurity, attackers have the advantage: There are always more points of ingress than defenders can protect, and a skilled hacker needs only one. But these were nonetheless mostly small criminal operations facing a well-organized corporate security team capable of identifying their incursions and limiting the damage they could inflict. 

Then, not long before the outbreak of Ukraine’s war with Russia, Yasinsky took a position as chief information security officer at StarLightMedia. And he found himself facing a new form of conflict— one for which neither his company nor his country nor the world at large was prepared. 

By the fall of 2015, only the smallest hints of that conflict’s scale were visible. For days, Yasinsky worked to determine the basic facts of the mysterious attack on StarLightMedia, reverse engineering the obfuscated code he’d pulled from the company’s backups, the digital IED that had nearly devastated its network. Beneath all its cloaking and misdirection, Yasinsky determined, was a piece of malware known as KillDisk, a data-destroying tool that had been circulating among hackers for about a decade.* 
* Two security researchers, Michael Goedeker and Andrii Bezverkhyi, say they and Bezverkhyi’s security firm, SOC Prime, were deeply involved in StarLightMedia’s investigation. But Yasinsky disputed the extent of this cooperation, telling me that while he had shared some information with Bezverkhyi and SOC Prime had provided some tools for their work, neither Goedeker nor anyone from SOC Prime had contributed to StarLightMedia’s final analysis.

Understanding how that destructive program got into StarLightMedia’s system would take weeks longer: Along with two colleagues, Yasinsky obsessively dug into the company’s network logs, combing them again and again, working through nights and weekends to parse the data with ever finer filters, hoping to extract clues. 

The team began to find the telltale signs of the hackers’ presence—some compromised corporate YouTube accounts, an administrator’s network log-in that had remained active even when he was out sick. Slowly, with a sinking dread, they found evidence showing that the intruders had been inside their network for weeks before detonating their attack’s payload. Then another clue suggested they’d been inside the system for three months. Then six.

Finally, they identified the piece of malware that had given the hackers their initial foothold, penetrating one of the staff’s PCs via an infected attachment: It was again a form of BlackEnergy, the same malware that iSight had tied to Sandworm a year earlier. But now it had been reworked to evade detection by antivirus software and included new modules that allowed the hackers to spread to other machines on the same network and execute the KillDisk data wiper.

As he dug into the forensics of how his company had been sabotaged, Yasinsky began to hear from colleagues at other firms and in the government that they too had been hacked, and in almost exactly the same way. A competing media company, TRK, hadn’t gotten off as easily: It had lost more than a hundred computers to the KillDisk attack. Another intrusion had hit Ukrzaliznytsia, Ukraine’s biggest railway company. Yasinsky would later learn that Kiev’s main airport, Boryspil, had been struck. There were other targets, too, ones that asked Yasinsky to keep their breaches secret. Again and again, the hackers used the all-purpose BlackEnergy malware for access and reconnaissance, then KillDisk for data destruction. Their motives remained an enigma, but their marks were everywhere. 

“With every step forward, it became clearer that our Titanic had found its iceberg,” says Yasinsky. “The deeper we looked, the bigger it was.” 

next: HOLODOMOR TO CHERNOBYL

FAIR USE NOTICE
This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. As a journalist, I am making such material available in my efforts to advance understanding of artistic, cultural, historic, religious and political issues. I believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. Copyrighted material can be removed at the request of the owner.

