BEN NIMMO, ERIC HUTCHINS
SUMMARY
The online threatscape in 2023 is characterized by an unprecedented variety of actors, types of operation, and threat response teams. Threat actors range from intelligence agencies and troll farms to child-abuse networks. Abuses range from hacking to scams, election interference to harassment. Responders include platform trust-and-safety teams, government agencies, open-source researchers, and others. As yet, these responding entities lack a shared model to analyze, describe, compare, and disrupt the tactics of malicious online operations. Yet the nature of online activity—assuming the targets are human—is such that there are significant commonalities between these abuse types: widely different actors may follow the same chain of steps. By conducting a phase-based analysis of different violations, it is possible to isolate the links in the chain within a unified model, where breaking any single link can disrupt at least part of the operation, and breaking many links—“completing the kill chain”—can disrupt it comprehensively. Using this model will allow investigators to analyze individual operations and identify the earliest moments at which they can be detected and disrupted. It will also enable them to compare multiple operations across a far wider range of threats than has been possible so far, to identify common patterns and weaknesses in operations. Finally, it will allow different investigative teams across industry, civil society, and government to share and compare their insights into operations and threat actors according to a common taxonomy, giving each a better understanding of the threats and a better chance of detecting and disrupting them.
INTRODUCTION
Governments,1 nonprofit organizations,2 commercial companies,3 academic institutions,4 and social media platforms5 have all invested heavily in setting up teams to tackle some of the abuses within the online environment. In parallel, countries and international institutions have begun work to define and regulate the online space, with initiatives such as the UK’s Online Safety Bill (formerly Online Harms Bill)6 and the EU’s revised Code of Practice on Disinformation7 and Digital Services Act.8
Underpinning these efforts, the research community has conducted foundational work to define and describe the taxonomy of different threats. The cyber espionage community has led the way with the seminal Intrusion Kill Chain,9 the Unified Kill Chain,10 the MITRE ATT&CK framework,11 the Diamond Model of intrusion analysis,12 and the Pyramid of Pain approach to prioritizing detection indicators.13 In the field of influence operations, a number of experts and organizations have proposed kill chains, including Bruce Schneier,14 Clint Watts,15 the Center for Security and Emerging Technology at Georgetown University,16 and the Credibility Coalition Misinfosec Working Group (AMITT and DISARM frameworks).17 The Digital Shadows Photon Research Team has proposed a kill chain for account takeovers;18 Optiv has a cyber fraud kill chain.19 While many of these reference the Intrusion Kill Chain as their inspiration, each is tailored to a specific violation type, such as hacking, influence operations, or fraud.
These models vary in audience and focus. Some are designed for use by specific defenders—for example, the Intrusion Kill Chain, which offers network defenders an intelligence-based framework to disrupt computer exploitation and attack, or Watts’s Social Media Kill Chain, which proposes a model for social media platforms to detect and understand influence operations. Others are broader, such as Schneier’s Influence Operations Kill Chain, which recommends countermeasures against influence operations for tech platforms, intelligence agencies, the media, and educators, among others. Some models focus on threat actors’ tactics (AMITT: “Create fake Social Media Profiles / Pages / Groups”), while others focus on their overall strategies (Schneier: “find the cracks in the fabric of society”). All have enriched the public debate around online operations and our understanding of the threatscape.
However, two key gaps remain. First, public debate is hampered by the lack of a common taxonomy and vocabulary to analyze, describe, and compare different types of online operations.20 One problem can have many names: for example, within the space of online political interference, different frameworks refer to “disinformation,”21 “information operations,”22 “misinformation incidents,”23 “malinformation,”24 and “influence operations”—terms which may have distinct meanings but are often used interchangeably.25 Simultaneously, one word can have many meanings: the term “exploitation” covers both executing unauthorized code on a victim’s system26 and amplifying an influence campaign with bots, trolls, and “useful idiots.”27
Second, each model is designed primarily to analyze a single threat activity, be it hacking, influence operations, spam, or fraud. But online operations are amorphous and do not always fit neatly into a single violation type. For example, the operation known as Ghostwriter28 and an unrelated operation from Azerbaijan29 that Meta disrupted both combined hacking and online disinformation. In 2016, Russian military intelligence famously combined hacking, social media activity, planting of articles by fake personas on mainstream media outlets, and weaponized leaking via a third party.30 Analyzing any of these operations through one threat-specific framework carries the risks of missing other important segments of their activity, underenforcing, and reinforcing siloed approaches to tackling different forms of online abuse.
We have designed the Online Operations Kill Chain to fill these gaps by providing an analytic framework that is designed to be applied to a wide range of online operations—especially those in which the targets are human.31 These include, but are not limited to, cyber attacks, influence operations, online fraud, human trafficking, and terrorist recruitment. It is our hope that a common framework for investigators across platforms, in the open-source community, and within democratic institutions will enable more effective collaboration to analyze, describe, compare, and disrupt online operations.
USING THE ONLINE OPERATIONS KILL CHAIN
The basis of our approach is that, despite their many differences, online operations still have meaningful commonalities. At the most fundamental level—at risk of sounding simplistic—any online operation has to be able to get online. That likely means, at the very least, acquiring an IP address and (depending on the platform) probably an email address or mobile phone number for verification purposes. If the operation runs a website, it will need hosting, administrators, and a content creation platform. If active on social media, it must be able to acquire or create accounts. It will likely try to evade detection by platforms or users by adopting technical and visual disguises, such as stealing a profile picture or obfuscating a piece of code to get past antivirus scanners.32 All these requirements hold true across threat areas, whether the operation is aimed at espionage or election interference, sex trafficking or selling fake Ray-Bans.
The Online Operations Kill Chain builds on those commonalities to propose a unified phase-based framework to analyze many types of operations. It covers the full range of abuses that defender teams routinely tackle, from cyber espionage and influence operations to scams. It is designed to cover multifaceted operations such as Ghostwriter or the Russian military’s hack-and-leak operations, as well as simpler ones. Despite this wide coverage, it focuses on identifying the threat actor’s specific tactical, technical, and procedural activities.
Analysts and investigators can use the kill chain on three levels, whether they work at a tech platform, an open-source institution, or a government body. First, they can apply it to a single operation and use it to sequence that operation’s activity, finding the combination of tactics, techniques, and procedures (TTPs) that would allow for the earliest detection and disruption.33
As a hypothetical example, if investigators identify that an influence operation is using a particular niche email domain to set up fake social media accounts (kill chain phase: acquiring assets); disguising them with profile pictures generated using generative adversarial networks, or GANs, such as StyleGAN 2 (disguising assets phase); and then using those accounts to spam links to state media websites (indiscriminate engagement phase), then they can prioritize finding ways to detect the combination of email provider and GAN profile picture, which potentially could help in disrupting further fake accounts before they post.
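To make this hypothetical concrete, the following minimal Python sketch shows one way an investigative team might encode such a record and detection rule. The phase-to-procedure mapping and the procedure labels are our illustration, not part of the framework itself.

# A kill chain record as a plain mapping: phase name -> set of observed procedures.
# All labels are hypothetical, taken from the example above.
influence_op = {
    "acquiring assets": {"niche email domain", "fake social media accounts"},
    "disguising assets": {"StyleGAN 2 profile pictures"},
    "indiscriminate engagement": {"spamming links to state media"},
}

def matches_early_signature(op: dict[str, set[str]]) -> bool:
    """Detect the combination of email provider and GAN profile picture,
    which appears before the fake accounts ever post."""
    return ("niche email domain" in op.get("acquiring assets", set())
            and "StyleGAN 2 profile pictures" in op.get("disguising assets", set()))

print(matches_early_signature(influence_op))  # True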
Second, they can use the kill chain to compare multiple operations. This can allow them to analyze commonalities between two operations of the same type (such as two harassment campaigns) or between operations of different types (such as a harassment campaign versus a scam), or even to analyze tactical changes in an individual, long-running operation by a particular threat actor by comparing its behavior at different times.34 This, in turn, can provide the necessary data to prioritize countermeasures that could be applied to multiple operations at the same time.
To continue the above hypothetical example, the investigative team could check the kill chain records of other operations to see if the use of either that particular niche domain or StyleGAN 2 images is a recurring pattern. If they find that StyleGAN 2 images have been used by cyber espionage, harassment, and spam networks but the niche email domain has not, they can prioritize finding ways to detect the images, which could enable them to identify many types of operations at an early stage.
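Under the same assumed record structure, that comparison step reduces to counting how many distinct operations share each procedure; the records below are, again, hypothetical.

from collections import Counter

# Hypothetical kill chain records (phase -> procedures) for four unrelated operations.
operations = {
    "influence op": {"acquiring assets": {"niche email domain"},
                     "disguising assets": {"StyleGAN 2 profile pictures"}},
    "espionage net": {"disguising assets": {"StyleGAN 2 profile pictures"}},
    "harassment net": {"disguising assets": {"StyleGAN 2 profile pictures"}},
    "spam net": {"disguising assets": {"StyleGAN 2 profile pictures"}},
}

def recurring_procedures(ops: dict) -> Counter:
    """Count how many distinct operations used each procedure, across threat types."""
    counts: Counter = Counter()
    for phases in ops.values():
        counts.update(set().union(*phases.values()))
    return counts

print(recurring_procedures(operations).most_common(2))
# [('StyleGAN 2 profile pictures', 4), ('niche email domain', 1)]
# The GAN images recur across threat types; the email domain does not.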
Third, and within the limits of privacy regulation, research teams across different disciplines can use the kill chain to share and compare their findings on different operations. Since each investigative team is likely to see different facets of the operation, they can collectively build up a better understanding than any one team could alone.
To extend our hypothetical example further, let us assume that the investigative team shares its kill chain analysis of the initial operation with its peers among tech platforms, law-enforcement institutions, and the open-source community. By pooling their respective insights according to the kill chain’s common framework, this community could identify not only the use of that particular email domain and StyleGAN 2 pictures but also other distinguishing features, such as IP addresses; fictitious personas across social media, blogging, and media platforms; and malware. All of these could then be fed back into each team’s understanding of the overall operation, possibly empowering more precise and earlier detection.
This approach would make defenses more resilient by enabling investigators on different teams to “complete the kill chain”: identify multiple points at which an operation could be detected and disrupted. It would also increase resilience by allowing teams who specialize in very different areas—for example, scams, harassment of human-rights defenders, and election interference—to compare the operations they see, identify the most common TTPs, and prioritize them for countermeasure development.
INTERNAL VERSUS EXTERNAL USE
The kill chain is both an analysis tool for investigators and a vehicle to structure communication. It is designed for use within and between platforms, open-source researchers, and governments.
Within institutions, especially platforms, it allows investigative teams to record the TTPs of different operations according to a unified taxonomy and to identify detection leads and points in the chain where the operation can be disrupted. Indicators for internal sharing can be exceptionally granular, including, for example, the combination of IP address, email domain, malware type, and posting pattern that characterizes the malicious operation. Iterative observations can be made to track an operation’s changes over time.
Between institutions, the kill chain allows different teams to describe the operations they have uncovered according to a unified taxonomy and to identify the weak points in the chain and the partners who could break those links.35 Given the restrictions of privacy-protection and information-sharing arrangements, such communication will likely be less granular or comprehensive. It could, however, mean sharing technical indicators such as IP addresses between industry peers and sharing behavioral indicators with the public, such as the distinctive pairs of URLs posted by the Chinese influence operation that Meta disrupted in late 2021.36
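One way to operationalize those tiers, sketched here with hypothetical field names and clearance levels: tag each indicator with the widest audience cleared to receive it, then filter the full internal record accordingly.

from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    phase: str      # kill chain phase where it was observed
    value: str      # the indicator itself
    audience: str   # widest audience cleared to receive it

# Wider audiences have higher ranks; an indicator can be shared with any
# audience whose rank is at or below its own clearance rank.
RANK = {"internal": 0, "industry": 1, "public": 2}

def shareable(record: list[Indicator], audience: str) -> list[Indicator]:
    """Return the subset of a record cleared for the given audience."""
    return [i for i in record if RANK[i.audience] >= RANK[audience]]

record = [
    Indicator("acquiring assets", "198.51.100.7 (server IP)", "industry"),
    Indicator("acquiring assets", "niche email domain", "internal"),
    Indicator("targeted engagement", "distinctive paired URLs", "public"),
]
for ind in shareable(record, "industry"):
    print(ind.phase, "->", ind.value)
# Shares the IP and the behavioral pattern; withholds the internal-only lead.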
We designed the kill chain to be used by the open-source community as well as platforms (see box 1). We have experienced firsthand how much information open-source researchers can uncover.37 The Online Operations Kill Chain is designed to enable them to structure and share their own research in a standardized way. For example, an open-source unit that identifies the websites, social media assets, naming conventions, and posting patterns of an operation based on publicly available information—all elements that have featured in open-source discoveries before—can use the kill chain to set these out in sequence for the benefit of the public and platforms.
Box 1: Seeing and Sharing
There are significant differences in the sorts of indicators that different members of the defender community can be expected to see. Social media companies and tech providers are more likely to have consistent insights into the infrastructure that underpins different operations on their platforms; open-source researchers are more likely to have consistent insights into online operations’ behavior across many platforms. Moreover, different operations leave very different footprints: a complex, public-facing influence operation will spread across far more surfaces than a spearphishing campaign.
However, these differences should not be overstated: open-source techniques can, under some circumstances, expose many details of an operation’s infrastructure. Moreover, the technical indicators that each platform or provider sees may also vary markedly. No one investigative team—whether platform, government, or open-source—has a monopoly on insights into online operations.
This is why we believe that responsible sharing is crucial to enable a comprehensive response to any given abusive operation. What seems a tangential insight to one team may be the precise detail that another team needs to break open the case, so the best way to defend against online operations is for each member of the defender community to share what information they can, together with their contextual assessment of how each indicator fits into the overall operation.
PRINCIPLES OF THE ONLINE OPERATIONS KILL CHAIN
We have built the Online Operations Kill Chain according to the following principles:
Observation-based: The Online Operations Kill Chain is restricted to TTPs that can be directly observed, such as an operation’s use of internet infrastructure, or demonstrated with high confidence, such as an operation’s use of an encrypted messaging app if an asset links to that app in its bio. It is not designed to track activity that can only be hypothesized, such as an operation’s strategic goal.
Tactical: The kill chain is designed for tactical analysis of online operations. It is not designed to analyze larger phenomena, such as organic social movements, or measure very large-scale vulnerabilities, such as the overall health of a body politic.
Platform-agnostic: We have designed the kill chain to apply to all kinds of platforms—not only social media, but websites and email providers, for example. Some TTPs include real-world activity, such as setting up shell companies or physical offices, or co-opting influencers, journalists, and others to carry out influence activities, as some troll farms are known to have done.38 The precise activity will vary from one surface to another, but the links in the kill chain are constant.
Optimized for human-on-human operations: We have optimized the Online Operations Kill Chain to describe operations in which the source and target are human—for example, an espionage team trying to socially engineer a diplomat, an influence operation trying to co-opt a journalist, or a network sharing child sexual abuse material. The kill chain can be applied to machine-on-machine attacks, but it is not primarily designed with them in mind.
One or many platforms: We have designed the kill chain to be applicable to both single-platform and multiplatform operations. A number of techniques and procedures explicitly reference cross-platform activity, such as backstopping personas by maintaining the same fake identity on multiple social media platforms and using each platform to boost the credibility of the others, running phishing websites, posting content from one platform to another, and switching conversations from direct messages to emails.
Modular: The kill chain reflects the possible phases of an operation, but not every operator goes through every phase. The links in the kill chain can therefore be thought of as modular elements, with not every element present in every case.
TERMINOLOGY
TTPs. We use the industry’s traditional framing of TTPs, where tactics are the highest level of observed behavior. Each tactic is broken down into a number of more specific techniques, and each technique is broken down into the most granular level of procedures.
We consider each tactic to be a separate link in the kill chain: disrupt one tactic, and you can disrupt an entire operation.
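As a minimal sketch of this hierarchy (our illustration, using the engagement example given below in this section), each tactic nests techniques, and each technique nests procedures:

# Tactic -> technique -> procedures. Each top-level tactic is one link
# in the kill chain; the labels follow the engagement example given below.
TAXONOMY = {
    "targeted engagement": {                       # tactic
        "posting to reach a specific audience": [  # technique
            "posting hashtags",                    # procedures
            "replying to a target account",
        ],
    },
}

def procedures_for(tactic: str) -> list[str]:
    """Flatten every procedure recorded under one tactic (one kill chain link)."""
    return [proc for procs in TAXONOMY.get(tactic, {}).values() for proc in procs]

print(procedures_for("targeted engagement"))
# ['posting hashtags', 'replying to a target account']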
Assets. Anything that the operation controls or gains access to can be an asset. This can include both online and offline resources. Online assets include various types of social media and email accounts, but also websites, cryptocurrency wallets, and malware. Offline assets include SIM cards, bank accounts, office buildings (such as the “troll farms” exposed in Albania39 and Nicaragua40), and even office furniture (such as the beanbags that characterized one Russian troll farm).41
Information. We understand “information” in the broadest sense, to include electronic data and information about the real world. For example, a list of targets’ social media accounts, a database of compromised passwords, the movements of ships and aircraft, or the office address of a business would all count as information for the purposes of our kill chain.
Engagement. Engagement is any way an operation attempts to interact with people who are not part of it. It does not presuppose that the attempt is successful: a network like the Chinese Spamouflage network42 has sometimes used common hashtags to attract attention (tactic: targeted engagement; technique: posting to reach a specific audience; procedure: posting hashtags), but its posts have typically received no engagement from accounts outside the operation itself.
Harm. We consider “harm” to be any behavior that actually or potentially puts people at risk of physical harm, deceives or defrauds them, compromises their personal information, silences their voice, or promotes criminal activity.
We developed the Online Operations Kill Chain based on analysis of the behaviors that Meta’s threat intelligence teams regularly tackle, such as cyber espionage, influence operations, human exploitation, terrorism and organized crime, scams, and coordinated reporting and harassment. Other platforms and entities may see scope for additional harms.
Online operation. As noted above, we use the term “online operation” as shorthand to describe a coordinated set of activities conducted by a threat actor with the apparent intent of causing harm. The kill chain is designed to analyze online operations, identify their weak points, and enable investigators to disrupt them.
THE ONLINE OPERATIONS KILL CHAIN
The kill chain consists of ten links. Each link represents a top-level tactic—a broad approach that threat actors use. Each tactic is broken down into more detailed techniques, which break down into yet more detailed procedures (see table 1). Procedures can be coupled with nonbehavioral metadata (such as country of origin) to produce a fine-grained picture of the operation.
At ten links, the Online Operations Kill Chain is longer than most other kill chains. This is primarily because most kill chains begin with the “reconnaissance” phase. It is our position that for an operation to conduct reconnaissance, especially on social media, it most likely will have gone through other steps first (such as acquiring IP addresses, emails and/or phone numbers, and social media accounts, as well as likely disguising those assets to make them harder to detect). These “upstream” stages are reflected in our kill chain—although not all platforms or entities will be able to observe them.
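For tooling purposes, the ten links described in the sections that follow can be encoded as an ordered enumeration. This is a convenience sketch of ours; the framework itself prescribes no particular representation.

from enum import IntEnum

class Phase(IntEnum):
    """The ten links of the Online Operations Kill Chain, in order."""
    ACQUIRING_ASSETS = 1
    DISGUISING_ASSETS = 2
    GATHERING_INFORMATION = 3
    COORDINATING_AND_PLANNING = 4
    TESTING_PLATFORM_DEFENSES = 5
    EVADING_DETECTION = 6
    INDISCRIMINATE_ENGAGEMENT = 7
    TARGETED_ENGAGEMENT = 8
    COMPROMISING_ASSETS = 9
    ENABLING_LONGEVITY = 10

# The ordering lets tooling identify the earliest observed phase, and hence
# the earliest "upstream" point at which an operation could be disrupted.
observed = [Phase.TARGETED_ENGAGEMENT, Phase.DISGUISING_ASSETS]
print(min(observed).name)  # DISGUISING_ASSETS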
Authors’ note: all case-specific examples referenced in the following sections are drawn from public reporting.
PHASE 1: ACQUIRING ASSETS
This refers to any instance in which an operation acquires or sets up an asset or capability.43 Such assets can range from IP and email addresses to social media accounts and malware to physical locations in a city.
For example, as Meta reported in April 2022,44 the hybrid cyber espionage and influence operation from Azerbaijan acquired commodity surveillanceware for Android and publicly available hash-cracking tools. An Iranian cyber-espionage operation that Meta disrupted early in 2022 created a hitherto unknown strain of malware dubbed “HilalRAT.”45
The original troll farm, the Russian Internet Research Agency, started out in 2013 by renting office space in Saint Petersburg, and it “purchased credit card and bank account numbers from online sellers.”46 A successor operation in early 2020 acquired a building in Ghana as a base of operations.47 An Iranian influence operation first reported by FireEye created a number of purported news websites to spread its message.48 Many scams register front businesses to gather and launder their proceeds.
Examples of asset acquisition within the Online Operations Kill Chain:
Acquiring encrypted email addresses
Acquiring social media assets
Registering businesses
Renting office space
Registering web domains
PHASE 2: DISGUISING ASSETS
This tactic covers any action an operation uses to make its assets look authentic. This can range from stealing profile pictures from celebrities to creating deeply backstopped personas across multiple social media platforms and websites.
For example, many operations have sought to disguise their fake accounts by giving them profile pictures likely created from freely available websites using GANs.49 Some sexual predators pose as adolescents in their online engagements with potential victims.50 An Iranian cyber espionage operation that Meta disrupted in July 202151 ran cross-platform, backstopped accounts that posed as recruiters, defense and aerospace employees, journalists, medical staff, and even an aerobics instructor.52 Many scammers have impersonated military officers.53
Asset disguise is an essentially static tactic: the threat actor selects a persona of greater or lesser sophistication and maintains it with more or less regularity. This is distinct from efforts to evade detection, described below, which are an ongoing, often repetitive practice.
Examples of asset disguise within the Online Operations Kill Chain:
Using StyleGAN 2 profile pictures
Impersonating real people or organizations
Posing as fictional media outlets
Using remote infrastructure appropriate to the target country
Backstopping personas across multiple platforms
PHASE 3: GATHERING INFORMATION
This covers any effort an operation makes to gather information, whether manually or by automation. It includes not only manual or scaled cyber reconnaissance techniques, scraping, and accessing databases of stolen passwords but also using open-source registers of marine or air traffic, searching corporate registries, and viewing potential targets’ social media profiles.
Much of this activity happens out of the public eye and is primarily visible to platforms, data system managers, companies, and law enforcement. For example, an agent for Chinese intelligence in the United States used “various social media sites” to research potential recruits from 2015 to 2020, according to the U.S. Department of Justice (DOJ).54 Also according to the DOJ, the Internet Research Agency tracked “certain metrics” of American social media groups, including “the group’s size, the frequency of content placed by the group, and the level of audience engagement with that content, such as the average number of comments or responses to a post.”55 In 2021, Meta disrupted seven providers of abusive commercial services that targeted journalists, dissidents, critics of authoritarian regimes, families of opposition figures, and human rights activists with surveillance-for-hire techniques.56
Examples of information gathering within the Online Operations Kill Chain:
Using commercially available surveillance-for-hire tools
Using open-source flight tracking data
Searching for targets on social media platforms
Scraping public information
Monitoring trending topics
PHASE 4: COORDINATING AND PLANNING
This covers any method an operation uses to coordinate and plan its activity. This can include both overt and covert coordination and both manual techniques and automation.
For example, an anti-vaccine network that Meta disrupted in France and Italy in late 2021 used Telegram channels to coordinate and train people in online harassment.57 Some of this coordination was exposed by open-source researchers.58 Right-wing activists in 2016 were reported to be using direct message chat rooms on Twitter to coordinate their targeting and their use of bots.59 Left-wing activists at the Alabama special election in 2017 used publicly viewable spreadsheets to coordinate their supporters’ posting;60 a Mexican operator showed on video how he used a spreadsheet to coordinate automated Twitter activity in 2018.61
Examples of coordination and planning within the Online Operations Kill Chain:
Coordinating via public posts
Training recruits in private groups
Coordinating using encrypted apps
Publishing lists of targets and hashtags
Automating posting across multiple accounts
PHASE 5: TESTING PLATFORM DEFENSES
Some operations test the limits of online detection and enforcement by sending a range of content with varying degrees of violation and observing which items are detected.
For example, the Russian military intelligence unit that targeted Hillary Clinton’s presidential campaign servers in 2016 sent test spearphishing emails as part of its preparation.62 Hacking groups may upload their own malware to a malware-scanning website like VirusTotal to see whether it is detected. Operations that exchange or post violating content, such as hate speech or sexually explicit imagery, may post variations of the same message to see which ones are detected automatically.
Examples of defense testing within the Online Operations Kill Chain:
Sending phishing links to operation-controlled email accounts
Posting A/B variations of violating images
Posting A/B variations of violating texts
Testing own malware using publicly available tools
Posting spam at different rates from different accounts
PHASE 6: EVADING DETECTION
Any repetitive method an operation uses to sidestep online defenses qualifies as evading detection. This can include the use of camouflaged or edited text or images and also technical measures such as routinely changing IP addresses.
For example, one method used by the anti-vaccine operation referenced above was to write the French word “vaccin” as “vaxcin” or “vaxxin” to defeat keyword detection. Journalists have reported that the Boogaloo movement sometimes used the variant spelling Boogalo to evade detection on TikTok.63 A Russian operation nicknamed “Doppelganger” that spoofed the websites of European media outlets geo-restricted the fake sites so that only people in the target countries could view them.64
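From the defender’s side, variant spellings like these can often be caught by fuzzy matching rather than exact keywords. The following is a naive Python sketch using edit distance; the threshold and keywords are illustrative, and a real system would need far more care to avoid false positives.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_variant(word: str, keyword: str, max_edits: int = 2) -> bool:
    """Catch obfuscated spellings within a small edit distance of a keyword."""
    return edit_distance(word.lower(), keyword.lower()) <= max_edits

for variant in ("vaxcin", "vaxxin"):
    print(variant, is_variant(variant, "vaccin"))  # both True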
Examples of evasion within the Online Operations Kill Chain:
Using typos to obfuscate key phrases
Geo-limiting website audiences
Editing images
Routing traffic through virtual private networks (VPNs) or anonymous web browsers like Tor
Using coded language or references
PHASE 7: INDISCRIMINATE ENGAGEMENT
This tactic includes any form of posting or engagement in which the operation makes no apparent effort to reach a particular audience. For example, spammers who use fake accounts to share posts to their own timelines, or operations that post content on their own websites and do not otherwise promote it, would count as indiscriminate engagers. Often, indiscriminate engagement is characterized by what operations do not do: an absence of any discernible effort to reach an audience. In effect, it is a “post and pray” strategy: the operation drops its content onto the internet and leaves it to users to find.
For example, the Chinese operation Spamouflage primarily posted on YouTube, Twitter, and Facebook.65 It used large numbers of accounts to post pro-China or anti-Western videos interspersed with innocuous landscapes and sayings, but its accounts often did so without any attempt—such as hashtags or @-mentions—to attract an audience. Much of the activity of the Russian operation Secondary Infektion followed a “post and pray” approach—for example, posting a blog about politics in Europe on a forum dedicated to the civil service in Pakistan.66 The Doppelganger operation sometimes made comments about the Ukraine war in response to posts about sport or fashion.
Spam operations often fall into this category too. Networks that use one fake account to post content on a social media platform and then use other fakes to share the original post to their own timelines may make the original post appear more popular than it really was, but they are not taking any meaningful steps to reach an authentic audience.
Examples of indiscriminate engagement within the Online Operations Kill Chain:
Publishing content on web forums inappropriate to the subject matter
Replying to posts with no relevance to the subject matter
Publishing on operation-controlled websites only
Posting to operation-controlled social media timelines only
Using operation-controlled assets to comment on posts by other operation-controlled assets, where none of the assets has authentic followers
PHASE 8: TARGETED ENGAGEMENT
Targeted engagement, by contrast, covers any sort of method an operation uses to plant its content in front of a specific audience. It can include, for example, advertising, mentioning or replying to a target account, spearphishing, or even emailing real people and trying to trick them into becoming part of the operation.
There are many examples of targeted engagement. The Russian Internet Research Agency made heavy use of ads in 2015 and 2016;67 in 2020, it hired real people to write for it68 and even to run ads in the United States on its behalf.69 Russian military intelligence used social media messaging and email to communicate with people in the United States, including reporters, in 2016.70 An Iranian operation that focused on Scottish independence in late 2021 used independence-themed hashtags on many of its posts.71 A previously unreported Iranian hacking group that Meta disrupted in early 2022 used fake “job recruitment” personas to message and email its targets.72 This group created fake interview and chess apps, which would only deliver the malware payload after the targets interacted with the attacker in real time. In 2021, Google revealed that North Korean actors had posed as security researchers to lure other researchers into sharing vulnerabilities and exploit code.73
Targeted engagement is an important late-stage tactic for researchers to study, because it is the area where operations are likely to display their most distinctive combination of approaches. For journalists and researchers, studying it also doubles as essential security awareness training, helping them recognize when they or their colleagues have become targets.
Examples of targeted engagement within the Online Operations Kill Chain:
Running ads
Using hashtags appropriate to the target audience
Emailing potential victims or recruits
Submitting operation material to authentic news outlets
Directing harassment groups to specific people or posts
PHASE 9: COMPROMISING ASSETS
An operation that attempts to access or take over accounts or information is considered to be compromising assets. Espionage actors are the primary culprits here, but scammers and influence operations can also compromise assets under some circumstances.
Social media asset compromise can cover, for example, password spraying, spearphishing, a variety of social engineering techniques, device compromise, and access via email compromise, as in the case of the espionage and influence operation known as Ghostwriter.74 It can also cover incidents when threat actors convince the administrators of pages or groups to make them administrators, too, and then use their new privileges to remove the other administrators from the page or group in question.75 And it can cover compromises of third-party apps, which give the threat actor access to high-profile accounts.76
Examples of compromise within the Online Operations Kill Chain:
Phishing email login credentials
Using compromised email accounts to access social media accounts
Socially engineering victims to hand over credentials
Acquiring administrative privileges on social media assets
Installing malware on victim servers
PHASE 10: ENABLING LONGEVITY
Finally, operations that take steps to survive takedown, or to prolong their activity after exposure, are considered to be enabling longevity. Many publicly documented operations have responded to disruption by attempting to adapt their TTPs and restore their presence on different platforms: this is why one use of the kill chain can be to compare different stages of the same operation, to analyze any forced adaptation measures and develop countermeasures.
For example, Spamouflage responded to the takedown of one of its Twitter personas (“francisw”) by acquiring preexisting accounts on the platform, giving them the persona’s name and profile picture, and returning to posting with the explicit message, “This is my new account.”77 When Meta blocked the first set of spoofed domains created by Doppelganger, the operation created hundreds of new domains to try to redirect people to the spoofed sites.78 An Iranian operation known as IUVM responded to the loss of its social media assets by creating new fakes to spread its imagery.79 After Russian military intelligence’s “Alice Donovan” persona was exposed, it emailed at least one outlet that had published its work to falsely claim that “she” had deleted “her” Facebook account, but the account continued posting on Twitter.80 As the latter example shows, operations may also spread themselves across platforms, partly in the hope that at least some accounts may evade enforcement.
Attempts to prolong the longevity of an operation can take unusual forms. In 2018, the Internet Research Agency had approximately one hundred Instagram accounts taken down shortly before the U.S. midterm elections. It responded by falsely claiming that those accounts were only the tip of the iceberg and that its operation had already thrown the elections, engaging in what we call “perception hacking.” The attempt was met with ridicule, but it remains an example of trying to turn a takedown into a communications opportunity.81
Examples of enabling longevity within the Online Operations Kill Chain:
Replacing disabled accounts with new ones using the same persona
Changing email addresses
Creating new web domains that redirect to old ones
Deleting logs and other evidence
Weaponizing a disruption to claim that it was part of the plan all along
After longevity: The daisy-chain effect. One recurring question when investigating particularly persistent threat actors is: at what point should sufficiently determined persistence be considered a new operation? Many of the more persistent threat actors exhibit what could be thought of as a daisy-chain effect, in which the late-stage elements of one operation segue into the early-stage elements of a new one, and any distinction between the two is largely arbitrary.
For the sake of practicality, we consider that an operation can be treated as “new” if it changes the majority of its procedures in the first phases of the kill chain: asset acquisition and disguise. For example, a harassment network that reconstitutes after disruption by setting up accounts on the same IP addresses and reusing the visual branding of its first iteration would not count as a new operation. By contrast, when individuals associated with past activity by the Internet Research Agency began operating in Ghana in early 2020, they used entirely new physical and online infrastructure, disguised their operation as a local nongovernmental organization, and created a website and blogs—as well as social media accounts—to backstop the deception.82 This showed enough variation to qualify as a new operation.
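This rule of thumb can be sketched directly, again assuming records are kept as phase-to-procedure mappings as in the earlier examples: compare the early-phase procedures of the two iterations and treat the later one as new only if a majority have changed.

EARLY_PHASES = ("acquiring assets", "disguising assets")

def is_new_operation(old: dict[str, set[str]], rebuilt: dict[str, set[str]]) -> bool:
    """Apply the majority-change rule to the first two kill chain phases."""
    old_early = set().union(*(old.get(p, set()) for p in EARLY_PHASES))
    new_early = set().union(*(rebuilt.get(p, set()) for p in EARLY_PHASES))
    if not new_early:
        return False  # no early-phase observations yet; withhold judgment
    return len(new_early - old_early) / len(new_early) > 0.5

# The harassment example above: same IP block, same branding -> same operation.
old = {"acquiring assets": {"accounts on known IP block"},
       "disguising assets": {"original visual branding"}}
rebuilt = {"acquiring assets": {"accounts on known IP block"},
           "disguising assets": {"original visual branding"}}
print(is_new_operation(old, rebuilt))  # False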
APPENDIX: CASE STUDIES—THE ONLINE OPERATIONS KILL CHAIN IN USE
To illustrate how the kill chain can be used, the following case studies apply the Online Operations Kill Chain to operations that have been publicly reported in unusual detail: the hacking and leaking of Clinton campaign emails in 2016, known as “DCLeaks,” by Russian military intelligence (the Main Directorate of the General Staff of the Armed Forces, or GRU); the “PeaceData” website run by the Internet Research Agency in 2020; and the “V_V” anti-vaccine harassment movement that Meta took down in 2021.
The main sources for the GRU’s hack-and-leak operation in 2016 are the U.S. DOJ’s indictment of the suspected hackers83 and its redacted report into Russian interference.84 CounterPunch’s investigation into the “Alice Donovan” persona is a trove of information around “her” publishing activity.85 Sources for the PeaceData operation include the original takedown announcements by Facebook86 and Twitter;87 the simultaneous report by Graphika based on the takedown;88 and victim testimonies published by Reuters,89 the Daily Beast,90 the New York Times,91 and New Zealand news site newsroom.co.nz.92 The main sources for the V_V takedown are Meta’s takedown announcement93 and the in-depth research conducted by Graphika.94
The level of detail in the DOJ’s reporting gives us a rare opportunity to include in a public analysis details that would typically be private or inaccessible, such as server acquisition, financial transactions, and recruitment emails. The PeaceData and V_V cases give a more typical illustration of what can be achieved with open-source methods.
DCLEAKS AND ALICE DONOVAN
Acquiring assets
Setting up email addresses (yandex.com, mail.com, gmail.com, aol.fr)
Leasing server in target country
Leasing server in third country (Malaysia)
Leasing computer in target country
Acquiring cryptocurrency wallet
Acquiring VPN account
Acquiring cloud computer account
Acquiring link-shortening account
Acquiring social media accounts (Facebook, Twitter, Pinterest)
Creating malware (X-Agent, X-Tunnel)
Registering websites
Setting up blog
Setting up remote middleman server
Disguising assets
Stealing profile pictures
Creating fictional personas (Alice Donovan, DCLeaks, Guccifer 2.0)
Backstopping personas across platforms (Facebook, Twitter, Pinterest, websites, blogs)
Attributing own activity to external organization (claiming DCLeaks was a “Wikileaks sub-project”)
Spoofing sender email address in spearphishing attacks
Creating phishing domains resembling real ones (accounts-qooqle.com, account-gooogle.com)
Creating email address one letter away from real person’s name
Gathering information
Researching victims on social media
Searching for open-source information about victims’ computer networks
Querying victim IP configurations to identify connected devices
Searching victim devices for keywords in files
Searching for translations
Copying articles by real authors
Coordinating and planning
Coordinating through military chain of command
Coordinating between distinct units (cyber units 26165 and 74455)
Testing defenses
Testing malware ability to connect to target
Testing ability to compress and exfiltrate data from target
Evading detection
Using link-shortening tools to obfuscate malware links
Using middleman server to obfuscate data exfiltration
Using compression tools to conceal scale of data exfiltration
Registering web domain under privacy protection
Indiscriminate engagement
Posting content on a blog hosted by WordPress
Targeted engagement
Sending malware to spearphishing targets by email
Submitting articles to news websites by email
Sending hacked content to unwitting individuals by email
Contacting news websites by direct message
Sending hacked content to unwitting individuals by direct message
Posting hacked content on password-protected site
Publishing hacked content on website on a daily basis
Promoting hacked content on social media
Laundering hacked content through external organization
Curating and copying content written by genuine authors
Compromising assets
Spearphishing target credentials
Using stolen credentials to access victim server
Installing malware on victim server
Logging keystrokes
Taking screenshots
Exfiltrating data via middleman server
Enabling longevity
Deleting logs and files
Searching for open-source releases about the hackers’ tools
Replacing phishing infrastructure with new phishing site (actblues[.]com)
Using fake persona to deny public attribution (Guccifer 2.0)
Engaging with editors after exposure to proclaim innocence (Alice Donovan)
Claiming to have self-deleted social media accounts that were actually taken down, arguing this was “for safety reasons” (Alice Donovan)
Removing bylines of exposed fake personas from websites controlled by the operation, but leaving the articles up (Alice Donovan/Inside Syria Media Centre)
PEACEDATA
Acquiring assets
Setting up email addresses on encrypted domain (Proton Mail)
Setting up email addresses on own domain (peacedata.net)
Acquiring online payment account (PayPal)
Acquiring social media accounts (Facebook, Twitter, WhatsApp, LinkedIn, UpWork, Guru)
Acquiring inauthentic friends/followers
Setting up websites (peacemonitor.com, peacedata.net)
Disguising assets
Using GAN-generated profile pictures
Running inauthentic media brand
Running fake personas
Pretending to be located in third countries
Backstopping personas across platforms (Facebook, Twitter, LinkedIn, website, author bylines, emails)
Giving fake personas specific roles within fake brand (such as recruiting, editor, or deputy editor)
Gathering information
Copying news articles from authentic sites
Searching for freelance contributors on social media
Searching for job-listing sites appropriate to target audience
Coordinating and planning
Coordinating using encrypted email (Proton Mail)
Coordinating using encrypted messaging (WhatsApp)
Creating fake publishing partnership with external websites
Evading detection
Recruiting unwitting contributor in America to run political Facebook ads
Recruiting unwitting native-language authors
Recruiting professional translator
Moving communications from social media messaging to email
Indiscriminate engagement
No evidence (engagement was primarily targeted)
Targeted engagement
Running ads for freelance writers on job forums
Cold messaging potential contributors on LinkedIn
Emailing potential contributors
Direct messaging potential contributors on social media
Recruiting contributors in target countries
Paying contributors via PayPal
Sharing links into politically aligned Facebook groups
Asking unwitting contributors to amplify publications to their own networks
Adding political slant to some articles
Compromising assets
No evidence
Enabling longevity
Giving unwitting individuals admin rights on social media assets
Denying exposure in public statement
Denying exposure in private communications to contributors
V_V
Acquiring assets
Acquiring emails
Acquiring phone numbers
Acquiring authentic social media accounts (Facebook, Telegram, Instagram, YouTube, TikTok, VKontakte)
Acquiring duplicate social media accounts
Acquiring inauthentic social media accounts
Disguising assets
Branding assets with V_V logo
Coordinating and planning
Creating public hierarchy within organization
Training new recruits using social media posts
Coordinating harassment in private channels
Coordinating on encrypted messaging apps (WhatsApp, Signal)
Coordinating posting assignments (for example, memes, links, and videos)
Coordinating via shared hashtags
Allocating a rank/number to each member
Evading detection
Scrambling letters in key words (“vaccine”/“vaxcine”)
Replacing letters with numbers in key words (“v4ccine”)
Replacing letters with emojis in key words (√ instead of V)
Switching channels from public to private and back at set times
Indiscriminate engagement
Distributing printed flyers through residents’ physical mailboxes
Targeted engagement
Mass down-voting of targets’ posts
Mass commenting on targets’ posts
Mass posting hashtags
Defacing targets’ personal photos with Nazi imagery
Inviting users of other platforms to join Telegram
Tagging friends to attract them to branded content
Mass voting on online polls
Graffiti on target buildings
Compromising assets
Mass booking genuine vaccination appointments and then cancelling them at the last minute
Enabling longevity
Operating across platforms to take advantage of different enforcement regimes
Carnegie’s Partnership for Countering Influence Operations is grateful for funding provided by the William and Flora Hewlett Foundation, Craig Newmark Philanthropies, the John S. and James L. Knight Foundation, Microsoft, Facebook, Google, Twitter, and WhatsApp. The PCIO is wholly and solely responsible for the contents of its products, written or otherwise. We welcome conversations with new donors. All donations are subject to Carnegie’s donor policy review. We do not allow donors prior approval of drafts, influence on selection of project participants, or any influence over the findings and recommendations of work they may support.