
Vaccine Misinformation Part 1: Misinformation Attacks as a Cyber Kill Chain

09 November 2021

door Swathi Nagarajan

The open and wide-reaching nature of social media platforms has led them to become breeding grounds for misinformation, with COVID-19 vaccine information being the most recent casualty. Misinformation campaigns launched by various sources for different reasons, but working towards a common objective – creating vaccine hesitancy – have succeeded in triggering and amplifying public cynicism and sowing doubt about the safety and effectiveness of vaccines. This blog post discusses one of our first attempts within NCC Group to examine misinformation from the perspective of a security researcher, as part of our broader goal of using techniques inspired by digital forensics, threat intelligence, and other fields to study misinformation and how it can be combatted.

Developing misinformation countermeasures requires a multidisciplinary approach. The MisinfoSec Working Group – part of the Credibility Coalition, an interdisciplinary research committee which aims to develop common standards for information credibility – is developing a framework to understand and describe misinformation attacks using existing cybersecurity principles [1].

In this blog post, which is part 1 of a series, we take a page out of their book and use the Cyber Kill Chain attack framework to describe the COVID-19 vaccine misinformation attacks occurring on social media platforms like Twitter and Facebook. In the next blog post, we will use data from studies which analyze the effects of misinformation on vaccination rates to perform a formal risk analysis of vaccine misinformation on social media.

An Overview of the Cyber Kill Chain

The Cyber Kill Chain is a cybersecurity model which describes the different stages of a cyber-based attack. It was developed based on the “Kill Chain” model [2] used by the military to describe how enemies attack a target. By breaking the attack down into discrete steps, the model helps identify vulnerabilities at each stage and develop defense mechanisms that thwart attackers or force them to make enough noise that they can be detected.
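
To make the model concrete for defenders, the sketch below (our own illustration in Python, not part of the Lockheed Martin specification) represents the seven stages as an ordered structure and picks the earliest stage at which observed attacker activity could be disrupted:

```python
# A minimal sketch of the kill chain as an ordered structure.
from enum import Enum

class KillChainStage(Enum):
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

def earliest_disruption_point(observed: set) -> KillChainStage:
    """Return the earliest stage with observed attacker activity.

    Disrupting the chain early forces the attacker to start over and
    make more noise, which is the core defensive idea of the model."""
    return min(observed, key=lambda stage: stage.value)

# Example: activity observed at delivery and installation; defenders
# should prioritize disrupting delivery.
print(earliest_disruption_point({KillChainStage.DELIVERY,
                                 KillChainStage.INSTALLATION}))
```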

Vaccine Misinformation Attacks as a Cyber Kill Chain

In this section, we use the Cyber Kill Chain defined by Lockheed Martin [3] to describe how misinformation attacks occur on social media. The goal is to study these attacks from a cybersecurity perspective in order to understand them better and come up with solutions by addressing vulnerabilities in each stage of the attack.

In this scenario, we assume that the “attackers” or “threat actors” are individuals or organizations whose objective is to create a sense of confusion about the COVID-19 vaccines. They could be motivated by a variety of factors like money, political agenda, religious beliefs, and so on. For instance, evidence was recently found to indicate the presence of Russian disinformation campaigns on social media [4][5] which attack the Biden administration’s vaccine mandates and sow distrust in the Pfizer and Moderna vaccines to promote the Russian Sputnik vaccine. Anti-vaccine activists, alternative health entrepreneurs and physicians have also been found to be responsible for a lot of the COVID-19 vaccine hoaxes circulating on social media sites including Facebook, Instagram and Twitter [6].

We also assume that the “asset” at risk is the COVID-19 vaccine information on social media, and the “targets” range from government and health organizations to regular users of social media.

Step 1: Reconnaissance

There are two types of reconnaissance:

  1. Passive reconnaissance: The attacker acquires information without interacting with the target actors. Since social media platforms are public, this kind of recon is easy for attackers to perform. They can monitor the activity of users with public profiles, follow trends, and study and analyze existing measures in place for thwarting misinformation spread. They can also find existing misinformation created either intentionally or as a joke (spoofs or satire), which they could later use out of context.
  2. Active reconnaissance: The attacker interacts with the target to acquire information. In this case, it could mean connecting with users who have private profiles by impersonating a legitimate user, or using phishing tactics to learn details about the targets, such as their political affiliation, vaccine opinions, and vaccination status (if not already publicly available). The attacker could also create an account to snoop around and study other user activity and the social media platform's behavior, and to test the various misinformation prevention measures on a small scale.

Step 2: Weaponization

This stage involves the creation of the attack payload, i.e., the misinformation. Payloads could be advertisements, posts, images, or links to websites which contain misinformation. The misinformation could be blatantly false information, semantically incorrect information, or truths taken out of context. For example, conspiracy theories like “The COVID-19 vaccines contain microchips that are used to track people” [7] are blatantly false. Blowing the rare but serious side effects of the vaccines out of proportion is an example of misinformation as truth out of context. Deepfakes can also be used to create convincing misinformation. Spoof videos or misinformation generated accidentally that were already online could also be used by attackers to further their own cause [8].

A lot of social media platforms have been implementing tight restrictions on COVID-19 related posts, so it would be prudent for the attacker to create a payload which can circumvent those restrictions for as long as possible. 

Step 3: Delivery

Once the misinformation payload is ready, it needs to be deployed onto social media to reach the target people. This involves creating fake accounts, impersonating legitimate user accounts, making parody accounts/hate groups, and using deep undercover accounts that were created during the recon stage. The recon stage also reveals users or influencers whose beliefs align with the anti-vaccine misinformation which the attacker is attempting to spread. The attackers could convince these users – either through monetary means or by appealing to their shared objective – to spread the misinformation, i.e., deliver the payload.

Step 4: Exploitation

The exploitation stage in this scenario refers to the misinformation payloads successfully bypassing misinformation screeners used by the social media platforms (if they exist) and reaching the target demographic, i.e., anti-vaxxers, people who are on the fence about the vaccine, and so on.

Despite several misinformation prevention measures being used by various social media platforms (see Table 1), there is still a significant presence and spread of misinformation online. Spreading misinformation on a large scale overwhelms the platforms [13] and adds complexity to their screening processes, since screening often requires manual intervention and the manpower available is small compared to the large volume of misinformation. A study found that some social media sites allowed advertisements containing coronavirus misinformation to be published [14], suggesting that some countermeasures may not always be effectively implemented.

Facebook (and its apps) [9]
– Remove COVID-19 related misinformation that could contribute to imminent physical harm, based on guidance from the WHO and other health authorities.
– Fact-checking to debunk false claims.
– Strong warning labels and notifications for posts containing misinformation which does not directly result in physical harm.
– Labelling forwarded/chain messages and limiting the number of times a message can be forwarded.
– Removing accounts engaging in repeated malicious activities.

Twitter [10]
– Removing tweets containing the most harmful COVID-19 related misinformation.
– Applying labels to tweets that may contain misleading information about COVID-19 vaccines.
– A strike-based system for locking or permanently suspending accounts that violate the rules and policies.

Reddit [11]
– User-aided content moderation.
– Banning or quarantining Reddit communities which promote COVID-19 denialism.
– Building a new reporting feature for moderators, allowing them to better provide a signal when they see community interference.

TikTok [12]
– Fact-checking.
– Removing content, banning accounts, and making it more difficult to find harmful content, like misinformation and conspiracy theories, in recommendations or search.
– Threat assessments.

Table 1: Popular social media platforms and their measures to combat misinformation
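
To illustrate one of the measures in Table 1, here is a minimal sketch of a forward cap of the kind Facebook's messaging apps apply to chain messages. The thresholds are invented for illustration, not any platform's actual values:

```python
# Hypothetical thresholds for a forward cap on chain messages, loosely
# modelled on the labelling/forward-limit measure in Table 1.
LABEL_THRESHOLD = 2   # label a message as widely forwarded after this many hops
FORWARD_CAP = 5       # refuse further forwarding after this many hops

def forward_message(body: str, forward_count: int):
    """Forward a message, enforcing labelling and a hard cap on hops."""
    if forward_count >= FORWARD_CAP:
        raise PermissionError("forwarding limit reached for this message")
    new_count = forward_count + 1
    label = " [forwarded many times]" if new_count >= LABEL_THRESHOLD else ""
    return body + label, new_count
```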

The other vulnerability which attackers try to exploit has more to do with human nature and psychology. A study by YouGov [15] showed that 20% of Americans believe that it is “definitely true” or “probably true” that there is a microchip in the COVID-19 vaccines. The success of this conspiracy theory has been attributed to the human coping mechanism of trying to make sense of things that cannot be explained, or of situations marked by uncertainty [16]. “Anti-vaxxers” have been around for a long time, and with the COVID-19 vaccines there is an even deeper sense of mistrust because of the short timeframe in which they were developed and tested. The overall handling of the COVID-19 pandemic by some government organizations has also been disappointing to the public. Attackers use this sense of chaos and confusion to their advantage, stirring the pot further with their misinformation payloads.

Step 5: Installation

The installation stage of the cyber kill chain usually refers to the installation of malware by attackers on the target system after delivering the payload. With respect to vaccine misinformation attacks, this stage refers to rallying groups of people and communities towards the attacker’s cause, either online, in physical locations, or both. These users act as further carriers of the misinformation across various social media platforms, reinforcing it through reshares, reposts, retweets, and the like, causing the misinformation to gain attention and popularity.

Step 6: Command and Control

Once the attacker gains control over users who have seen and interacted with the misinformation, they can incite further conflict in the conversations surrounding it, such as in the comments on a post. They can also manipulate users into arranging or attending anti-vax rallies or protesting vaccine mandates, causing a state of civil unrest.

Step 7: Actions on Objectives

It is safe to assume that the objective of attackers performing vaccine misinformation attacks is usually to lower vaccination rates. This objective can also be extended further and tied to other motives. For example, foreign state-sponsored misinformation attacks targeting US-developed vaccines, such as the Moderna and Pfizer mRNA vaccines, could have been created in order to suggest the superiority of vaccines developed in other nations.

It is important to realize that the purpose of misinformation campaigns is not always to convince people that things are a certain way – rather, it can simply be to sow doubt in a system, or in the quality of information available, making people more confused, overwhelmed, angry, or afraid. The sheer volume of misinformation available online has caused a state of hesitancy and cynicism about the safety and effectiveness of the vaccines, even among people who are not typically “anti-vax”. Attackers generally aim to plant seeds of doubt in as many people’s minds as possible rather than attempting to convince them not to take the vaccine, since confusion is often sufficient to reduce vaccination rates.

Defenses and Mitigations to the Misinformation Kill Chain 

The use of the Cyber Kill Chain allows us to not only consider the actions of attackers in the context of information security, but to also consider appropriate defensive actions (both operational and technical). In this section, we will elaborate on the defensive actions that can be taken against various stages in the Misinformation Kill Chain. 

In keeping with the well-known information security concepts of layered defense [17] and defense in depth [18], the countermeasures should support and reinforce each other so that, for example, if an attacker is able to bypass technical controls in order to deploy an attack, then staff should step up to respond appropriately and follow an incident response procedure. 

The increasing concern about misinformation on social media has resulted in studies by governments in several countries [19], which provide suggestions for combatting the issue, and indications of which countermeasures have been effective [20][21][22]. Some of the measures are applicable at multiple stages of the kill chain. For example, the labelling of misinformation is intended to make a user less likely to read it, less likely to share it and less likely to believe it. 

The following countermeasures can be applied at different stages of the kill chain, to help stem the propagation of misinformation and to limit its effectiveness: 

Step 1: Reconnaissance 

  1. Limiting the availability of useful metadata, tracking/logging site visits, and reducing the data that is visible without logging in, in order to limit information gathering about both individuals and aggregate populations of individuals (see the sketch after this list).
  2. Limiting group interaction for non-members in order to restrict anonymous or non-auditable reconnaissance.
  3. Implementing effective user verification during account creation to prevent fake accounts.
  4. Educating users about spoofing attacks and encouraging them to keep their profiles private and accept requests cautiously.
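
As a minimal sketch of countermeasure 1 above, the following hypothetical profile-gating function returns only a small allow-list of fields to unauthenticated viewers, limiting what a passive attacker can scrape without leaving an auditable login trail. All field names below are invented for illustration:

```python
# Hypothetical profile record; real platforms expose far richer metadata.
PROFILE = {
    "display_name": "Jane Doe",
    "bio": "Nurse, mother of two",
    "vaccination_status": "fully vaccinated",
    "followers": ["..."],
}

# Fields an unauthenticated visitor may see; everything else requires a
# login, which makes large-scale reconnaissance auditable.
PUBLIC_FIELDS = {"display_name"}

def visible_profile(profile: dict, authenticated: bool) -> dict:
    """Return only the fields this viewer is entitled to see."""
    if authenticated:
        return dict(profile)
    return {key: value for key, value in profile.items() if key in PUBLIC_FIELDS}
```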

Step 2: Weaponization 

  1. Using effective misinformation screeners to block users from creating and posting ads, images, videos or posts with misinformation (a screening sketch follows this list).
  2. Labelling misinformation or grading it according to reliability (based on effective identification), in order to allow users to make a more informed decision on what they read. 
  3. Removing misinformation (based on effective identification) to prevent it from reaching its target. 
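
A minimal sketch of the screening and grading ideas above, using a toy TF-IDF plus logistic regression classifier. The two training posts are invented placeholders; a real screener would need a large labelled corpus and human review of borderline cases:

```python
# Toy misinformation screener: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "vaccines contain tracking microchips",                        # placeholder misinformation
    "clinical trial results published in peer-reviewed journal",   # placeholder reliable post
]
train_labels = [1, 0]  # 1 = misinformation, 0 = reliable

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(train_posts, train_labels)

def grade(post: str) -> str:
    """Map the model's misinformation probability to a coarse action."""
    p = screener.predict_proba([post])[0][1]
    if p > 0.8:
        return "blocked pending human review"
    if p > 0.5:
        return "labelled as potentially misleading"
    return "allowed"
```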

Step 3: Delivery 

  1. Recognition or identification (followed by either removal or marking) of misinformation using machine learning (ML), human observation or a combination of both.
  2. Recognition or identification (followed by either removal or suspension) of hostile actors using machine learning (ML), human observation or a combination of both. 
  3. Identification and removal of bots, especially when used for hostile purposes (a heuristic sketch follows this list).
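
A minimal sketch of the heuristic side of bot identification, with invented thresholds; production systems combine many more signals with ML and human review:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # time since account creation
    posts_per_day: float    # average posting rate
    duplicate_ratio: float  # fraction of posts that are near-duplicates

def bot_score(account: Account) -> float:
    """Crude heuristic score in [0, 1]; the thresholds are invented for
    illustration, and real systems combine many more signals with ML."""
    score = 0.0
    if account.age_days < 30:
        score += 0.3
    if account.posts_per_day > 50:
        score += 0.4
    if account.duplicate_ratio > 0.8:
        score += 0.3
    return score

# Accounts scoring near 1.0 would be queued for human review or a
# challenge (e.g., CAPTCHA) rather than removed automatically.
```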

Step 4: Exploitation 

  1. Public naming of hostile actors in order to limit acceptance of their posts and raise awareness of their motivations and credibility.
  2. Encouraging members of the medical field to combat the large volumes of misinformation with equally large volumes of valid and thoroughly vetted information about the safety and effectiveness of, in this case, the COVID-19 vaccines [23].
  3. Analyzing and improving the effectiveness of the misinformation prevention measures on social media platforms.
  4. Demanding and obtaining transparency and strong messaging from government organizations.

Step 5: Installation 

  1. Labelling misinformation or grading it according to reliability (based on effective identification).
  2. Tracking and removing misinformation (based on effective identification) in order to control its spread (a spread-tracking sketch follows this list).
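
A minimal sketch of tracking spread: if reshare relationships are kept as a graph, a breadth-first walk from a post flagged as misinformation finds every derived copy, so that labels or removals cover those copies too. The graph below is an invented example:

```python
from collections import deque

# Invented reshare graph: post ID -> IDs of posts that reshared it.
RESHARES = {
    "p0": ["p1", "p2"],
    "p1": ["p3"],
    "p2": [],
    "p3": [],
}

def spread(root: str, graph: dict) -> list:
    """Breadth-first walk over reshares: every post reachable from one
    flagged as misinformation, so labels or removals cover copies too."""
    seen, queue, order = {root}, deque([root]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return order

print(spread("p0", RESHARES))  # ['p0', 'p1', 'p2', 'p3']
```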

Step 6: Command and Control 

  1. Removal of groups or users with a record of repeatedly posting misinformation.
  2. Suspending accounts to encourage better behavior, in the case of minor transgressions (a strike-based sketch follows this list).
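
A minimal sketch of an escalating, strike-based enforcement policy like the one Twitter describes in Table 1; the thresholds here are invented for illustration:

```python
# Invented thresholds, loosely modelled on the strike system in Table 1.
LOCK_AT = 2     # temporary lock after this many strikes
SUSPEND_AT = 5  # permanent suspension after this many strikes

def enforcement_action(strikes: int) -> str:
    """Escalating response: warn first, then lock, then suspend."""
    if strikes >= SUSPEND_AT:
        return "permanent suspension"
    if strikes >= LOCK_AT:
        return "temporary account lock"
    if strikes >= 1:
        return "warning applied to account"
    return "no action"
```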

Step 7: Actions on Objectives 

  1. Media literacy education – this is not a short-term measure, but it has been reported as very effective in Scandinavia [22] and is proposed as a countermeasure by the US DHS [20]; it increases the resilience of the public to misinformation on social media by teaching them how to identify fake news stories and differentiate between facts and opinions.
  2. Fact checking – a wider presence of easily accessible sources for the general public and for journalists may assist in wider recognition of misinformation and help to form a general habit of checking against a reliable source.
  3. Pro-vaccine messaging on social media – encouraging immunization on social media while emphasizing the immediate and personalized benefits of taking the vaccines, rather than long-term protective or societal benefits, since studies in health communications have shown the former approach to be much more effective than the latter [24]. Studies have also shown that using visual rather than textual means can magnify those benefits [25].

In Part 2 of this blog post series: Risk Analysis of Vaccine Misinformation Attacks

Since social media is an integral part of people’s lives and is often a primary source of information and news, it is safe to assume that it influences vaccination rates and vaccine hesitancy among its users. This in turn affects the ability of the population to achieve herd immunity and increases the number of people who are more likely to die from COVID-19. Several studies have recently attempted to understand and quantitatively measure the effects of vaccine misinformation on vaccination rates. These studies used different approaches and metrics across different social media platforms, but they all reach the same conclusion – misinformation lowers intent to accept a COVID-19 vaccine. In our next post in this series, we will look at the results of these studies in more detail and use them to perform a risk analysis of misinformation attacks.
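
As a preview of the approach, here is a minimal sketch of the classic risk formulation, risk as likelihood multiplied by impact. The input values below are invented placeholders, and Part 2 will ground the analysis in the published study data:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Classic qualitative risk model on a 0-1 scale; Part 2 will ground
    these inputs in published study data."""
    return likelihood * impact

# Invented placeholder values, for illustration only.
exposure_likelihood = 0.6  # chance a user encounters vaccine misinformation
hesitancy_impact = 0.4     # relative effect on intent to vaccinate
print(f"risk score: {risk_score(exposure_likelihood, hesitancy_impact):.2f}")
```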

References:

[1] https://marvelous.ai/wp-content/uploads/2019/06/WWW19COMPANION-165.pdf

[2] https://en.wikipedia.org/wiki/Kill_chain

[3] https://www.lockheedmartin.com/content/dam/lockheed-martin/rms/documents/cyber/LM-White-Paper-Intel-Driven-Defense.pdf

[4] https://www.nytimes.com/2021/08/05/us/politics/covid-vaccines-russian-disinformation.html

[5] https://fortune.com/2021/07/23/russian-disinformation-campaigns-are-trying-to-sow-distrust-of-covid-vaccines-study-finds/

[6] https://www.npr.org/2021/05/13/996570855/disinformation-dozen-test-facebooks-twitters-ability-to-curb-vaccine-hoaxes

[7] https://www.forbes.com/sites/brucelee/2021/05/09/as-covid-19-vaccine-microchip-conspiracy-theories-spread-here-are-some-responses/?sh=65408061602d

[8] https://www.factcheck.org/2021/07/scicheck-spoof-video-furthers-microchip-conspiracy-theory/

[9] https://about.fb.com/news/tag/misinformation/

[10] https://help.twitter.com/en/rules-and-policies/medical-misinformation-policy

[11] https://www.reddit.com/r/redditsecurity/comments/pfyqqn/covid_denialism_and_policy_clarifications/

[12] https://newsroom.tiktok.com/en-us/combating-misinformation-and-election-interference-on-tiktok

[13] https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/

[14] https://www.consumerreports.org/social-media/facebook-approved-ads-with-coronavirus-misinformation/

[15] https://docs.cdn.yougov.com/w2zmwpzsq0/econTabReport.pdf

[16] https://www.insider.com/20-of-americans-believe-microchips-in-covid-19-vaccines-yougov-2021-7

[17] https://www.ibm.com/docs/en/i/7.3?topic=security-layered-defense-approach

[18] http://www.nsa.gov/ia/_files/support/defenseindepth.pdf

[19] https://www.poynter.org/ifcn/anti-misinformation-actions/

[20] https://www.dhs.gov/sites/default/files/publications/ia/ia_combatting-targeted-disinformation-campaigns.pdf

[21] https://www.digitalmarketplace.service.gov.uk/g-cloud/services/101266738436022

[22] https://www.chathamhouse.org/2019/10/eu-us-cooperation-tackling-disinformation

[23] Hernandez RG, Hagen L, Walker K, O’Leary H, Lengacher C. The COVID-19 vaccine social media infodemic: healthcare providers’ missed dose in addressing misinformation and vaccine hesitancy. Hum Vaccin Immunother. 2021 Sep 2;17(9):2962-2964. doi: 10.1080/21645515.2021.1912551. Epub 2021 Apr 23. PMID: 33890838; PMCID: PMC8381841.

[24] https://msutoday.msu.edu/news/2021/ask-the-expert-social-medias-impact-on-vaccine-hesitancy

[25] https://www.pixelo.net/visuals-vs-text-content-format-better/