Last Thursday, BuzzFeed News revealed that an internal Facebook report concluded that the company had failed to prevent the “Stop the Steal” movement from using its platform to subvert the election, encourage violence, and help incite the Jan. 6 attempted coup on the US Capitol.
Titled “Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement,” the report is one of the most important analyses of how the insurrectionist effort to overturn a free and fair US presidential election spread across the world’s largest social network — and how Facebook missed critical warning signs. The report examines how the company was caught flat-footed as the Stop the Steal Facebook group supercharged a movement to undermine democracy, and concludes the company was unprepared to stop people from spreading hate and incitement to violence on its platform.
The report’s authors, who were part of an internal task force studying harmful networks, published the document to Facebook’s internal message board last month, making it broadly available to company employees. But after BuzzFeed News revealed the report’s existence last week, many employees were restricted from accessing it.
“Is there a reason the Workplace Note has been taken down?” one employee wrote on the message board after the report became restricted. “I suspect employees would prefer to read it for themselves and draw their own conclusions.”
“It’s pretty common that critical writing about the company gets removed under some trumped-up excuse if it gains any internal or external traction, it’s not about the public visibility but the morale effects I imagine,” another worker said.
Given the newsworthiness and historical significance of the report and its revelations about the events of Jan. 6, BuzzFeed News is publishing the full text below.
“The authors never intended to publish this as a final document to the whole company,” a Facebook spokesperson said in a statement. “They inadvertently published it to a broad audience and they simply restricted it to the internal working group it was intended for.”
The spokesperson added that it was the authors who restricted access to the report.
The company has defended its work to protect the 2020 election. Last month in testimony before the House Energy and Commerce Committee, Facebook CEO Mark Zuckerberg said that though the company had not caught all election interference before the insurrection, it had “made our services inhospitable to those who might do harm.”
“We are committed to keeping people safe on our services and to protecting free expression, and we work hard to set and enforce policies that meet those goals,” he wrote in prepared comments to that committee. “We will continue to invest extraordinary resources into content moderation, enforcement, and transparency.”
On Tuesday, Monika Bickert, Facebook’s vice president of content policy, is set to testify in a Senate Judiciary Committee hearing on algorithmic amplification on technology platforms alongside executives from YouTube and Twitter.
Here is the full text of Facebook’s internal report. Some graphics were not reproduced due to their technical nature.
Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement
[The Facebook report included a cover image here, featuring a burning US Capitol building and a cartoon corgi dressed as a firefighter.]
TLDR
Intro
Many of us remember election night and the few days following. The satisfaction at having made it past the election without major incident was tempered by the rise in angry vitriol and a slew of conspiracy theories that began to steadily grow. At the time, veterans of 2016 recalled the spike in fear, anger, and uncertainty, and the growth of mega-groups like Pantsuit Nation. We all asked ourselves whether what we were seeing in the wake of the election was the same thing, or something more nefarious. Hindsight is 20/20; at the time it was very difficult to know whether what we were seeing was a coordinated effort to delegitimize the election, or whether it was protected free expression by users who were afraid and confused and deserved our empathy. But hindsight being 20/20 makes it all the more important to look back to learn what we can about the election-delegitimizing movements that grew, spread conspiracy theories, and helped incite the Capitol Insurrection.
The first Stop the Steal Group emerged on election night. It was flagged for escalation because it contained high levels of hate and violence and incitement (VNI) in the comments. The Group was disabled, and an investigation was kicked off, looking for early signs of coordination and harm across the new Stop the Steal Groups that were quickly sprouting up to replace it. From our early signals, it was unclear whether coordination was taking place, or whether there was enough harm to warrant designating the term. It wasn't until later that it became clear just how much of a focal point the catchphrase would be, and that it would serve as a rallying point around which a movement of violent election delegitimization could coalesce.
“Delegitimization” (D14N) as a concept is new territory, both for analysis and policy. Many D14N workstreams were spun up in the wake of election night, but little policy or institutional knowledge existed around the issue. Our research during the US2020 IPOC came from rapid work on topic classifiers, CIRD pipelines, regex and classifier tracking in HELLCAT, and manual analysis via CORGI modeling. We were able to launch demotions and some enforcement directed at the issue, but work remains to develop a firm policy framework for addressing it. In this note we will describe the harms we were later able to observe within the StS movement, how follow-on movements like Patriot Party (PP) were able to grow in its wake, and how we might use what we learned to better capture coordinated harm in the future.
Early Indicators of Harm
From the earliest Groups, we saw high levels of hate, VNI, and delegitimization, combined with meteoric growth rates — almost all of the fastest-growing FB Groups were Stop the Steal Groups during their peak growth. Because we were looking at each entity individually, rather than as a cohesive movement, we were only able to take down individual Groups and Pages once they exceeded a violation threshold. We were not able to act on simple objects like posts and comments because they individually tended not to violate, even if they were surrounded by hate, violence, and misinformation. After the Capitol Insurrection and a wave of Storm the Capitol events across the country, we realized that the individual delegitimizing Groups, Pages, and slogans did constitute a cohesive movement.
Some of our first indicators used off-platform signals: using CORGI fanouts, we found that designated organized hate groups were involved in organizing Storm the Capitol (StC) events and in pushing Stop the Steal. We also found that there was high membership overlap between StS Groups and Proud Boys (a designated DOI org) and militia Groups.
We looked at the content of Groups and Pages, comparing the rates of hate speech, VNI, and DOI references in StS, PP, and StC Groups using the HELLCAT tables, which aggregate a myriad of integrity-based content signals to the complex-entity level. This allowed us to see that StS Groups had considerably more hate, VNI, and references to conspiracies and militias than the average civic Group.
In addition to HELLCAT, we built fast-turnaround classifiers and CIRD pipelines to identify high-risk Groups and other complex entities. These CIRD pipelines were wired to demotions, as well as aggregated to surface high-risk complex entities. Misinfo escalations were also frequent, although the volume far outstripped 3PFC or escalation review capacity. Together, these approaches allowed us to flag individual Groups and Events with high levels of harm for review through HEROCO or the Events queue.
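In spirit, the entity-level aggregation works something like the sketch below: score each post with per-signal classifiers, roll the scores up to the Group level as prevalence rates, and surface Groups whose rates exceed a review threshold. All names, scores, and thresholds here are illustrative stand-ins, not the production HELLCAT or CIRD schema.

```python
# Illustrative sketch of rolling post-level integrity signals up to the
# Group ("complex entity") level. Hypothetical data and thresholds.
from collections import Counter, defaultdict

# Each record: (group_id, {signal_name: classifier_score}).
posts = [
    ("group_a", {"hate": 0.91, "vni": 0.40, "d14n": 0.88}),
    ("group_a", {"hate": 0.10, "vni": 0.75, "d14n": 0.95}),
    ("group_b", {"hate": 0.05, "vni": 0.02, "d14n": 0.10}),
]

SIGNALS = ("hate", "vni", "d14n")

def aggregate_group_signals(posts, threshold=0.7):
    """Share of each Group's posts scoring above `threshold`, per signal."""
    above = defaultdict(Counter)  # group_id -> signal -> posts above threshold
    totals = Counter()            # group_id -> total posts
    for group_id, scores in posts:
        totals[group_id] += 1
        for signal, score in scores.items():
            if score >= threshold:
                above[group_id][signal] += 1
    return {g: {s: above[g][s] / n for s in SIGNALS} for g, n in totals.items()}

def flag_high_risk(rates, min_rate=0.3):
    """Surface Groups where any harmful-signal rate exceeds `min_rate`."""
    return [g for g, sig in rates.items() if any(r >= min_rate for r in sig.values())]

print(flag_high_risk(aggregate_group_signals(posts)))  # -> ['group_a']
```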
These content-based approaches allowed us to observe how harm manifested in the movement as a whole, showing that the terms were steeped in hate and VNI. This helped us see that there was a problem, but network analysis helped us understand coordination in the movement, and how the harm was able to spread as a network. Understanding the growth of the network will help us to better tackle harmful networks in the future.
Coordination
We were able to observe direct coordination for Stop the Steal through investigative work, relying on external sources for leads.
The terms Stop the Steal and Patriot Party were amplified both on platform and off. Ali Alexander and the Kremer sisters repeated the slogans at rallies, and spread them through super Groups like Women4Trump and Latinos for Trump. The Kremer sisters were admins of both Women4Trump and the original Stop the Steal Group. After January 6th, Amy Kremer confirmed on platform that she was an organizer of the Stop the Steal rally that precipitated the Capitol Insurrection.
Ali Alexander worked on and off platform, using media appearances and celebrity endorsements. We also observed him formally organizing with others to spread the term, including with other users who had ties to militias. He was able to elude detection and enforcement with careful selection of words, and by relying on disappearing stories.
This sort of deep investigation takes time, situational awareness, and context that we often don't have. What sort of behavioral signals might we be able to leverage to observe coordination when we lack the time or background for in-depth investigations? What sort of analyses and models might we build to help us identify these networks in the future?
Group Inviters
One way to observe coordination in a movement is by looking at growth hacking. Growth hacking is not always bad: a democratic movement, a movement seeking human rights, or even an advertising campaign may employ legitimate techniques to grow its audience quickly. However, when rapid growth is mixed with the signals of harm we described above, it indicates the spread of harm, and may indicate coordinated harm.
Stop the Steal was able to grow rapidly through coordinated Group invites: 67% of StS joins came through invites. Moreover, these invites were dominated by a handful of super-inviters (users with more than 500 invitees each), with 30% of invites coming from just 0.3% of inviters. In the top StS Group alone, there were 137 super-inviters; of these users, 88 were admins of other StS Groups, suggesting cooperation in growing the movement. These super-inviters showed other indicators of spammy behavior: 73% had bad friending stats, with a friend-request reject rate above 50%; 125 likely obfuscated their home locations; and 73 were members of harmful conspiracy Groups. Inviters also tended to be connected to one another through interactions: they comment on, tag, and share one another's content. At the beginning of January, before the post-insurrection spike in StS and PP Groups, half of all inviters with more than 100 invitees had, in the previous month, engaged with one another either directly (through messaging and tagging) or with one another's content.
[The Facebook report included a graphic here, showing a network of how “most heavy inviters are connected to one another.”]
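A rough sketch of this inviter analysis is below, using made-up invite logs and an interaction graph; the real signals (friending stats, location obfuscation, admin rosters) come from internal tables that are assumed away here.

```python
# Hypothetical sketch: find super-inviters and check whether heavy
# inviters interact with one another. Data is illustrative only.
import networkx as nx
from collections import Counter

# (inviter_id, invitee_id) pairs for joins into StS Groups.
invites = ([("a", f"m{i}") for i in range(600)]
           + [("b", f"n{i}") for i in range(550)]
           + [("c", f"p{i}") for i in range(120)]
           + [("d", "x1"), ("e", "x2")])

invite_counts = Counter(inviter for inviter, _ in invites)
super_inviters = {u for u, n in invite_counts.items() if n > 500}

# Undirected interaction edges (comments, tags, shares, messages).
interactions = nx.Graph([("a", "b"), ("b", "c"), ("d", "e")])

# Heavy inviters (> 100 invitees) connected to another heavy inviter.
heavy = {u for u, n in invite_counts.items() if n > 100}
connected = {u for u in heavy if u in interactions
             and any(v in heavy for v in interactions.neighbors(u))}

print(sorted(super_inviters))                                # ['a', 'b']
print(f"{len(connected)} of {len(heavy)} heavy inviters are connected")  # 3 of 3
```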
This growth occurred despite our attempts to prevent it: the Groups Task Force had previously identified risks around Group inviting during the rapid growth of anti-quarantine Groups. Super-inviters were able to quickly grow new Groups, both allowing the rapid growth of harmful Groups and helping to avoid enforcement as backup Groups replaced disabled ones. In response, a cap of 100 invites per person per day was implemented. During the growth of Stop the Steal Groups, we released an additional invite rate limit of 30 adds per hour (now deprecated) for users adding new friends (< 3 days) to new Groups (< 7 days) with certain ACDC properties. However, the rate limits were only partially effective, and the Groups were still able to grow substantially.
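In pseudocode, the layered limits amount to something like the following; the function and field names are hypothetical, and the ACDC property check is omitted.

```python
# Hypothetical sketch of the layered invite rate limits described above.
from datetime import datetime, timedelta

DAILY_CAP = 100                      # 100 invites/person/day
NEW_FRIEND_HOURLY_CAP = 30           # 30 adds/hour in the risky case
NEW_FRIEND_AGE = timedelta(days=3)   # "new friend" = friended < 3 days ago
NEW_GROUP_AGE = timedelta(days=7)    # "new Group" = created < 7 days ago

def may_invite(now, invites_today, invites_past_hour,
               friendship_created, group_created):
    """Return True if this invite passes both rate limits."""
    if invites_today >= DAILY_CAP:
        return False
    risky = (now - friendship_created < NEW_FRIEND_AGE
             and now - group_created < NEW_GROUP_AGE)
    if risky and invites_past_hour >= NEW_FRIEND_HOURLY_CAP:
        return False
    return True

now = datetime(2021, 1, 4, 12, 0)
print(may_invite(now, invites_today=42, invites_past_hour=30,
                 friendship_created=now - timedelta(days=1),
                 group_created=now - timedelta(days=2)))  # False: hourly cap hit
```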
Any successful movement also has organic growth that should not be discounted. A third of the growth came from self-joins, and while the plurality of inviting came from a handful of users, 82% of inviters invited fewer than 10 others. This combination of growth hacking and organic growth exemplified how complicated harmful network movements can be. In order to explore this growth, and the extent to which it was driven by amplification of the slogans, we explored the way content flowed through the broader StS network, in Groups and beyond.
Understanding the Network
Using Information Corridors, we were able to identify the larger community where StS and election delegitimization were discussed most heavily. We started by identifying users who posted the most delegitimizing language, and who used a wide variety of terms. These were our high StS engagers. We then fanned out to everyone they interacted with, and identified those users who were also using a lot of Stop the Steal language, or who had a high propensity for doing so based on our classifiers. This network of high StS users was our Information Corridor (IC). It identifies the part of the social network on platform where the harmful content is circulating. For an overview demo of Stop the Steal Information Corridors and more detail, see here.
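A toy sketch of the corridor construction: seed with high engagers, fan out one hop along interaction edges, and keep neighbors whose observed or classifier-predicted propensity for the language is high. The graph, scores, and thresholds below are illustrative stand-ins for the production signals.

```python
# Toy Information Corridor construction over a hypothetical interaction graph.
import networkx as nx

def build_corridor(interactions, sts_score, seed_threshold=0.8,
                   member_threshold=0.5):
    """interactions: nx.Graph of user interactions.
    sts_score: user -> observed or predicted StS-language propensity."""
    seeds = {u for u, s in sts_score.items() if s >= seed_threshold}
    corridor = set(seeds)
    for u in seeds:
        if u not in interactions:
            continue
        for v in interactions.neighbors(u):          # one-hop fanout
            if sts_score.get(v, 0.0) >= member_threshold:
                corridor.add(v)
    return corridor

g = nx.Graph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "e")])
scores = {"a": 0.9, "b": 0.6, "c": 0.2, "d": 0.55, "e": 0.95}
print(sorted(build_corridor(g, scores)))  # ['a', 'b', 'e']
```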
Out of 6,450 high engagers, 4,025 (63%) of them were directly connected to one another, meaning they interacted with one another’s content or messaged one another. When using the full Information Corridor, 77% were connected to one another. This suggests that the bulk of the Stop the Steal amplification was happening as part of a cohesive movement.
[The Facebook report included a network diagram here, showing how “Information Corridors allow us to identify the part of the network where harm circulates.”]
By tracking these language networks, we can better capture the coordinated harm that flows through the network. Members of the corridor produced 33% more hate and 31% more VNI than the broader community around the high engagers. Members of an information corridor are vulnerable to the harmful message being propagated because they are subject to, and most likely to engage with, that harmful content. Amplifiers in the IC are users who are connected to many of these vulnerable users, so named because anything they say reaches a larger audience. By looking at patterns in amplifier language, we can better understand the harms being pushed through the IC: amplifiers posted 98% more VNI and 40% more hate. Using k-core decomposition, we also identified the core of the IC — the set of users who all tightly engage with one another. The core of the network had 85% more VNI and 45% more hate.
[The Facebook report included a graphic here, showing relationships between “closely connected users at the center of the network.”]
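K-core decomposition repeatedly strips users with fewer than k in-corridor connections until only the tightly inter-engaged center remains. A minimal sketch on a toy corridor graph:

```python
# k-core decomposition on a toy corridor graph (networkx).
import networkx as nx

corridor = nx.Graph()
# A tightly connected clique (the would-be core)...
corridor.add_edges_from([("a", "b"), ("a", "c"), ("a", "d"),
                         ("b", "c"), ("b", "d"), ("c", "d")])
# ...plus loosely attached peripheral users.
corridor.add_edges_from([("d", "e"), ("e", "f")])

core = nx.k_core(corridor, k=3)  # keep users with >= 3 surviving connections
print(sorted(core.nodes))        # ['a', 'b', 'c', 'd']
```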
In order to understand how the movement perpetuates harm, we also need to understand the extent to which it persists beyond the coordinators and amplifiers; that is, the extent to which users who interacted with coordinators and high engagers were also producing harm. To do this, we looked at audience closeness around the inviters described above. Users who interacted the most with inviters who had at least 50 invitees produced 92% more VNI and 49% more hate. Relatedly, we also found that information corridors help link the users in the core of the StS network to those in the periphery, helping spread the message across the full network.
Overall, we were able to show that where PP and StS signals were being amplified through content and inviting, there were also higher levels of hate and violence, suggesting that these movements were harmful and that the harm was perpetuated through a network we were able to define.
Growth of Patriot Party (PP)
Stop the Steal wasn’t the only movement that grew around the D14N theme. Patriot Party was another movement that grew out of, and eventually in competition with, StS, showing similar levels of harm. Many of the coordinators for PP expressed disappointment with the StS movement’s failure to do what it promised, and a need to go further by bringing about systemic change through a new political party. On the other side, StS admins and real-life leaders (the ones who weren’t banned from the platform — Trump, Roger Stone, Alex Jones, etc.) had a large amount of celebrity and officialness about them, and didn’t necessarily want to be seen as defecting from the traditional Republican Party to start a scrappy, potentially angrier Patriot Party.
Admins of PP attempted to recruit members from StS and Joe Biden is NOT my President Groups. Popular posts and frequent posters on PP Pages and Groups often used the Stop the Steal slogan, especially prior to the inauguration. We also saw that PP was able to grow within StS corridors: members of the StS IC were 6% more likely to use the term “Patriot Party.” In the end, PP never grew as much as StS, in large part because of the lessons we learned from StS and were able to rapidly apply to PP.
[The Facebook report included a pair of graphics here, showing how “Information Corridors” allowed the company to “track additional linguistic signals that grow within the network.”]
The leaders of PP had mixed success recruiting from StS supporters. As StS Groups were disabled, we saw their members flock to PP Groups: 20% of the Groups that members of disabled StS Groups joined were PP Groups. We were able to mitigate this growth with an election Break The Glass measure, feature-limiting Groups that many users joined after their previous Group was disabled. However, StS Groups weren’t the main source for PP Groups: only 6.5% of actioned PP Group members were part of an actioned StS Group, only 1.1% of actioned StS Group members joined an actioned PP Group, and only 3 of roughly 1,000 admins were shared. Moreover, PP was primarily pushed by amplifiers within the StS IC, and they were not fully successful: we did not see widespread use of the PP term by less engaged members of the IC. Learning from our previous work on StS, we were able to stop PP before it was able to spread.
[The Facebook report included a graphic here, showing “Group membership Jaccard similarity.”]
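The Jaccard similarity in that graphic is the standard set-overlap measure, which can be computed directly from membership sets; a minimal sketch with made-up members:

```python
# Jaccard similarity of two Group membership sets (illustrative data).
def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|; 0.0 for two empty sets."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

sts_members = {"u1", "u2", "u3", "u4", "u5"}
pp_members = {"u4", "u5", "u6", "u7"}
print(round(jaccard(sts_members, pp_members), 3))  # 0.286
```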
Crisis Response
Tracking Evolving, Inter-Related Movements
One of the most effective and compelling things we did was to look for overlaps between the observed networks and militias and hate orgs. This worked because we were in a context where we had those networks well mapped. During crises there are likely to be multiple escalations ongoing at once, with different teams focusing on different networks around DOI, misinfo, and other harms. By combining these, we could better understand the nature of the harm being coordinated and the myriad tactics being used. As PP arose, showing the connection between PP and StS helped us understand the harm being perpetuated by PP in context, when the harm might have been less apparent on its own.
We were also able to add friction to the evolution of harmful movements and coordination through Break the Glass measures (BTGs). We soft-actioned Groups that users joined en masse after a Group was disabled for PP or StS; this allowed us to inject friction at a critical moment to prevent the growth of another alternative after PP was designated, when speed was critical. We were also able to add temporary feature limits to the actors engaging in coordinating behaviors, such as the super-posters and super-inviters in the Groups that were removed, to prevent them from spreading the movement on other surfaces. These temporary feature limits allowed us to put the brakes on growth during a critical moment, slowing the evolution of adversarial movements and the development of new ones. Our ongoing work through the disaggregating networks task force will help us make more nuanced calls about soft actions in the future in order to apply friction to harmful movements.
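A simplified sketch of the mass-migration heuristic behind that soft action: flag Groups that absorb a large share of a disabled Group's members within a short window. Thresholds and data shapes are assumptions, and the production measure applied feature limits rather than returning a list.

```python
# Hypothetical mass-migration detector for post-disable Group joins.
from collections import Counter

def migration_targets(disabled_members, joins, min_share=0.1):
    """joins: (user_id, new_group_id) events after the disable action."""
    absorbed = Counter(g for u, g in joins if u in disabled_members)
    n = len(disabled_members)
    return [g for g, c in absorbed.items() if c / n >= min_share]

disabled = {f"u{i}" for i in range(100)}
joins = ([(f"u{i}", "pp_group_1") for i in range(25)]
         + [("u1", "garden_club"), ("outsider", "pp_group_1")])
print(migration_targets(disabled, joins))  # ['pp_group_1']
```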
Signals of coordinating harm
In addition to the network evolution tracking described above, several signals were particularly useful for helping us identify coordinated harm. Specifically:
- Content signals: Aggregating many signals from content within a complex entity helped us get a comprehensive view of what was happening within those entities in order to understand harms broadly. We used HELLCAT tables to understand the relationship between text signals related to a movement or escalation and hate and violence. They also allowed us to compare signals from many ongoing escalations and harms, such as looking for relationships of StS, StC, and PP with QAnon and militias. We were also able to use the CIRD table to quickly and easily search for complex entities matching new text signals. These tables also enabled us to quickly spin up D14N classifiers for both content and complex entities once a designation did occur.
- Signals of rapid growth and amplification: An important signal was the rate of growth of Groups. This growth appeared to come through amplification and coordination: the Groups shared common admins and super-inviters, with these influential individuals participating across Groups.
- Rapid URL sharing is another way that a movement can be amplified and spread quickly, representing the movement through off-platform sources that are harder to enforce on. Building off our lessons learned about amplification, we’ve also built tools (Information Corridors) to help us understand the growth of slogans and terms within a movement, in order to understand how it’s being amplified and how quickly it’s growing.
- Branding: Not all movements will have common branding, but when they do, it is a clear sign of coordination. PP Groups and Pages used the same or similar logos to identify official sources.
- Admin-only Groups and formal organizational structure represented on the platform: PP had admin-only Groups where formal coordination was organized. This does not always occur on platform, especially with adversarial networks, but is a clear signal when it does.
[The Facebook report included three examples of Patriot Party logos here.]
Conclusion
Gaps:
There are many lessons we can take away from our successes and challenges in mitigating StS and PP, lessons that are critically valuable for understanding gaps in detection, enforcement, and policy.
- Designation differences between StS and Storm the Capitol made enforcement hard because we couldn’t count strikes across them. The seams between policy areas made it harder to mount a unified effort against the delegitimizing harm as a whole, instead forcing us to target different parts of the problem piecemeal, with the larger wave of the movement seeping through the cracks.
- We were able to observe the growth of PP through StS, but this was a very manual process. It made us question what we were missing, and what would rise from the ashes once we turned our attention away. Moreover, StS and PP were in competition with one another, so enforcing on StS may have helped PP grow. We need tools and protocols for handling the evolution of movements in the future, and for quickly designating new movements around old harms that arise when the field is cleared of competition.
- We have little policy around coordinated authentic harm. While some of the admins had VICN ties or were recidivist accounts, the majority of the admins were “authentic.” StS and PP were not directly mobilizing offline harm, nor were they directly promoting militarization. Instead, they were amplifying and normalizing misinformation and violent hate in a way that delegitimized a free and fair democratic election. The harm existed at the network level: an individual’s speech is protected, but as a movement, it normalized delegitimization and hate in a way that resulted in offline harm and harm to the norms underpinning democracy.
- What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy? What do we do when that authentic movement espouses hate or delegitimizes free elections? These are some of the questions we’re trying to answer through research and tool building in the Disaggregating Harmful Networks Taskforce, and that we’re wrestling with in the Adversarial Harmful Networks policy xfn.
- A policy on coordinated authentic harm needs a broader definition of coordination to handle network- or movement-level harms and the interplay between organic and inorganic growth. It was hard to establish coordination (outside of shared logo usage) across hundreds of Groups and Pages because the movement was not driven by a few actors, but rather was “adopted” and “promoted” by authentic users.
- We need a range of full-spectrum interventions, from hard action to soft action, in order to better handle the growth of organic harmful movements. Our narrow definition of coordination is centered around hard punitive actions. In order to slow the growth of movements, we should learn from our BTGs and apply a range of counter-interventions, friction, soft actions, and hard actions, promoting a healthier community beyond targeting the worst of the violators.
- Enforcement lacked a single source of truth. Bulk enforcement, continuous enforcement, and ad hoc enforcement had inconsistent labeling and case attribution, which made recidivism analysis difficult, made it harder to track the evolution of the movements, and made retrospectives and follow-up research more difficult.
Next Steps:
Luckily, we’ve learned a lot from the US2020 IPOC and the StS and PP cases. Here are our next steps.
- Building new methods around network disaggregation and adding them to our tooling. Stay tuned for future integration of core-periphery modeling and information corridors into CORGI tooling.
- Teaching investigators how to use the tools and techniques we’re developing, through communities of interest such as the Network Tools for Investigators Group and the Actor Investigation XFN, and through improved documentation.
- Using these cases, and these tools, to help us understand organic coordinated harm and harm within networks, for further policy development. Stay tuned for more notes like this one as we continue to learn more!
- Testing out these new methods on ongoing investigations. For example, we are using our disaggregating-network techniques to identify users for counter-speech interventions around US hate groups. We’re also working on a set of cases in Ethiopia and Myanmar to test the framework in action. We’re working with BONJOVI to put together protocols for investigators.
Please get in touch if you have a use case that would benefit from using these tools and protocols! We’d love to work with you to help you track a broader harmful network and understand coordination within it.