A political leader uses social media to spread misinformation and hate. Followers are spurred to violence. People are killed.
It is a toxic brew that has surfaced repeatedly across the world — in Myanmar, India, Sri Lanka, the Philippines, Brazil and now the United States.
Facebook, Twitter and YouTube banned President Trump from their platforms for inciting last week’s deadly mob attack on the Capitol. But in other countries, social media giants have been far slower to shut down misinformation and hate speech, often failing to remove inflammatory posts and accounts even after they’ve contributed to lynchings, pogroms, extrajudicial killings or ethnic cleansing.
“Facebook is really taking serious action on what’s happening in the U.S., but we’ve been raising the issue of government instigation of violence for many years when they didn’t take any action,” said Thet Swe Win, an activist in Myanmar, where military and Buddhist leaders long used the social media platform to foment hatred against the Rohingya Muslim minority.
He and others view the tech titans’ actions against Trump and right-wing conspiracists in the U.S. not as a sign of a stronger commitment to blocking dangerous content, but of an entrenched double standard in the way internet companies enforce their rules in their wealthiest market — where they face heavy scrutiny from regulators and the public — versus in other countries.
To many activists, the biggest culprit is Facebook, the world’s largest social media platform and an indispensable information source, especially in societies where the free press faces restrictions. The company has only recently begun to acknowledge its outsize role in the spread of ethnic and religious strife.
The company said its ban on Trump was not a change in policy but enforcement of its existing rules.
“We have established policies for dealing with praise of violence on the platform,” a Facebook spokesperson said. “They apply impartially to all users around the world, including politicians, heads of state and leaders.”
But the company has often been late to recognize or remove inflammatory speech in non-English languages, in nations where civic institutions are weak and where it does not maintain full-time staff.
In some countries, such as Vietnam and India, Facebook has deliberately ignored its own standards in order to placate powerful governments and to protect its business.
“We realized years ago that Facebook as a global tech company does not enforce its own platform rules equally or consistently across the world,” said Nalaka Gunawardene, a media analyst in Sri Lanka. “Some markets seem more important to them economically and politically.”
Asian countries have emerged as key markets for Facebook as its growth slows in the U.S. and Europe. Two-thirds of Vietnam’s population uses the platform, as do more than half of people in the Philippines and Myanmar. But these are also among the countries where the company has drawn the most fire for failing to block harmful speech.
Last year, Facebook apologized for failing to stop the proliferation of anti-Muslim rhetoric in Buddhist-majority Sri Lanka — an island nation of 22 million people, one-third of whom use Facebook — despite years of warnings from civil society groups.
Posts that urged, “Kill all Muslims,” and false allegations that a Muslim shopkeeper was adding sterilization pills to customers’ food “may have contributed” to riots in 2018 that left at least three people dead and many mosques and Muslim-owned shops destroyed, according to an independent investigation commissioned by the company.
Following the violence, the government briefly blocked Facebook, which blamed its failures on a lack of artificial-intelligence tools and human monitors versed in Sinhala and Tamil, Sri Lanka’s main languages.
Facebook executives flew in from India to handle the crisis because the company had no staff in Sri Lanka. It reflected what was then the standard practice driving the company’s breakneck global growth: making the app available in scores of languages and letting it run with minimal oversight.
“Especially in emerging economies, these companies seem to want market access without adequate social responsibility,” Gunawardene said.
The company says it has since hired staff in Sri Lanka and formed partnerships with two local news organizations to help fact-check posts. An extremist Buddhist monk was banned from the platform, although many others who peddle hate speech remain active, often under pseudonyms, Gunawardene said.
In Myanmar, it wasn’t until August 2018 — a year after the start of a military-led offensive against Rohingya Muslims — that Facebook banned 20 individuals and organizations, including Myanmar’s military chief, and removed pages that together were followed by almost 12 million people. The company said it was the first time it had banned a country’s military or political leaders.
But by then, hundreds of thousands of Rohingya had fled the country and a top United Nations official had described the state-sponsored attacks as a “textbook example of ethnic cleansing.”
“It took a really long time to respond, but in the U.S. it was just days,” Thet Swe Win said. “Maybe this is the privilege of living in a First World country.”
Facebook says it has significantly expanded its capacity to remove harmful content in dozens of languages through human and automated monitoring.
But founder Mark Zuckerberg long insisted the company shouldn’t police the speech of politicians, arguing that people have a right to hear from their leaders and that officials’ statements are already heavily scrutinized by the news media. Last year, after an outcry over Trump’s incendiary posts about anti-racism protests across the U.S., Facebook and Twitter began applying labels to some of Trump’s posts to alert users to possible misinformation.
That level of public scrutiny doesn’t exist in countries such as India, where Prime Minister Narendra Modi eschews press conferences, many media organizations hew to the government line and members of the Hindu nationalist ruling party have sometimes encouraged violence in public statements and online.
“The companies’ policies portray a very Silicon Valley understanding of politics as opposed to how it plays on the ground,” said Pratik Sinha, founder of AltNews, an Indian fact-checking website.
India is Facebook’s largest market, with nearly 300 million users. When T. Raja Singh, a lawmaker from Modi’s Bharatiya Janata Party, used his account to denounce India’s Muslim minority as traitors and argue they should be shot, the social media giant didn’t ban him or immediately take down the posts.
The Wall Street Journal reported last year that Facebook’s lead public policy executive in India worried that taking action against Singh or other BJP officials who espoused hate speech would harm the company’s business interests. After the Journal report was published, the company banned Singh from the platform.
Much as in the U.S., where social media companies’ actions against Trump have angered his supporters, members of Modi’s party accuse Facebook of political bias when the company removes accounts loyal to the prime minister.
But many pro-BJP accounts continue to promote conspiracy theories and religious hatred or otherwise violate Facebook’s community standards, according to research by the Digital Empowerment Foundation, a New Delhi-based nonprofit.
“Facebook claims they shut down so many bad accounts, but there are so many that are flourishing,” said Osama Manzar, the group’s founder.
Manzar’s organization has accepted grants from WhatsApp, the Facebook-owned messaging platform, to train Indians in “digital literacy,” based on the company’s contention that users in India and other emerging economies are more susceptible to online hoaxes and misinformation. At one session in 2019, trainers focused on a public-service video that had been misleadingly edited to appear to show a real-life kidnapping. The clip went viral, sparking mob attacks in several Indian cities that left two dozen people dead.
Manzar said the Jan. 6 attack in Washington showed that violence based on misinformation — Trump’s baseless claims of election fraud — could happen in the U.S. too.
“Never in history at this level have we been confronted with such loud lies, confident and powerful lies,” Manzar said. “This has made people believe in a truth based on faith rather than science and facts. America is just as susceptible.”
Still, few believe social media companies are prepared to take a tougher line against misuse of their platforms globally.
A right-wing backlash against their Trump bans could cut into the companies’ U.S. revenue, making them more reliant on foreign markets. And autocratic governments could tighten regulations on social media as a way to stifle dissenting voices.
“We will need to watch how countries with much weaker institutions would pass government regulations that might deal even greater harm than platforms’ lax policies,” said Jonathan Corpus Ong, an associate professor at the University of Massachusetts Amherst who has studied online disinformation.
Sinha, the fact-checker, said Facebook and Twitter’s bans on Trump “will have very little bearing on how things play out in India” because the companies face far less accountability from the Indian government or public. The companies, he argued, simply made a face-saving move in the final moments of Trump’s presidency.
“Neither Facebook nor Twitter would have taken that action if Trump was staying in power,” he said.