Captive Ethiopian soldiers walk towards Mekele Rehabilitation Center in Mekele, the capital of Tigray region, Ethiopia, on July 2, 2021. (Photo by YASUYOSHI CHIBA/AFP via Getty Images)

How Facebook Is Stoking a Civil War in Ethiopia

Online hate is adding fuel to the country’s deadly conflict, and researchers say Facebook is failing to stop it.

Every time Lucy Kassa publishes an article, she knows what will come next. As an Ethiopian freelance journalist covering her country’s intensifying civil war, Kassa has reported on killings, starvation, sexual violence, and other atrocities in the conflict. After each report, Facebook and other platforms erupt with threats against her. 

“It’s an everyday reality. Whenever I publish a story, there is a smear campaign on Facebook,” Kassa said. “The content is hate speech. It’s defamation, and its aim is to bully me and stop me from investigating, to harass and threaten me.”


After Kassa reported in May on a 13-year-old girl who suffered horrific burns from a possible incendiary weapons attack, a pro-government Facebook account with over 200,000 followers posted a photo of Kassa and openly called for her arrest—a serious incitement in a country where dozens of journalists have been detained during the conflict. In the weeks after, Kassa faced a wave of harassment across social media platforms that included death threats and threats of sexual violence. The Facebook post is still up months later, with over 6,000 likes and more than 1,000 comments. 

As Facebook struggles to address hate speech, radicalization, and misinformation in the United States, recently leaked internal company documents have made it clear that the problem is far worse, and far less addressed, in countries across the global south. Even in Ethiopia, which Facebook has designated as its highest risk level and where it has repeatedly given assurances that it is dedicating monitoring resources, researchers and journalists say that hate is still spreading unabated and the platform is stoking ethnic and political conflict.

“People criticize them for how little they do in the U.S.,” said Timnit Gebru, Google’s former chief AI ethicist. “Imagine elsewhere: What we’re talking about is them doing absolutely nothing, as far as I’m concerned.” 


Stoking a deadly civil war

Ethiopia, Africa’s second-most-populous nation with 110 million people and dozens of languages, has been in a state of civil war for almost a year. Prime Minister Abiy Ahmed launched a large-scale offensive last November in the country’s northern Tigray region following an attack by local Tigrayan forces on a military facility. The offensive escalated a longstanding feud between Abiy’s government and the Tigray People’s Liberation Front, the party that had ruled Ethiopia for decades until Abiy came to power in 2018. The fighting has deepened pre-existing political conflicts that often fall along ethnic lines, primarily pitting government forces and their allies in the large Amhara region against forces in the minority Tigray region, whose people make up about 7 percent of the population. On Nov. 2, Abiy declared a state of emergency and called on citizens to take up arms as a counteroffensive by Tigrayan forces pushed toward the country’s capital.

The war has been devastating and destabilizing for the entire region, spilling into multiple parts of the country and drawing in forces from neighboring Eritrea that back Abiy’s government. There are reports of war crimes and targeted killings by government forces, Eritrean troops, and Tigray People’s Liberation Front rebels. Rights groups also allege that government forces and allied militias have carried out massacres, gang rapes, and ethnic cleansing. Shallow graves surround villages, and mutilated bodies float in rivers. A government blockade has cut millions off from aid and made it extremely difficult to get reliable information out of affected areas, where the government is also launching airstrikes. The United Nations aid chief warned in October that hundreds of thousands of people in Tigray face famine. The fighting has displaced at least 1.7 million people and created intense polarization, which is also playing out online.


Last month a video went viral on Facebook showing a man telling a large crowd that anyone who associates with certain ethnic minorities is “the enemy.” It was reposted multiple times before the platform removed it. The same account that called for Kassa’s arrest also appeared to celebrate the Fano, a notorious Amhara militia, for carrying out an extrajudicial killing; that post remains online. Another account with over 28,000 followers posted an instructional video on how to use an AK-47 with a caption suggesting every Amhara should watch it. The post has been up since April and has nearly 300,000 views. In September, a local media outlet published unproven allegations on Facebook that members of the ethnic Qimant minority were responsible for a shooting. That same day, a government-aligned militia and a mob attacked a Qimant village, looting and burning down homes. The post remains on Facebook.

On Oct. 30, researchers saw a viral Facebook post written in Amharic that appeared to be a clear incitement to genocide, with urgent calls for inter-ethnic violence. The post remained online for about 24 hours and had been reposted by multiple accounts before Facebook removed it.

Disinformation has circulated widely on Facebook during the war, researchers say, while the country’s repressive media landscape has made factual reporting dangerous. Dozens of journalists have been arrested since the conflict escalated in 2020, and two reporters have been killed this year, including one whom police had previously detained and questioned over his coverage of the conflict. In February, three armed men in plainclothes barged into Kassa’s home, knocked her down, seized her laptop, and threatened to kill her if she continued reporting. Kassa fled her home because of the harassment, but the death threats continued on social media.


“Usually it’s my family members and colleagues that are concerned about me and sending me these,” Kassa, who doesn’t use Facebook, said. “After some time, I told them to stop because it was disturbing me.”

Facebook says that Ethiopia is a priority for the company and that it has increasingly dedicated resources to preventing hate and misinformation in the country.

“Over the past two years, we have actively focused and invested in Ethiopia, adding more staff with local expertise, operational resources, and additional review capacity to expand the number of local languages we support to include Amharic, Oromo, Somali, and Tigrinya. We have worked to improve our proactive detection so that we can remove more harmful content at scale. We have also partnered extensively with international and local experts to better understand and mitigate the biggest risks on the platform,” a Facebook spokesperson said.

But Ethiopian researchers and journalists say Facebook has done little to visibly stop the hate spreading on its platform. Instead, they say, fact-checking and moderation often fall to a group of volunteers who send Facebook spreadsheets of posts to investigate and frequently have to explain to staffers why content on its own platform is dangerous.

“They completely lack context. Every time we talk to them, they’re asking for context. That’s been a big issue—they don’t understand what’s happening in the country,” said Berhan Taye, an independent researcher who has been documenting online hate regarding Ethiopia alongside other volunteers.


The other problem is that even when researchers do flag unambiguously hateful content through Facebook’s reporting tools, Taye says they’re often told it doesn’t violate Facebook policies. Taye and the group of volunteers she works with have found that the only way to get some hate speech and misinformation removed from the platform is to send it directly to the company’s human rights staff.  

“The reporting system is not working. The proactive technology, which is AI, doesn’t work,” Taye said.

Facebook’s internal documents have highlighted the limits of its moderation technology, with a 2021 report stating that the company may take action on “as little as 3-5 percent of hate” on the platform. The company’s budget for fighting misinformation is also overwhelmingly geared toward the U.S., where less than 10 percent of its daily users reside, and Facebook’s AI often cannot detect hate speech in non-English languages. An internal audit in 2020 found that Facebook had no automated detection systems for flagging hate speech in either of Ethiopia’s two largest languages, Amharic and Oromo.

Facebook says it has added content moderators in local Ethiopian languages in recent years and works with fact-checking organizations, but it would not say how many moderators work in those languages. It also says it has added automated detection systems in Amharic and Oromo. But researchers say these efforts are falling short, and they have criticized Facebook for acting only once the situation had deteriorated.


“This is nowhere close to enough at all, and this is after the fact. This is the result of them being worried?” said Gebru, who has repeatedly warned platforms about rising hate in Ethiopia over the last year. 

“Even saying, ‘We’ve added local language support.’ How can you operate in a country before you have that?”

Across the global south, Facebook is the internet

Facebook’s struggles in Ethiopia are part of a pattern of failures to prevent its platform from amplifying hate speech and extremism in vulnerable countries. 

The company played a significant role in inciting genocide against Rohingya Muslims in Myanmar, according to a 2018 United Nations report that led Facebook founder Mark Zuckerberg to issue a letter to rights groups claiming the company would prioritize detecting hate speech on its platform. In India, the company has similarly failed to prevent hate campaigns targeting minority groups, and calls for violence against Muslims remained on the platform even during the deadly riots in Delhi last year that killed dozens of Muslims. Internal documents reviewed by the Washington Post showed that until recent years Facebook lacked algorithms that could detect hate speech in Hindi and Bengali, languages spoken by hundreds of millions of people. Similar gaps in Facebook’s ability to detect misinformation and hate existed in countries and languages all over the world.


The extremism and hate speech circulating on the platform is especially concerning in countries outside the U.S., researchers say, because Facebook and WhatsApp are so central to how people in many countries access the internet, a situation that results from the company’s persistent push to expand into new markets and fuel its growth.

One initiative tied to growing Facebook’s foreign user base was the Free Basics program, which critics have described as a kind of “digital colonialism.” Free Basics offered an app in dozens of countries that users could download to access Facebook and a limited set of other services without paying for data, making Facebook the primary gatekeeper between users and the internet.

“We had a lot of internal data through surveys, polls, focus groups we did all over the global south that showed the great majority of people in those countries equated Facebook with the internet,” said one former Facebook employee who worked on the Free Basics program.

Facebook and WhatsApp’s dominance in large parts of the world has exacerbated the dangers of hate speech on these platforms, making people reliant on a service that lacks strong enough safeguards to prevent it from causing offline harm. It’s a problem many researchers believe Facebook should have been proactively preparing to address before moving into new countries, but one that former employees say the company did not appear to take seriously.


“There was a thing that we would say all the time, that a rising tide lifts all boats,” the former Facebook employee said. “We really bought in, as employees, to this idea that it was a win-win for all. We really didn’t have any concern for the power we were asserting over markets.”

‘It can be difficult to find someone who cares’

Internal pushes for greater accountability in smaller markets and the global south failed to gain traction with Facebook’s policymakers, whistleblowers say. Sophie Zhang, a former data scientist at the company, said she attempted for years to sound the alarm over platform manipulation campaigns that used fake likes and engagement to prop up authoritarian political figures in countries such as Honduras and Azerbaijan. Zhang tried to escalate her complaints to managers, executives, and other teams at the company before Facebook ultimately fired her in 2020.

“It can be difficult to find someone who cares,” Zhang said.

“When Facebook does care, it’s usually because of two factors: First, Mark and Facebook employees are human and they need to sleep at the end of the night. And secondly, if bad press happens, that would impact Facebook’s ability to make money.”

At one point during her effort to highlight fake engagement campaigns, Zhang says her organizational leadership suggested that she stop trying to fix things and let the situation become a PR problem for Facebook—only then would the company devote resources to it. 

“For the countless press interviews she’s done since leaving Facebook, we have fundamentally disagreed with Ms. Zhang's characterization of our priorities and efforts to root out abuse on our platform,” a Facebook spokesperson said.

Frances Haugen, another whistleblower and former Facebook product manager, testified in an October Senate hearing that the company would not fix itself and instead repeatedly chose its own profit margins over the public good. This was often the case in foreign countries, Haugen has alleged, where building out hate speech–detection tools for local languages would be a costly endeavor that would cut into its revenue. In Haugen’s testimony, she singled out Ethiopia as an example of a country where Facebook is “literally fanning ethnic violence.”

Until Facebook makes changes, journalists and researchers focused on Ethiopia feel they have no choice but to keep raising the alarm over how hate speech and extremist content proliferate on the platform, in hopes of stopping their spread. As Berhan Taye continues to fill spreadsheet after spreadsheet with hateful posts to send to Facebook, she says it feels almost as if the situation is holding her hostage.

“The only reason that we’re engaging with them is that we know without our effort of getting content taken down, there’s no way to have it taken down,” Taye said. “It should be Facebook doing this work. They’re the ones making billions off this content globally.”