Nationalist propaganda on Facebook contributed to the genocide of Rohingya Muslims in Myanmar, according to a damning U.N. report. Thousands have been killed and sexually abused, and more than 700,000 have had to flee since August 2017.
By Annemarie Andre
Facebook wants to give people the power to connect – that’s what the mission statement says. But Rafiq Copeland, a humanitarian advisor for the international nonprofit organization Internews in Myanmar, has serious doubts about that. Within his organization, he helps build healthy media environments in the countries where they are most needed. He thinks that Facebook played a huge role in the genocide because the country simply wasn’t a priority for the company. “Even when they were explicitly warned, their world view didn’t allow them to take the warnings seriously,” he says. Copeland is convinced that Facebook’s employees in California saw the platform as entirely positive and had too little understanding of the situation in a country like Myanmar.
The Rohingya have been seen as outsiders in Myanmar despite living there for centuries. They are regarded as Bengali immigrants and Muslims, and are not granted citizenship. Although their situation had been tense for decades, it was Facebook that gave hard-line nationalists enormous reach and a space where they could act without legal restrictions. In a country where Facebook is regarded as the internet, it was easy for them to spread fake news that stirred hatred among different ethnic groups.
The situation finally escalated after a rebel group attacked police stations in 2017. In response, the military began attacking not only rebels but also civilians.
In 2018, the U.N. directly addressed Facebook in the report of its fact-finding mission in Myanmar. “The response of Facebook has been slow and ineffective,” it states, adding that “the extent to which Facebook posts and messages have led to real-world discrimination and violence must be independently and thoroughly examined.” The Human Rights Council also regretted that Facebook was unable at the time to provide country-specific data about the spread of hate speech on its platform. Only after the report was released did Facebook remove influential people and organisations from the platform, many of them closely linked to the military. A 2018 investigation by the New York Times found that at least 700 people had been involved in hateful campaigning, some with close ties to the military.
Reuters found a post from August 2017 which said in Burmese: “Kill all the kalars that you see in Myanmar; none of them should be left alive.” Facebook’s translation software rendered the post in English as: “I shouldn’t have a rainbow in Myanmar.” For Copeland, posts like this show how Facebook’s strategy of pursuing growth without understanding the local context or having staff on the ground failed. “Facebook was a dominant provider of the information landscape in Myanmar,” he says, adding, “when you don’t have any oversight of what is happening, then you will have all sorts of stuff on the platform.”
Additionally, many users of the platform were confronted with daily news for the first time and had little digital information literacy. “When people are used to having free and independent media that provide them with factual information, you know you have to make decisions about what’s accurate and what’s not,” Copeland says. “You build a set of skills without really noticing that you’re doing it.”
At the same time, the humanitarian advisor finds it patronizing to call the Burmese ill-equipped to decide what is trustworthy and what is not. “I think that countries that had a strong media landscape were more robust against those risks,” he says. To him, experience with good media is a good starting point for preventing hate speech and propaganda from spreading on social networks.
Photo: Wikimedia Commons | Internally displaced Rohingyas in Rakhine state, 2012 | © DFID Burma (CC 2.0)