According to a report from Amnesty International, Facebook's algorithms promoted hateful content against the Rohingya minority prior to Myanmar's military committing widespread violence against the group in 2017.
Responding to the report on the alleged actions of Facebook's parent company, Meta, Amnesty International Secretary General Agnès Callamard said: "While the Myanmar military was committing crimes against humanity against the Rohingya, Meta was profiting from the echo chamber of hatred created by its hate-spiraling algorithms."
Amnesty International maintains that Meta knew its algorithms were amplifying potentially hateful content against the Rohingya.
Despite this knowledge, however, the social media giant chose to do nothing, according to the report.
"Actors linked to the Myanmar military and radical Buddhist nationalist groups flooded the platform with anti-Muslim content, posting disinformation claiming there was going to be an impending Muslim takeover, and portraying the Rohingya as 'invaders,'" Amnesty said in its report.
In one post, reportedly shared more than 1,000 times, a Muslim human rights defender was characterized as a "national traitor."
Comments on the post also called for the murder of Muslim minorities.
"Don't leave him alive. Remove his whole race. Time is ticking," one person commented on the image, according to the Amnesty report.
In another controversial post, Senior General Min Aung Hlaing wrote in 2017: "We openly declare that absolutely, our country has no Rohingya race."
Last year, a group of Rohingya refugees sued Facebook, alleging the company failed to act in the face of "dangerous rhetoric" that led to the massacre of thousands.
According to reports, more than 25,000 Rohingya were murdered during the conflict and thousands more were raped in Myanmar's Rakhine State.
The violence prompted more than 700,000 people to flee the region.
A year prior to the genocide, according to documents viewed by Amnesty International, Meta's internal research allegedly found that Facebook's recommendation systems "grow the problem" of extremism.
Facebook's hate-speech policies prohibit users from posting content targeting a person or group of people with "violent speech or support in written or visual form."
Facebook also prohibits hateful speech that promotes "subhumanity," "dehumanizing comparisons," or "generalizations that state inferiority."
© 2022 Newsmax. All rights reserved.