WASHINGTON – In the wake of the US election, social media platforms such as Facebook are seeing a dramatic increase in violent content, including posts that seek to incite others to commit violence, Amnesty International USA said today.
In the last week alone, Facebook has seen a 45% increase in trends related to violence and incitement. Amnesty International is calling on social media platforms, including Facebook, Twitter and YouTube, to prohibit such content in line with international human rights law.
“No company can ignore its human rights responsibilities,” said Michael Kleinman, Director of Amnesty International USA’s Silicon Valley Initiative. “Companies like Facebook, Google and Twitter themselves acknowledge this responsibility, as they already have clear policies against posts that incite violence. Yet we have seen again and again how easily users can weaponize these platforms, including how the platforms’ focus on maximizing user engagement sometimes leads them to privilege false and incendiary content.”
“In reports including Surveillance Giants and Toxic Twitter, Amnesty International has documented the ways in which social media companies maximize user engagement and fail to protect users’ human rights. We are extremely concerned that companies remain unprepared to address possible incitement to violence, given heightened tensions and user engagement following the election.”
Social media platforms’ focus must include potential incitement from public officials and others in positions of influence, including the President, given his history of incendiary comments. The Rabat Plan of Action, developed by the UN Office of the High Commissioner for Human Rights to balance freedom of expression against incitement to hatred, highlights the importance of considering “the speaker’s position or status in society” when determining whether speech constitutes incitement to violence.
To that end, Amnesty International calls on all social media platforms to proactively enforce their existing policies in line with their human rights responsibilities, including via transparent and impartial application of content moderation policies that comply with international human rights law.