American actor-writer Sacha Baron Cohen recently said in a scathing speech that had Facebook existed in the 1930s, the company would have permitted Hitler to run political ads on the “Jewish problem”. Cohen’s analogy isn’t implausible — authoritarian governments across the world are harnessing the power of Facebook (and its subsidiaries such as Instagram and WhatsApp) akin to the Nazis’ use of radio for propaganda.
Facebook’s reluctance to include politicians in its third-party fact-checking program allows disinformation to thrive.
Recently, Bharatiya Janata Party (BJP) MLA Himanta Biswa Sarma shared a video of All India United Democratic Front (AIUDF) supporters sloganeering in support of party leader Maulana Badruddin Ajmal at Silchar airport. Sarma falsely claimed that chants of ‘Pakistan zindabad’ were raised. The misinformation was debunked by Alt News and Facebook fact-checking partners Boomlive and The Quint. After Boomlive’s story, Facebook added a ‘false information’ label to Sarma’s post; the label was later removed.
A Facebook spokesperson told The Indian Express that the tag was erroneously put on Sarma’s post. Queries sent by Alt News to the social networking site drew a similar response: content shared by politicians is not eligible for fact-checking.
But we noticed a discrepancy in Facebook’s policies.
Differential treatment of political leaders
During farmer protests in September this year, a 2017 video of Prime Minister Modi’s mock funeral taken out in Tamil Nadu was misidentified as a procession carried out by the protesting farmers. In 2018, the same clip was also shared as a procession held in Pakistan. This was debunked by Boomlive, The Quint and India Today (also a third-party fact-checker with Facebook). Many of the links from two years ago (1, 2, 3) do not carry the ‘false information’ warning, but quite a few posts from this year carry an ‘altered video’ label, including one by Congress leader Udit Raj.
Facebook defines politicians as “candidates running for office, current officeholders – and, by extension, many of their cabinet appointees – along with political parties and their leaders.” Udit Raj should be identified as a politician per this definition, yet his post carries a warning. Meanwhile, Facebook removed the red flag from Himanta Biswa Sarma’s post citing this very policy.
In fact, Facebook recently added a ‘false information’ label to a YouTube video shared by Donald Trump who still holds the office of the President of the United States.
So @Facebook fact checks kick in only AFTER you pressure Facebook comms people on Twitter.
— The Real Facebook Oversight Board (@FBoversight) November 19, 2020
Inconsistent standards of Facebook fact-checking
In the recent US presidential elections, warning labels were also applied to Trump’s premature claims of victory, though a BuzzFeed analysis revealed that the labels had no real impact. But would Facebook mirror similar steps during elections in other parts of the world? Unlikely.
Both Facebook and Twitter assured during a recent Senate Judiciary Committee hearing that the platforms have programs in place to “prevent misinformation aimed at discouraging people from voting” in the upcoming Georgia Senate seat races. While Twitter has globally banned political ads, Facebook placed a temporary ban on political advertising in the US. The ad ban started a week before the Presidential race and has now been extended for at least another month.
But Facebook’s policy is limited to the US. India too has elections lined up, including polls in West Bengal, a state nearly nine times as populous as Georgia. Yet Facebook has not announced any policies to combat the dissemination of false news in India.
The tech giant’s global policy is largely shaped by Western politics, especially that of the US. The company has failed to control the spread of politically-motivated disinformation campaigns in parts of the developing world — whether it was hate speech pushing Ethiopia to the brink of genocide or posts inciting ethnic violence in Myanmar.
Facebook relies on third-party fact-checkers to tackle the proliferation of misinformation. But fact-checkers are barred from reviewing political speech. “I don’t think Facebook or internet platforms should be arbitrators of truth. That is a dangerous line to get down to in terms of what is true and what isn’t. And I think political speech is one of the most sensitive parts in a democracy. People should be able to see what politicians say and there is a ton of scrutiny already. Political speech is the most scrutinised speech by the media and I think that will continue,” said Facebook CEO Mark Zuckerberg in an interview with CNBC.
Political speech may be heavily scrutinised in the US, but that is far from the truth in India. The recently concluded US presidential race was live fact-checked by most channels. Donald Trump may be a disinformation machine, but he has held several press conferences, whereas PM Modi has held none in the past six years. Most speeches made by the Prime Minister also reach viewers unchecked.
Two years ago, ABP News managing editor Milind Khandekar and news anchors Punya Prasun Bajpai and Abhisar Sharma had to tender resignations amid reports of pressure to keep PM Modi’s name out of critical pieces. Their ouster was preceded by Ramdev’s Patanjali withdrawing advertisements from the channel.
India ranks 142 out of 180 countries on the 2020 global press freedom index. While Facebook maintains a blanket policy for every country under the garb of protecting “free speech”, journalists in India are imprisoned even for tweets.
Facebook imagines that content put out by politicians is the “most scrutinised”, but Sarma’s tweet carrying divisive misinformation shows otherwise. The tweet was reported at face value by several news organisations until debunked by independent fact-checkers.
Alt News also checked if Facebook added the ‘false information’ label on the video shared by non-political pages or individuals. “We apply strong warning labels and notifications on fact-checked content so people can see what our partners have concluded and decide for themselves what to read, trust and share,” according to Facebook’s third-party fact-checking program.
Facebook has developed technologies that detect identical and nearly-identical photos and videos. These can be adapted to semi-automate the fact-checking process by auto-marking images and videos on the platform already pronounced misleading by fact-checkers. The third-party outlets can also be aided through efficient interfaces that allow them to mass-tag content.
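Facebook has not disclosed how its matching works, but the general technique can be illustrated with a toy perceptual-hashing sketch. The code below is a hypothetical example, not Facebook’s actual system: it computes a “difference hash” over grayscale pixel grids (represented here as plain Python lists rather than decoded image files) and compares hashes by Hamming distance, so a near-identical copy of an already-debunked image would land at or near the original’s hash.

```python
# Hypothetical sketch of near-duplicate image detection via a
# "difference hash" (dHash). This illustrates the general technique,
# not Facebook's actual implementation. Images are represented as
# plain 2D lists of grayscale values; a real system would decode and
# resize actual image files first.

def dhash(pixels):
    """Hash a grayscale grid: one bit per horizontally adjacent pair,
    set when brightness increases left to right."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# A toy 8x9 "frame" and a uniformly brightened copy of it.
frame_a = [[(r * 37 + c * 61) % 256 for c in range(9)] for r in range(8)]
frame_b = [[v + 2 for v in row] for row in frame_a]

# Uniform brightening preserves the left/right ordering of pixels,
# so the two hashes are identical: a near-duplicate match.
print(hamming_distance(dhash(frame_a), dhash(frame_b)))  # 0

# A genuinely different frame (mirrored) produces a distant hash.
frame_c = [list(reversed(row)) for row in frame_a]
print(hamming_distance(dhash(frame_a), dhash(frame_c)) > 0)  # True
```

A production pipeline would use more robust hashes over real decoded images and index them, so that any new upload within a small Hamming distance of a fact-checked item could be auto-labelled or queued for fact-checkers to mass-tag.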
Multiple attempts to ascertain whether Facebook’s policy of “rating” misinformation is manual and solely dependent on its fact-checking partners went unanswered.
So we did some digging of our own.
Last month, a video of a woman vandalising her husband’s milk parlour after she discovered he had married another woman was widely shared with a communal spin. The couple hailed from the Hindu community, but false messages alleged that a Muslim man had ‘trapped’ a Hindu woman. Several fact-checking outlets including Alt News and Facebook partner Boomlive had debunked the video. Boomlive’s story carries a link by one Jaggu Patel, who had shared the video with the false message. His post carries a ‘false information’ warning.
However, other links carrying the exact video with identical misinformation have not been marked.
A similar pattern was observed for all debunked content.
Last year, a video of a man beaten by a group of people for attacking a woman was shared with a false ‘love jihad’ angle. After Boomlive’s fact-check report, most links from 2019 are no longer available, barring a few that don’t carry a warning. The same video was revived last month with similar misinformation, worded differently. The Quint also published a fact-check article on the clip. Some of the links carrying the video have been labelled by Facebook while others have not. But most links have been taken down from the platform.
The above examples show that Facebook does not seem to follow a transparent and uniform fact-checking policy. The social media giant says that it demotes content fact-checked by its partners, which was borne out in some cases where viral links were no longer accessible. However, this wasn’t the case everywhere. Furthermore, many debunked videos and images resurface on the platform with altered claims, yet the posts do not carry warnings based on previous fact-checks. Does Facebook require each link to be flagged by its fact-checking partner, or is the process automated? This remains unclear. In the case of politicians too, we discovered that Congress leader Udit Raj’s post was flagged but BJP MLA Himanta Biswa Sarma’s was not.
In an earlier piece, an internal analysis by Alt News had revealed that most Facebook fact-checking partners in India, barring Boomlive and The Quint, have scanty coverage of BJP-led misinformation.
Misinformation in India is weaponised and disseminated in an organised manner in order to shape political narratives. We have repeatedly seen members of political parties including senior leaders who enjoy ministerial posts share false information to their massive supporter base. Unlike several western developed countries, India does not have the privilege of robust mainstream media that questions those in power. Facebook’s refusal to fact-check political speech and content posted by politicians is a poor policy decision based on a myopic understanding of how politics and media interface in many parts of the world.