Arshad Shaikh probes the accusations that social media giant Facebook spreads hate and does not do enough to stop its platform from being misused to disseminate Islamophobia and polarizing content.
American social media giant Facebook (FB) is no stranger to controversy. It has been targeted over user privacy, election manipulation, the spread of fake news, and copyright infringement. Facebook, a company that crossed a market cap of $1 trillion in June this year and claims more than a third of the entire global population as its 'monthly active users', was again in the news recently over accusations by whistleblower Frances Haugen. Her revelations, dubbed the 'Facebook Papers', made to the Securities and Exchange Commission and obtained by a consortium of news organizations, show, among many other things, that FB did little to clamp down on reported instances of hate content against minorities in India. According to a story in The New York Times ("In India, Facebook Grapples with an Amplified Version of Its Problems" by Sheera Frenkel and Davey Alba, dated October 23, 2021), Facebook did not maintain enough resources in India to tackle anti-Muslim posts (Islamophobia) and misinformation about a myriad of issues.
The leaked information reveals some extremely damaging facts about the internal workflows of FB and its policies when dealing with countries other than the United States. For example, in February, a pair of Facebook employees set up a dummy account of a 21-year-old woman from North India and began to document what the FB app was throwing into her timeline. After some general content, she was flooded with pro-Modi propaganda and Islamophobic posts. An internal memo accessed by the Washington Post called the dummy account an "integrity nightmare" and highlighted the stark disparity in the FB user experience between the US and India. The 'Facebook Papers' also show that FB was very much aware of the anomaly that left the platform susceptible to abuse by hatemongers and authoritarian regimes, and yet did little to address the problem.
It is reported that in 2020, FB spent 84% of the resources it allocated for tackling misinformation on the United States, even though that country accounts for only about 10% of its user base; the rest of the world got a measly 16%. So how exactly does the sharing of content work on the Facebook platform? Why is bad content shared more than good content? Why the disparity in tackling hate? Will Facebook do anything about it, and why has the Government of India been silent about this so far?
The Algorithm that cannot filter hate
The Facebook timeline is where you post your messages, images, and videos so that your friends get a glimpse of you and your life story after they land on your page. In addition, they can leave public messages (text and photos) for you, so in a way, their posts become part of your history.
A section called the Newsfeed is controlled by the Facebook algorithm, which is supposedly designed to select and surface the most relevant and engaging stories out of several thousand potential ones. Officially, what is thrown into your newsfeed depends on how much you engage with stories, i.e., how much you comment on, share, click, and spend time reading them. Four main factors shape your newsfeed: (1) Who posted the story: if you have engaged with the author before, Facebook assumes you will be interested in their posts. (2) How other people engaged with the post: the more others have engaged with a post, the more likely FB is to show it to you too. (3) What type of post it is: different people engage with and spend time on different kinds of posts; for example, some love watching videos while others prefer reading news stories. (4) When it was posted: the more recently a story was posted, the more likely Facebook is to show it to you. Although this scheme appears quite innocent and logical, it is creating havoc in the world.
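To make the scheme concrete, the four factors can be combined into a toy ranking score. The sketch below is purely illustrative: the field names, weights, and formula are assumptions for the sake of the example, not Facebook's actual algorithm.

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # (1) how much the viewer has engaged with this author before
    peer_engagement: int     # (2) likes, comments, and shares from other users
    type_weight: float       # (3) the viewer's preference for this post type (video, text, ...)
    age_hours: float         # (4) how long ago the post was published

def rank_score(post: Post) -> float:
    """Toy relevance score: affinity x popularity x type preference x recency decay."""
    recency = math.exp(-post.age_hours / 24.0)      # older posts decay in value
    popularity = math.log1p(post.peer_engagement)   # diminishing returns on raw counts
    return post.author_affinity * popularity * post.type_weight * recency

# Rank a candidate pool: the highest-scoring post appears first in the feed.
candidates = [
    Post(author_affinity=0.9, peer_engagement=500, type_weight=1.2, age_hours=2.0),
    Post(author_affinity=0.2, peer_engagement=10,  type_weight=0.8, age_hours=48.0),
]
feed = sorted(candidates, key=rank_score, reverse=True)
```

Note how every term rewards engagement in some form; nothing in such a score measures whether a post is truthful or civil, which is the weakness the rest of this article explores.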
Systemic Outrage Amplification
To appreciate this phenomenon, let us tune in to Tristan Harris, Co-founder and President of the Center for Humane Technology, as he testified before a United States Senate Subcommittee on Privacy, Technology, and the Law at a hearing titled 'Algorithms and Amplification: How Social Media Platforms' Design Choices Shape Our Discourse and Our Minds'. Harris testified: "A business model that preys on human attention, which means that we are worth more as human beings and citizens of this country when we are outraged, polarized, narcissistic and misinformed. It means the business model was successful at steering our attention using automation. There is a decentralized incentive for yellow journalism that wants to make each of us yellow journalists because we are more rewarded the more extreme things we say. We are raising entire generations of young people who will have come up under these exaggerated prejudices, division, mental health problems, and an inability to determine what's true. They walk around like a bag of cues and triggers that can be ignited. If this continues, we will see more shootings, more destabilization, more children with ADHD, more suicides and depression, and deficits cultivated and exploited by these platforms. We should aim for nothing less than a comprehensive shift to a humane, clean "Western digital infrastructure" worth wanting".
Taming the online Frankenstein
At the heart of Facebook's and other social media platforms' inability, or reluctance, to control polarizing posts and hate content on their feeds lies the debate about the 'inviolable right to free speech' and the extent to which that right can be curbed by society and the state through regulation and legislation. More specifically, the debate about algorithms and amplification is really about the business model of these platforms, which sell proof of consumption of content (engagement) to advertisers. The "Like, Share, Subscribe" culture established by social media gives precedence to content that is partisan, controversial, and polarizing over content that is balanced, clean, and accommodative. It hands unprecedented power to those who divide society rather than unite it, who prefer elimination over assimilation, and who believe in hurting rather than healing. Because the algorithm peddles their ideas on a larger scale, their content reaches a bigger audience; they are accorded more traction and prominence by the mainstream media and ultimately win more votes if they are in politics.

The correct approach would be to strip away the holy aura surrounding the right to free speech. Like any other human activity, speech must be subject to the same moral standards, laws, and protocols that govern other forms of human behavior. This online Frankenstein, feeding on hate and animosity, delivers revenue to the platform, which therefore cannot do away with the algorithm, and so the vicious cycle continues. As the IE editorial (26 October) said: "For impartial and reasonable regulation of the digital sphere, the political class, too, must be willing to sacrifice the quick gains it has reaped on social media, sometimes at the expense of the guiding principles of constitutional democracy".
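The amplification dynamic described above can be illustrated with a small simulation. Everything here is an assumption made for illustration: if divisive posts draw, on average, more engagement than balanced ones (as the article argues), then a feed that ranks purely by engagement will over-represent divisive posts at the top, even when they are only half the pool.

```python
import random

random.seed(42)

# Hypothetical pool: 50 "divisive" and 50 "balanced" posts.
# Assumed premise: divisive posts attract more engagement on average.
posts = (
    [{"divisive": True,  "engagement": random.gauss(5.0, 1.0)} for _ in range(50)]
  + [{"divisive": False, "engagement": random.gauss(2.0, 1.0)} for _ in range(50)]
)

# A feed that ranks purely by engagement, mirroring the ad-driven model.
top_ten = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:10]
divisive_share = sum(p["divisive"] for p in top_ten) / len(top_ten)

print(f"Divisive share of the whole pool: 50%")
print(f"Divisive share of the top 10:     {divisive_share:.0%}")
```

The point of the sketch is that no bad intent is needed anywhere in the pipeline: a neutral-looking "rank by engagement" rule, fed content with unequal engagement profiles, produces a skewed feed on its own.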
Facebook is accused of spreading hate
One of the biggest social media platforms, Facebook has been accused of not doing enough to stop hate from spreading on its platforms. At the heart of the problem lies the newsfeed algorithm, which pushes more polarizing content towards users than balanced content. FB itself acknowledges that it does not have the resources to block all hate content, and the resources it does have are skewed in favor of the American market, which makes up only about 10% of its user base. The algorithm is a slave to the social media revenue model, which derives ad revenue from engagement. The whole affair raises fundamental questions about free speech and the state's power to regulate our freedom.
(The author is a freelance journalist based in Pune, India).