Facebook’s policies are failing to tackle fake and misleading content
Social media giant Facebook finds itself in the headlines time and again. The latest revelation: the company spent the equivalent of 319 years labeling or removing false and misleading content posted in the U.S. last year. According to internal documents obtained by the Wall Street Journal, employees at the company have raised concerns about the spread of harmful and misleading information on its platforms.
The documents detail how the company’s employees and contractors spent more than 3.2 million hours in 2020 finding, flagging, and removing false or misleading information. Roughly 2.8 million of those hours, or approximately 319 years (2.8 million hours divided by the 8,760 hours in a year), went toward misleading content posted in the U.S. According to the Journal, Facebook spent three times as many hours on “brand safety”, or “making sure ads don’t appear next to content that advertisers might find offensive”, as it did on misinformation.
The documents further detail Facebook’s lapses on issues such as gang violence, human trafficking, and drug cartels, as well as the spread of violent and often deceptive information.
Misinformation drives more engagement
Many social media giants have proved incapable of fighting misinformation and fake news, even as they test new features every year to slow its spread. According to The Washington Post, researchers at New York University and the Université Grenoble Alpes in France conducted a study analyzing user behavior on Facebook during the 2020 U.S. presidential election. They found that between August 2020 and January 2021, sources known for publishing misinformation received six times as many “likes, shares, and interactions” on Facebook as trustworthy news sources, such as CNN or the World Health Organisation (WHO).
Facebook has taken significant steps against misinformation on its platform since the 2016 U.S. presidential election, when Russia used the platform to boost Donald Trump’s chances of winning. The company also came under scrutiny for allowing misinformation to circulate during Britain’s 2016 Brexit referendum and the 2020 U.S. election. Despite these efforts, various reports have demonstrated that misleading content still spreads widely on Facebook.
Facebook’s drive to eliminate fake news is falling apart
For years, social media platforms have moved their content strategies beyond the binary choice of keeping or removing content. In 2019, Facebook introduced updates to its “Remove, Reduce and Inform” content strategy to fight fake news, and added a new section to its community standards to track the monthly updates made under the strategy.
However, the social media giant has still failed to keep misinformation in check. Whether the topic is Covid-19, climate change, or something else, Facebook has been ineffective at enforcing its policies against fake content.
Thumbs down from governments
In the past, countries have banned or temporarily suspended social media platforms for instigating social unrest. More recently, governments have taken decisive action against social media platforms for failing to remove misleading or illegal content. Last week, a Russian court fined Facebook and Twitter for not deleting content that Moscow considers illegal. The Tagansky district court handed Facebook five fines totaling US$287,850, while Twitter received two fines totaling US$74,409.80. In 2019, Germany imposed a €2 million (US$2.3 million) fine on Facebook for violating a law designed to combat online hate speech. According to German authorities, Facebook had provided “incomplete” information in mandatory transparency reports about illegal content, such as hate speech.
Similarly, Brazilian President Jair Bolsonaro temporarily banned social media companies from removing certain posts and user accounts. The ban was implemented to prevent the further spread of misinformation about the country’s upcoming presidential election.
Header image courtesy of Unsplash