Facebook Explains Three-Part Misinformation Strategy in the Face of Credibility Crisis


Facebook faces a credibility crisis as its platform is used as a vector for misinformation and manipulation of the public, and it has been linked to violence in many parts of the world. On Thursday, the social network held a briefing to explain how it is trying to limit misinformation on its platform, detailing its efforts to curb the circulation of false information and fake news. Facebook emphasised that it uses a three-part strategy to address misinformation, called "Remove, Reduce, and Inform". The latest attempt by Facebook comes at a time when its credibility is at an all-time low. The US company has just been blocked in Myanmar amid misinformation and fake news that could lead to violence over the military coup, and the platform is also being abused by individuals spreading false information around the ongoing farmers' protest in India.

During the virtual briefing, Facebook highlighted its partnership with over 80 third-party fact checkers that are certified by the nonpartisan International Fact-Checking Network (IFCN) and cover 60 languages around the world. These include prominent names such as AFP, BOOM, and Fact Crescendo, which operate across much of the Asia Pacific region as well as in other countries worldwide.

Apart from its fact-checking partners, Facebook has its own similarity detection system, which it says rates more content than third-party fact checkers can review.

“When we talk about misinformation here at Facebook, we are talking about false information that is often shared unintentionally,” said Alice Budisatrijo, Product Policy Manager for Misinformation at Facebook.

While Facebook doesn't have a policy requiring people to share only content that is genuine and true, it does have policies to remove misinformation. The company claimed that these policies enable it to remove content that could lead to real-world violence or imminent harm; prohibit manipulated media or deepfakes; ban militarised social movements such as the Kenosha Guard in the US and violence-inducing conspiracy networks including QAnon; and restrict content linked to voter suppression. However, there have been several instances where Facebook was found to have allowed misinformation and fake news — despite clear and detailed policies.

A report based on research by the German Marshall Fund Digital noted that misinformation on Facebook was more popular in 2020 than in 2016.

Budisatrijo admitted that it's impossible for Facebook to ensure that everything on the platform is true.

“There can be different degrees of truth, or people can have different opinions about what is true or false,” she said. “However, we take our responsibility seriously in this space. We are a service for more than two billion people around the world. So, we know how essential it is for people to have access to credible information, and for us to remove harmful content.”

Facebook cited its removal of some COVID-19 misinformation in the recent past as an example of its efforts to curb fake news. It also implemented new advertising policies in relation to the pandemic, and began removing ad targeting options such as "vaccine controversies".

Using inputs from fact checkers, Facebook also says it reduces the distribution of content deemed false, altered, or partly false on its platform and Instagram, as well as the reach of pages and domains that spread fake news. Similarly, the platform claimed that it reduced the distribution of spammy and sensational content, which often coincides with misinformation.

Aside from removing and reducing content that spreads misinformation, Facebook said that it informs users about false content through labels across its platform and Instagram.

“Heads of states or anyone, nobody is exempted from the Facebook community standards,” said Budisatrijo while answering a question on whether it avoids labelling posts by politicians across the globe. “So, if they post misinformation that could be harmful, hate speech, bullying, or any other types of harmful content that violates the community standards, we remove them regardless of who they are.”

Facebook and other social media platforms have often been accused of bias in moderating user content. However, Budisatrijo stated that the company's approach of working with IFCN-certified fact checkers who have at least a six-month track record is aimed at minimising bias while reducing misinformation.

