Study Shows Misinformation Gets More Facebook Engagement Than Real News

It’s the age of information, or should we say misinformation. Social media giants regularly come under fire for spreading false information in the name of news. A new peer-reviewed study from researchers at New York University and the Université Grenoble Alpes in France shows that misinformation received six times more engagement on Facebook than real news.

The study, which looked at posts from more than 2,500 news publishers on Facebook between August 2020 and January 2021, found that pages that regularly posted misinformation got more likes, shares, and comments. Unsurprisingly, engagement was elevated across the political spectrum, with right-wing publishers having a much higher propensity to share misleading news than publishers in other political categories.

However, a Facebook spokesperson, in an attempt to defend the company, pointed out that the study looks at engagement rather than “reach,” a term used by the company to describe how many people see a piece of content on Facebook regardless of whether they interact with it.

In August, Facebook cut off a group of researchers’ access to data that would have helped them understand the problem of misinformation. The company said that continuing to give third-party researchers access to the data could violate a settlement with the Federal Trade Commission that it entered into following the Cambridge Analytica scandal.

Also in August, the company released a “transparency report” that laid out the most-viewed posts from April to June. Just days later, The New York Times revealed that Facebook had scrapped plans to release a similar report for the first quarter because the most-viewed post was an article that wrongly linked the coronavirus vaccine to a Florida doctor’s death.

A recent survey also showed that Facebook news consumers were less likely to be vaccinated against the coronavirus than any other type of news consumer.

Twitter Launches New Process To Flag False Information

On Tuesday, Twitter announced that it will allow select users in the US, South Korea, and Australia to flag tweets for misinformation. It is safe to assume that the feature will eventually be expanded if the test proves successful.

The test essentially adds another category under the ‘Report Tweet’ section. Click the three dots in the top right corner of a tweet, select “Report Tweet,” and the option to flag it as “misleading” appears right between the existing options “It’s abusive or harmful” and “It expresses intentions of self-harm or suicide.”

“We’re testing a feature for you to report Tweets that seem misleading—as you see them,” Twitter explained. “We’re assessing if this is an effective approach so we’re starting small.”

The newly added feature will allow users to report any tweet that they think is misleading or contains half-truths. However, this doesn’t necessarily mean you’ll see anything actually happen, a fact that Twitter fully acknowledges.

“Although we may not take action on this report or respond to you directly, we will use this report to develop new ways to reduce misleading info,” Twitter tells you when you go to report a misleading tweet. “This could include limiting its visibility, providing additional context, and creating new policies.”

You can flag the tweet in question under ‘Politics’, ‘Health’, or ‘Something else’, giving users more capacity to address misinformation on the platform. This might also open a new avenue for misuse, or for reporting tweets simply because they run counter to your opinion.

The idea, therefore, is not to stamp out every specific tweet that a user reports. But if, say, 100 or 1,000 people flag the same tweet, it is likely to come under Twitter’s review. That will help Twitter take action against such tweets and refine the reporting process.
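Twitter hasn’t described how it aggregates these reports, but a minimal sketch of the kind of threshold-based triage imagined above might look like the following Python snippet (the function name, data structures, and threshold value are all hypothetical assumptions, not Twitter’s actual system):

```python
from collections import Counter

# Hypothetical sketch of crowd-sourced flagging: each "misleading" report
# increments a per-tweet counter, and a tweet is queued for human review
# once enough independent reports accumulate. The threshold is an assumed
# value; Twitter has not published how (or whether) it does this.
REVIEW_THRESHOLD = 100

report_counts = Counter()   # tweet_id -> number of "misleading" reports
review_queue = []           # tweet_ids awaiting human review

def report_misleading(tweet_id):
    """Record one user report; queue the tweet once it crosses the threshold."""
    report_counts[tweet_id] += 1
    if report_counts[tweet_id] == REVIEW_THRESHOLD:
        review_queue.append(tweet_id)  # enqueued exactly once

# Simulate 100 users flagging the same tweet.
for _ in range(100):
    report_misleading("1234567890")

print(review_queue)  # -> ['1234567890']
```

The point of a threshold like this is that no single report triggers action; only a volume of independent reports surfaces a tweet for review, which is consistent with Twitter’s statement that it “may not take action on this report.”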

Companies like Facebook and Twitter are consistently working on improving their algorithms to avoid amplifying false information.
