By Agencies
Facebook is rating users on their trustworthiness as part of its effort to combat misinformation, a company manager revealed to the Washington Post.
Tessa Lyons, Facebook’s product manager in charge of the news feed, told the American daily newspaper that the social media giant had been working on the new ratings system for the past year.
It was developed after users started falsely reporting verified content as fake.
Social media companies have been under increased pressure to verify content posted on their platforms since it emerged that various actors had used them to wage misinformation campaigns that may have influenced elections around the world.
Like Twitter and YouTube, Facebook has since created mechanisms for users to flag hate speech or fake news, which is then forwarded to independent fact-checkers tasked with verifying the information.
But it’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons told the Washington Post.
“One of the signals we use is how people interact with articles.
“For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true,” she added.
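Facebook has not published any formula, but the general mechanism Lyons describes, weighting a reporter's future flags by how often their past flags were confirmed by fact-checkers, can be sketched in a few lines. The `ReporterTrust` class and its Laplace-smoothed weighting below are hypothetical illustrations of that idea, not Facebook's actual system.

```python
# Illustrative sketch only: Facebook has not revealed its real metrics.
# Models the idea Lyons describes: a user whose past false-news flags
# were confirmed by fact-checkers gets more weight on future flags than
# one who flags articles indiscriminately.

from collections import defaultdict


class ReporterTrust:
    """Tracks a per-user trust weight based on fact-check outcomes."""

    def __init__(self):
        self.confirmed = defaultdict(int)  # flags later verified as false news
        self.total = defaultdict(int)      # all flags the user submitted

    def record_outcome(self, user_id: str, was_confirmed_false: bool) -> None:
        """Update a user's counts after a fact-checker rules on a flagged article."""
        self.total[user_id] += 1
        if was_confirmed_false:
            self.confirmed[user_id] += 1

    def weight(self, user_id: str) -> float:
        """Weight applied to the user's future flags (Laplace-smoothed accuracy)."""
        return (self.confirmed[user_id] + 1) / (self.total[user_id] + 2)


# A reporter with a confirmed track record outweighs an indiscriminate one.
trust = ReporterTrust()
trust.record_outcome("careful_user", was_confirmed_false=True)
trust.record_outcome("spammy_user", was_confirmed_false=False)
trust.record_outcome("spammy_user", was_confirmed_false=False)
print(trust.weight("careful_user"))  # ~0.67
print(trust.weight("spammy_user"))   # 0.25
```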
Facebook said it has not revealed the metrics it uses to rate users' reputations, in order to prevent bad actors from gaming the system.