Meta’s removal of third-party fact checkers from its platforms will allow “racist misinformation” to spread, MPs have warned.
The social media giant denied that it was ending fact-checking on its platforms, saying it was instead “moving to a system which is more scalable” following “feedback” from its users.
In January, Meta boss Mark Zuckerberg announced the change, saying at the time that fact-checkers were too “politically biased” and were having an impact on “free expression” – a move seen by many as an attempt to get closer to the pro-free speech stance of the incoming Donald Trump administration.
During an appearance before MPs on the Science, Innovation and Technology Select Committee, Chris Yiu, Meta’s director of public policy for northern Europe, said the decision to replace third-party fact checkers with a community notes system in the US, and to reduce content moderation on some topics, was based on feedback that debate on sensitive issues was being “suppressed”.
The committee noted that Meta had previously submitted evidence to it that third-party fact-checking was a key part of the firm’s approach to combating misinformation.
Mr Yiu was asked by MPs on the committee how this fitted with leaked guidelines for the changed content moderation approach and community notes system, which showed the site would now allow a range of transphobic, racist and antisemitic statements – including “trans people don’t exist”, “immigrants are filthy” and “Jews are greedier than Christians” – which were labelled “racist misinformation” by the committee.
In response, Mr Yiu said: “I accept that some of those are difficult for the communities that are affected to hear.
“We have received feedback over the months and years that in some cases, some areas of debate were being suppressed too much on our platform, and that some conversations, whilst challenging, should have a space to be discussed.
“We retain clear rules and community standards prohibiting content which is designed to incite violence.”
Responding to Mr Yiu, Emily Darlington MP asked whether Meta believed there was a “genuine debate” around the example statements on trans people, Jewish people and immigrants.
“We’ve had feedback that topics which have become part of mainstream discourse – conversations around some of the issues that happen among members of the public, that happen in newspapers, were being suppressed on our platforms in a way which was too aggressive,” Mr Yiu said.
“Where people make statements that violate our policies, they will be actioned equally. We’ve had the feedback that there are topics where the view was that there should be more room for debate and conversation.”
Committee chair Chi Onwurah MP warned Meta that by allowing such statements to appear on the platform, it was in fact “amplifying” such content.
“It’s not the same as a conversation in somebody’s home,” she said.
“This is something that people are going to see in their feeds.”
Chris Morris, chief executive of independent fact-checking charity Full Fact, said Meta was “dismantling” processes that lead users to “good information” online.
“Meta’s claim that it is not ending fact checking in the US isn’t credible and lacks context,” he said.
“Replacing experts trained to establish factual accuracy with a community notes model designed to reach consensus risks skewing information circulating on Meta platforms towards what some users think rather than what the evidence says.
“Community notes have a role to play in improving our online conversation, but it is not the same as independent fact checking.”
Elsewhere during the session, where representatives from TikTok and Elon Musk’s X were also giving evidence, Ms Darlington clashed with X executive Wilfredo Fernandez over the firm’s content policies.
She read out a violent threat made to her in the replies to her own post on X, as well as a string of racist, antisemitic, homophobic and violent posts from the same account that were also posted to the platform.
“Is this acceptable under the guise of free speech on X these days?” she asked Mr Fernandez.
The X executive said the comments were “abhorrent” and that he would ensure “our teams take a look”, but when pressed on whether the account in question would be removed he added that he “can’t make any assurances”.
“I can assure you that our teams will review under our terms of service, and I’m sorry that you had to experience that,” he said.
Mr Fernandez also refused to be drawn on whether the sharing of videos of Mr Musk’s recent gesture at a Donald Trump rally – widely interpreted as a Nazi salute – was appropriate and made X a safer platform.
“I think there’s a lot of discussion around that, and that’s why the platform allows for people to debate it and discuss it,” he said.