Meta will end fact-checking of social media posts; URI expert says it’s not a surprise

KINGSTON, R.I. – Jan. 14, 2025 – Meta, the social networking giant that owns Facebook, Instagram and Threads, has announced that it will end its fact-checking program to moderate user content and replace it with a “community notes” model.

In a video on Jan. 7, Meta CEO Mark Zuckerberg said the changes would reduce mistakes caused by the system the company created to moderate its platforms, while simplifying moderation policies and restoring “free expression on our platforms.” The move will replace Meta’s use of third-party fact-checkers with a community-based system, similar to that used by X, formerly Twitter, in which users can add notes to posts that may be misleading or inaccurate.

The changes affect Facebook and Instagram, two of the largest social media platforms in the world with billions of users, along with Threads.

The move includes changes in Meta’s moderation policies for political topics, including those that reduced the amount of political content. It also will eliminate some content policies on such volatile issues as immigration and gender. Zuckerberg said the changes would mean the company would “catch less bad stuff” but reduce the number of “innocent people’s posts and accounts” that are taken down.

Renee Hobbs, URI professor of communication studies and founder and director of the Media Education Lab. (URI Photo)

He characterized the move as Meta returning to “our roots around free expression” and noted a shift in the country’s political and social landscape evidenced by the election of former President Donald Trump.

Rhody Today asked URI professor Renee Hobbs, an internationally recognized authority on media literacy education, for her views on Meta’s announcement. She is the founder and director of the Media Education Lab, which aims to improve the quality of media literacy education through research and community service, and is the founding co-editor of the Journal of Media Literacy Education, an open-access, peer-reviewed journal for the global media education community.

Meta had initially put together a rigorous fact-checking system that was launched on Facebook in 2016. Does this shift in policy come as a surprise? 

Not at all. Mark Zuckerberg has been signaling this shift for some time, and it’s clear that the company’s approach to content moderation was flawed and imperfect in many ways. 

Meta’s business model is centered on the attention economy, and the company has long understood how the activation of strong emotions drives revenue. Over the past 10 years, the company has had to adapt to the changing ways that social media has been used to inform, entertain, and persuade, as propaganda and disinformation became a growth industry with the rise of state actors in Russia and China, as well as homegrown conflict entrepreneurs, who found it quite profitable to stir up anger and hate.

Disinformation is now a game played for fun and profit by people all over the world—including some of our local neighbors right here in Rhode Island. 

What do you feel drove this decision? 

The presidency of Donald Trump is a major reason for the shift because Trump has signaled his willingness to enact regulatory vengeance against Big Tech for their support of Democratic politicians. If Congress decides to modify Section 230 of the Communications Decency Act, they could remove the “safe harbor” protections that tech platforms now have which limit legal action against them for conveying harmful or defamatory content.

That’s why the Facebook tagline on X.com now reads: “We believe people can do more together than alone and that each of us plays an important role in helping to create this safe and respectful community.”

What are the pros and cons for users and social media in general? 

Most people are unaware of how content is currently moderated on social media platforms or how their own behavior—in the form of likes, clicks, and shares—influences the content that is shown to them. Some users will notice no difference as content moderation is removed. But when people encounter content they do not wish to see, it may affect their use of the platform. For example, I stopped using X, formerly Twitter, after my feed began to include pornographic video clips, and I migrated to Bluesky, another social media platform, where content moderation is more robust.

Social media platforms are used by many people as their primary news source. Given this ability to influence so many, should social media platforms be held to a “higher standard” where verified truth and facts outweigh so-called free speech? 

Absolutely not. The framers of the Constitution would have been strongly opposed to any kind of hierarchy that set up “verified truths.” The Enlightenment thinkers recognized that no one individual or group has access to “Capital T Truth,” and that people actually need the widest possible marketplace of information—including truths, fictions, satires, opinions, and even false and ugly stuff—in order to engage in the practice of genuine discernment. This core value is at the heart of the university, too. Everyone is a seeker of truth. 

This kind of discernment, which we now call media literacy for citizenship, is something that must be done by citizens themselves—it can’t be outsourced. 

What has been the response by scholars who study social media? 

Some people think that citizens don’t have the motivation or skills to engage in discernment, and these people make astute observations about how vulnerable we are to demagogues and propagandists. Perhaps we enjoy disinformation for its entertainment value more than we respect information for its truth value. 

How does Meta’s removal of fact-checking affect the ability to find reliable sources on the internet?

People’s ability to engage in fact-checking is influenced now by artificial intelligence and algorithms. Social media platforms are a manifestation of the town square, and the right to free speech is important to the practice of democracy. For most of human history, when people acted outrageously in public, a community would turn on “the mutter brigade” and use public shaming to inspire people to cease their troublesome behavior. The manifestation of so-called “cancel culture” was an example of this. 

Meta is now counting on the public to “keep Facebook clean.” That’s possibly a good thing because, after all, it was indeed dangerous to have one or three big companies as the arbiters of truth. When everyone trusted The New York Times, back in the day, it might have been easier, but with the rise of political polarization, Meta no longer has much “backup support” from mainstream news organizations or neutral third-party fact-checkers whose work is widely respected by all. 

In media literacy we say, “All media messages are selective and incomplete—they all have a point of view.” This puts the burden of responsibility in the hands of consumers.
