The news that Meta/Facebook has ended a nearly 10-year-old fact checking program has many brands & leaders asking how they should respond, but these moves are not happening in a vacuum…
Ensuring accurate information, and fact checking specifically, is only one element in a larger moderation strategy...there were probably people inside Meta/Facebook who supported that work, but they've been gone for a long time.
For example, Check Your Fact, an official Meta/Facebook fact checker, is owned by The Daily Caller's Tucker Carlson and Neil Patel. The Daily Caller is funded by and has close ties to billionaires like former VP Cheney and the Koch Brothers, who profit from dis/misinformation about, among other things, Middle East policy, the military, and delaying or denying climate tech and science.
If you look at how Big Tech – including but not limited to Meta/Facebook, OpenAI, Google, Microsoft, and Amazon – is accelerating AI-generated content and fake profiles, these companies see themselves as world builders (absent fiduciary responsibility, but that's another story altogether) and seem less and less focused on creating value for & centering customers and users.
Zuckerberg and his board have been more focused on controlling narratives that drive extreme profits for themselves & their close confidants. There probably is a way for the company to responsibly use community notes to replace fact checking, but they clearly aren't positioned for or interested in doing so.
Basically (and I made this joke ~3 years ago, hoping it wouldn't come to pass), billionaire titans like Elon Musk and Mark Zuckerberg find having to listen to users/customers irritating and would probably prefer creating bots to replace us, aka "bots selling to bots managed by bots" (algorithms, AI tools, etc.).
If this sounds dystopian, that’s because it is. Both Cory Doctorow's work on the "enshittification" cycle and Meta/Facebook's admission last week that they are generating AI images of users without permission and creating fake profiles that compete with actual users (people) indicate this has been happening for a while.
All of the above is important context for answering "How should communicators, organizations, and users prepare for this shift?"
I worked in news/social media/brand marketing for over a decade and now work in disinformation monitoring & research, and I don't see any easy answers. I believe in harm reduction, and I think brands, communicators, and journalists can and should practice it, and also make up their own minds about what works for them...but jumping from one billionaire-owned platform to another (or to a model that could easily be captured by one) creates a lot of risk over short-, medium-, and long-term horizons.
As long as brands want to play on platforms that are, essentially, modeled on plantations, praising or ignoring the owner's decisions, no matter how harmful or poorly executed, is the cost of doing business. And, as Meta/Facebook, TikTok, and Google have shown, that cost can go up or down at any time without any real reason.
The alternative is to move to owned channels, decentralized platforms/apps, and independent news sources. This is less sexy and, as outlined in a great thread on Mastodon, requires some imagination.
I'm happy to do so and am already investing my energy/time there, but I recognize many people are still enthralled with the IPO Captains of Industry social media/news model...I hope that will change.