This is proving to be a monumental week on the tech and politics beat. On Tuesday, Meta announced that it would be ending its third-party fact-checking program and cutting down its rules barring hate speech in a move clearly targeted at appeasing one person—incoming president Donald Trump.
The Supreme Court will also take up TikTok’s lawsuit against the US government and its attempts to ban the app nationwide. We are now less than two weeks out from the deadline for a sale or an extension, so the court doesn’t have a lot of time to save the app—if that’s even what it ends up doing.
This week’s events will radically alter the future of the internet in the US, and probably not for the better.
Let’s talk about it.
This is an edition of the WIRED Politics Lab newsletter. Read previous newsletters here.
Reel-ing Into a Second Trump Administration

There’s a reason why mean-spirited TikTok users write “post this on IG Reels” in comment sections on the platform.
Often, I’ll come across it as a comment on a video of a teenager on TikTok doing something that could easily be considered embarrassing, like singing out of key or obsessing over something others find cringe. Even more upsettingly, you’ll find that comment, word for word, on videos from people with disabilities simply existing. These commenters say this because they believe it’s easier to be nasty to people on Instagram and other Meta platforms. The subtext is that if these users posted to Reels instead of TikTok, they’d receive the harassment the commenters believe they deserve.
And yet, it’s TikTok staring down the barrel of a nationwide ban this week. Meta’s Mark Zuckerberg, on the other hand, has decided to make his platforms more dangerous to appease the incoming president.
As you’ve likely already heard, Meta announced on Tuesday that it would be ending its third-party fact-checking program, replacing it with X-style Community Notes, and “restoring free expression” to its platforms. To accomplish the latter part, the company will relocate its trust and safety operation from California to Texas, purportedly to avoid liberal bias (but not conservative bias, I guess?) and focus its moderation filters on illegal content like terrorism and child sexual abuse material rather than what it calls “lower-severity violations.”
All of this comes with an update to Meta’s Community Guidelines, including its Hateful Conduct policy, that essentially allows users to make blatantly homophobic, transphobic, sexist, and racist posts without consequences, as my colleague Kate Knibbs reported this week. For Platformer, Casey Newton noted that, lost amongst other changes, Meta removed a sentence from its guidelines “explaining that hateful speech can ‘promote offline violence.’” Doing so immediately following the anniversary of January 6 is truly something to behold!
Why destroy protections for Meta’s most vulnerable users? In a video statement on Tuesday, Zuckerberg explained that the policies were “out of touch with mainstream discourse” and that “recent elections also feel like a cultural tipping point towards once again prioritizing speech.” (Zuckerberg, no one’s idea of a political theorist, didn’t really explain why fact-checking, itself the sort of speech that free-speech activists have long held is the appropriate reaction to bad speech, isn’t worth prioritizing, nor did he explain what he has against the many forms of speech that Meta will still suppress. Free expression, it seems, is identical with whatever Meta happens not to be banning at a given moment.)
Both Meta’s and TikTok’s moderation systems are far from perfect and consistently make mistakes. Earlier this week, Taylor Lorenz reported that Meta restricted queer content and hashtags as “sensitive content,” including hashtags like #trans, #lesbianpride, and #bisexualpride. As recently as October, TikTok laid off hundreds of content moderation workers, replacing them with AI.
But it’s Instagram, and especially its Reels product, that has a reputation for harassment. With his decision to rescind policies barring hateful speech, Zuckerberg has made clear that this reputation is a price worth paying for the possibility of political clout come Inauguration Day.
There’s the possibility that other platforms could follow suit. Elon Musk set the precedent of rolling back trust and safety when he bought X two years ago. After Meta’s announcement on Tuesday, YouTube declined to comment when asked by The Wall Street Journal whether it would make similar fact-checking and policy changes, which certainly seems to leave the door open.
It appears, though, that TikTok’s parent company is still interested in fact-checking and moderation. Speaking with CNN, one Meta fact-checking partner said its operation will continue, with funding coming from, among others, Bytedance.
For US users, this only really matters as long as TikTok exists within the US. Tomorrow, the Supreme Court will hear oral arguments in the case dealing with the government’s attempts to ban the popular app nationwide. If SCOTUS doesn’t save TikTok by January 19, or some magical deal with an American owner isn’t consummated out of the ether, the app—which many users, who have speech rights of their own, see as a safer space than its alternative—will be gone.
The Chatroom

X has seen at least two mass exoduses since Elon Musk took over the platform in 2022. The first occurred soon after Musk closed the deal, and the second came not long after the most recent US election was called for Trump.
In light of this week’s news, are you planning to abandon Meta platforms? Do you no longer feel safe on Facebook or Instagram? I’d love to hear about how your social media habits are changing.
Share your thoughts in the comments below, or send them to [email protected].
WIRED Reads

Meta Follows Elon Musk’s Lead, Moves Staffers to Billionaire-Friendly Texas: In his Tuesday video statement, Zuckerberg said that Meta will move whatever’s left of its trust and safety teams to Texas. The decision is meant to make them appear less politically biased, but really just makes the company look more MAGA.

Meta Now Lets Users Say Gay and Trans People Have ‘Mental Illness’: When Meta upended its fact-checking program, it also rewrote some of its policies on what users are allowed to say on the platform. Now, users can accuse gay and trans people of being mentally ill due to their identities.

Meta’s Fact-Checking Partners Say They Were ‘Blindsided’ by Decision to Axe Them: Many of Meta’s fact-checking partners rely on the company’s funding to stay afloat. This week, they had the rug pulled out from under them when Meta announced its fact-checking changes without giving them a heads-up.

Want more? Subscribe now for unlimited access to WIRED.
What Else We’re Reading

🔗 Students Charged in ‘To Catch a Predator’ TikTok Scheme: A group of Massachusetts college students have been charged with kidnapping and conspiracy after coordinating a To Catch a Predator–like “sting” operation on campus and posting it to TikTok. (New York Times)
🔗 Facebook Deletes Internal Employee Criticism of New Board Member Dana White: Meta is removing employee criticism over its hiring of UFC CEO Dana White from an internal company messaging system. Some employees commented about a 2023 video of White slapping his wife at a bar on New Year’s Eve. (404 Media)
🔗 Heritage Foundation Plans to ‘Identify and Target’ Wikipedia Editors: The Heritage Foundation, the Project 2025 publisher, recently told investors that it plans to use facial recognition software and hacked material dumps to identify Wikipedia editors. (Forward)
The Download

On Friday, I’ll be joining a handful of my WIRED colleagues in covering oral arguments in the TikTok v. US case. Our live blog of the day’s events will be up on our site before things get started at 10 am ET. C-SPAN is streaming the arguments live here.
Also, this is a TikTok of the chillest January 6 get-together.
That’s it for today—thanks again for subscribing. You can get in touch with me via email, Instagram, X, and Signal at makenakelly.32.