In the final lead-up to the US presidential election, Elon Musk has thrown the full weight of his celebrity and his resources behind former president and Republican nominee Donald Trump. He has appeared with Trump on the campaign trail; pumped money into a pro-Trump PAC (which has, in turn, bought ads on the platform he owns); and made X a hotbed of right-wing conspiracy theories, some of which he has personally boosted, that many experts say are designed to undermine faith in the outcome of the elections.
But Musk’s behavior is also having another effect: It’s taking scrutiny off other tech leaders and companies, even as they cozy up to Trump or roll back policies that would protect the information ecosystem ahead of a major election.
“In a race to the bottom, Elon Musk paved the way for a new, toxic tech basement,” says Nora Benavidez, senior counsel at the nonprofit Free Press. “Yet, as long as other platforms aren’t quite as abysmal, they squeak by under cover of Twitter’s failures.”
Social media companies have largely replaced traditional media conglomerates as gateways to information. According to a Pew Research study, 54 percent of adults now get at least some of their news from social media, and that share is much higher for people under 50: 64 percent of those aged 30–49 and 78 percent of those aged 18–29 get news from social media.
“[Musk] is smart enough to understand that to control the narrative, you want to control the media,” says a former Twitter employee. “And social media is media.”
But these companies don’t face the same kinds of restrictions and responsibilities that traditional media do. Section 230 shields social media companies from legal liability for the content on their platforms, and content moderation is largely voluntary, except where the content itself is illegal (like child sexual abuse material). That latitude has consequences. A 2020 study from the Harvard Misinformation Review found that people who relied on social media for their news were more likely to believe misinformation about the Covid-19 pandemic. A different Pew Research study, also from 2020, found that Americans who relied on social media for their news had lower political knowledge than those who didn’t.
But for years, public pressure from government officials, civil society, and the media pushed tech companies to invest in teams and tools that could at least partially address hate speech and misinformation on their platforms, enough that they could say they were making a good-faith effort.
Musk’s purchase of Twitter signaled a change, according to six former trust and safety employees from Twitter and Meta.
When Musk took over Twitter in October 2022, he quickly fired more than 50 percent of the company’s workers, including almost all of the company’s trust and safety and policy staff—the people tasked with creating and enforcing the platform’s policies around things like hate speech, violent content, conspiracy theories, and mis- and disinformation. Since then, Meta, Google, Amazon, and Discord have all made cuts to trust and safety staff.
The cuts began shortly after Musk purged Twitter of its trust and safety teams. In November 2022, Meta laid off 11,000 employees, including many trust and safety workers. In January 2023, Google followed suit, axing 12,000 people. Earlier this year, Twitch, which is owned by Amazon, disbanded its Safety Advisory Council.
“I think that Elon really opened the floodgates,” says one former Meta employee. “So then other tech brands were like, ‘We can do that too, because we won’t be the black sheep for it.’”
Meta spokesperson Corey Chambliss tells WIRED that the company has “40,000 people globally working on safety and security—more than during the 2020 cycle, when we had a global team of 35,000 people working in this area,” though he did not address how many of those people are staff versus outsourced workers.
Musk’s sudden firings lowered the bar, so that “anybody else could come along and nicely fire their teams and give them severance and it was nicer. Better,” says a former Twitter employee who was fired by Musk.
After Musk fired the trust and safety staff, experts warned that this cut, coupled with Musk’s “free speech absolutism,” would allow toxic content to flood the platform and ultimately cause an exodus of users and advertisers, leading to Twitter’s eventual demise. Hate speech and misinformation did increase, and advertisers did pull their dollars. Last year, X fired members of what remained of its elections team. Around the same time, Musk posted on X, saying, “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone.”
But X is still alive and kicking.
Musk’s behavior, say the former employees, acted as cover for other platforms that saw trust and safety work as a burdensome cost. Teams focused on ad sales or user engagement drive growth and revenue for platforms; trust and safety teams, former employees say, do not. That makes them easy targets when companies tighten their belts.
“I think [layoffs] were something Mark [Zuckerberg] wanted to do for a long time,” says the former Meta employee. “And so, if Twitter can get away with having less good technology and less good infrastructure than other companies, and is still getting rid of thousands of people, which is proportionally way more than any amount laid off by Google or anyone else, then I think that that kind of empowered other companies too.”
Chambliss says that Musk’s decisions did not play a role in Meta’s layoffs, referring to a 2023 post about the company’s “year of efficiency.”
It’s not just staffing that has shifted since Musk took the helm at X. Google and Meta have made significant changes to how they handle political content and mis- and disinformation.
Last year, YouTube, which is owned by Google, announced that it would “stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US presidential elections.” Google spokesperson Elena Hernandez told WIRED, “There were no cuts to the Trust and Safety teams that work on elections. We continue to make significant investments in the people, policies, and systems that enable Google and YouTube to be a reliable source for election-related news and information.” Hernandez did not respond to questions about whether the company would be updating its policies around election fraud claims in anticipation of the US presidential elections.
Recent reporting from The New York Times found that lies about the election have since spread widely on the platform. In previous years, a story like that might have pressured YouTube to enforce or change its policies. But now, as conservative activist Christopher Rufo wrote on X, “in a post-Elon environment, YouTube’s response is: ‘The ability to openly debate political ideas, even those that are controversial, is an important value—especially in the midst of election season.’”
“I think the public antics of Musk are diverting attention away from other companies who continue to launch products or make policy changes that demand careful thinking and transparency,” says Sabhanaz Rashid Diya, founder of the Tech Global Institute, a think tank focused on tech policy and a former Meta employee.
Earlier this year, Meta announced that it would no longer recommend political content to users on Threads and Instagram, though what exactly counts as politics remains unclear. Last year, the company removed restrictions on ads claiming the 2020 election was stolen and on Covid-19 misinformation. In July it removed restrictions on Trump’s Facebook account, which has 35 million followers. And in August, less than three months ahead of the elections, it wound down CrowdTangle, a tool that allowed journalists and civil society to monitor content on Meta’s platforms. (After taking over X, Musk announced that he would charge $40,000 for access to the platform’s API.)
While X under Musk has presented a whole host of new issues for advocates and civil society, less attention has been paid to other, established platforms having many of the same problems they’ve had for years. Reporting from WIRED found that nearly four years after the January 6, 2021, insurrection at the Capitol, militia groups are still organizing on Facebook, with the platform even autogenerating pages for interested users. Meta’s and TikTok’s systems still can’t reliably detect ads containing election disinformation. Amazon’s Alexa told users the 2020 election was stolen.
“We do not have visibility on whether ad models in the lead-up to major elections have been changed, or whether researchers have been able to meaningfully engage with platforms and user metrics to study key information trends,” says Diya. “We still have unresolved questions about basic product or policy features that warrant continued scrutiny and should not be sidelined amidst specific individuals or one company monopolizing airtime.”
“Musk doesn’t take responsibility for anything on his platform, so it makes it a lot easier for other platforms to do the same,” says Alexandra Pardal, CEO of Digital Action, a nonprofit advocacy group focused on human rights and technology. “He’s changed norms about what’s acceptable and not, about what a responsible social media platform looks like. Musk has successfully masked [these changes] by making this the Elon Show.”
In a recent interview with the Acquired podcast, Meta CEO Mark Zuckerberg said that he regretted allowing Meta to take responsibility for things that he saw as outside its mandate or control. “People are basically blaming social media and the tech industry for all these things in society—if we’re saying, we’re really gonna do our part to fix this stuff, I think there were a bunch of people who just took that and were like, oh, you’re taking responsibility for that? Let me, like, kick you for more stuff.” Bloomberg also noted a shift in Zuckerberg’s approach to the 2024 election—namely, to avoid saying much about it at all.
Musk is obviously not the only driving force behind this change. Tech companies have come under increasing scrutiny from both sides of the aisle, particularly from the GOP, which has made it painful, and risky, for them to take action on certain topics. For instance, last year a federal judge issued an injunction barring social media companies from talking to the government, saying that Biden officials “engaged in a broad pressure campaign designed to coerce social-media companies into suppressing speakers, viewpoints, and content disfavored by the government.” For companies like Meta, that meant their threat detection teams couldn’t alert federal agencies to emerging issues or hear from them. The Supreme Court ruled this summer that the plaintiffs lacked standing.
In August, Zuckerberg issued a letter to Jim Jordan’s congressional Subcommittee on the Weaponization of the Federal Government, saying that the company had, indeed, bowed to government pressure to remove misinformation about Covid-19. (Three former Meta employees who spoke to WIRED on the condition of anonymity say that they did not get the sense that government pressure was behind Meta’s choices to suppress or remove Covid-19 misinformation at the time.) Pardal says that if Musk were not behaving so outrageously, she doubts Zuckerberg “would be saying he made a mistake.”
And Trump, it seems, approves. In an interview with the Barstool Sports podcast Bussin’ With the Boys, Trump said, “I actually believe [Zuckerberg’s] staying out of the election, which is nice.” The former president has also claimed that other tech executives, including Sundar Pichai, Jeff Bezos, and Tim Cook, are supporters. Bezos, who owns The Washington Post, blocked an editorial endorsing Vice President Kamala Harris the same day that executives from his space company, Blue Origin, met with Trump, signaling a willingness on the part of the industry to cooperate with a possible second Trump administration. (Bezos appeared unmoved even as more than 200,000 people canceled their subscriptions to the paper.)
But none of these nods toward Trump are as obvious as Musk’s support for the former president and his use of X to seed mis- and disinformation about the election, says Pardal. “[Musk] has drawn attention away from other tech companies and onto himself, when it comes to tech harms,” she says. “Now we are all talking about what Elon says next, and moving away from the discussion about the decline in platform safety at a time of exponentially rising risks.”
You can follow all of WIRED’s 2024 presidential election coverage here.