Fixing the Web by Laying Off the Fixers?
The tech layoffs continue, perfectly timed to heighten threats to the elections that 4 billion people are voting in this year
I woke up to news stories about the growing layoffs in the tech sector, and how they appear to be ravaging pay for tech workers. So I am reposting a story I originally authored with Glenn Ellingson and published in Tech Policy Press in January of this year. If you worked in a trust and safety or integrity role at a tech company, you’re eligible to join the Integrity Institute for free, where you’ll find a community of bright folks like yourself and some events specifically tailored to supporting people affected by the layoffs.
The Unbearably High Cost of Cutting Trust & Safety Corners
In 2023, social media companies decided to cut corners by laying off thousands of employees who were experts on combating abusive behaviors and harmful content. Laying experts off may have saved these companies money in the short term – but at what cost, and will these cuts come back to haunt them?
Predictably, harmful content thrived last year. On X, the platform formerly known as Twitter, hate speech and propaganda increased, and a verification system that helped users more easily identify trustworthy accounts was discarded in favor of one where anyone willing to pay a pittance can obtain a coveted blue checkmark. On Facebook and Instagram, Russian disinformation campaigns continued alongside the ongoing invasion of Ukraine, and Instagram’s recommendation engine helped connect and promote vast networks of accounts belonging to pedophiles consuming and distributing child sexual abuse imagery and videos. While these are some specific examples, they are not isolated ones.
Regulators around the world, who have imposed billions in fines for previous trust and safety failures, are alarmed by perceived backsliding by social media companies. The US Federal Trade Commission, which fined Meta $5 billion for failures to protect user privacy in a 2020 settlement, is alleging that Meta is putting children at risk through new violations of the terms of that settlement. The European Union, which recently fined Meta another $1.3 billion for related violations, has launched an investigation into X for its “failure to counter illegal content and disinformation" under the Digital Services Act. If X, or any other company, is deemed noncompliant with the act, it may face penalties of up to 6% of its total global revenue or suspension from operating in the EU. Similarly, Australia’s eSafety Commissioner fined X for failing to disclose information regarding child abuse content on the platform, and sent a legal memo warning Google, TikTok, Discord, and Twitch that they needed to ensure compliance with the Online Safety Act to avoid joining X in facing civil penalties. Beyond these regulatory investigations, social media companies are facing a wave of civil lawsuits. Dozens of state attorneys general allege that Meta violated the Children’s Online Privacy Protection Act, and traumatized victims of the mass shooting in Buffalo last May are suing YouTube and Reddit for radicalizing the shooter. Some social media companies, such as the random chat app Omegle, have been effectively sued out of existence by civil litigation from users who were harmed on the platform.
Platforms that do not sufficiently address harmful content also become risky places for advertisers, who drive 90% or more of revenue at the social media companies. On X, advertising revenue has decreased 55% or more since the company was acquired and its trust and safety experts were laid off. More recently, as its owner Elon Musk has disseminated debunked conspiracy theories, like Pizzagate, and seemingly endorsed an antisemitic conspiracy theory, many of the platform’s largest advertisers stopped advertising on X. Likewise, on Instagram and Facebook, advertisements encouraging people to visit Disneyland, buy erectile dysfunction medication, and use the dating apps Match and Tinder appear in between short-form videos sexualizing children. Since this revelation, Match Group, along with other advertisers, has stopped promoting its brands on Meta’s products – a direct hit to the company’s main source of revenue.
But there is an even worse threat looming for these companies – losing users, and with them the attention they sell to advertisers. If products generate enough bad experiences and harm enough people, users will seek less noxious alternatives. Recent polls reveal that, of the largest social media platforms, Facebook and X consistently have the highest rates of users reporting negative experiences. In fact, nearly 1 in 3 users report seeing content they thought was bad for the world in the previous 28 days. Moreover, a majority of users state that this content is likely to increase hate, fear, and/or anger between groups of people, misinform people, and fuel greater political polarization. Additionally, most US adults who use these platforms report feeling annoyed by their negative experiences.
These platforms track user sentiment, so they would be aware of users’ ever-growing negative sentiment – or they would have been, before cutting the staff responsible for improving users’ experiences. In a series of experiments at Meta, where the company’s researchers withheld algorithmic protections from harmful content for a percentage of users over the span of at least two years, many of those users began to disengage and some even quit the platform altogether. In contrast, the users who received the strongest algorithmic protections from harmful content actually engaged more over time as their experience improved. Logically, then, companies seeking to build long-term value should take actions that minimize harmful experiences, even if that means fleeting decreases in user engagement. However, public documents reveal that Meta often resists launching interventions that protect its users if they affect short-term engagement, which may explain why Facebook stopped growing in the US in recent years.
Similarly, X, which has most aggressively slashed protections this year, has experienced the fastest declines in user activity. One Pew Research study revealed that a majority of US Twitter users have taken a break from or left the platform in the last year. Further supporting this, web traffic to X has decreased 14% globally, and 19% in the US, year-over-year. Perhaps most strikingly, X CEO Linda Yaccarino seemed to confirm this in her remarks at the Code Conference, where she admitted that active users are declining.
Short-term cost cutting can be very expensive. Billion-dollar regulatory and legal fines make headlines and dent company coffers. An advertiser exodus can create sudden, crippling revenue drops. But fleeing users – brands, influencers, and consumers – threaten irrelevance and extinction for the social media platforms we all use today. Today's giant brands – such as Facebook, Instagram, and X – may seem too big to fail. But young, hungry alternatives are springing up all over, even if no clear favorite has yet emerged to shoulder the giants aside. And in technology, the only constant is change. Just ask MySpace or Yahoo.
What’s in the pipeline?
I submitted an op-ed on meaningful transparency for social technology companies to the Washington Post, so fingers crossed? Throw salt over my shoulder?
The Neely Social Media Index survey of US adults from Q2 is in, and I’ve started analyzing those data, along with wrapping up analyses of the sister survey we ran with ~3,100 Polish adults. In a past newsletter, I mentioned how great the Google Translate function in Google Sheets is. Well, I must revise that statement. It’s great, but the default setting is to auto-detect the language on a cell-by-cell basis, not across the whole spreadsheet. As a result, all of the survey responses that were supposed to be Yes/No were actually translated as Not/No. Apparently, the Indonesian word for “Not” is spelled the same as the Polish word for “Yes.” Fortunately, you can add an argument to the function specifying the language code it should assume for the text to be translated. PL is the code for Polish, but I couldn’t find any comprehensive listing of language codes for this function. (They appear to follow the two-letter ISO 639-1 standard, though I haven’t seen Google confirm a complete list for this function.)
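For anyone hitting the same snag, here is a minimal before/after sketch of the fix using GOOGLETRANSLATE, whose arguments are the text, the source language, and the target language. I’m assuming the Polish responses sit in column A and English is the target; the cell references are just illustrative.

=GOOGLETRANSLATE(A2, "auto", "en")   ← the default auto-detect, which read the Polish “Tak” as the Indonesian “not”
=GOOGLETRANSLATE(A2, "pl", "en")     ← pins the source language to Polish, so “Tak” translates to “Yes”

Pinning the source language once for the whole column sidesteps the cell-by-cell guessing entirely.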
Stanford’s Institute for Human-Centered Artificial Intelligence gathering was postponed due to COVID-19 afflicting one of the principals. This has yet to be rescheduled.
As part of the Platform Data handbook I’m building with some colleagues, we are including a databrary containing links to all of the publicly available social media data we can find. It’s a work in progress, but it should give you a taste of what is to come in the weeks ahead. If you know of data that we haven’t yet included, please message or email me.
What I’m reading
What happened to Stanford spells trouble for the election -- Renee DiResta, NY Times
AI employees should have a “right to warn” about looming trouble -- Alex Kantrowitz, Big Technology
A comprehensive list of 2024 tech layoffs -- Cody Corrall & Alyssa Stringer, TechCrunch
CEOs explain why tech layoffs are happening in 2024 -- Ana Altchek, Business Insider
Bay Area tech’s ‘layoff surge’ has slashed salaries, report says -- Stephen Council, SF Gate
Layoffs in tech sector reach nearly 100,000 year to date: tracker -- Brandon Evans, Seeking Alpha
The Tech Bloodbath is Far from Over: Industry limps to midpoint of 2024 -- Marc Ethier, Poets & Quants
Generative AI misuse: A taxonomy of tactics and insights from real-world data -- Nahema Marchal, Rachel Xu, Rasmi Elasmar, Iason Gabriel, Beth Goldberg, & William Isaac, Google DeepMind
Who’s in the corner of T&S (Trust & Safety)? -- Alice Hunsberger, Everything in Moderation
Warning labels for social media are a terrible idea -- Owen Scott Muir, The Frontier Psychiatrists
Taking the power back: How diaspora community organizations are fighting misinformation spread on encrypted messaging apps -- Joao Ozawa, Sam Woolley, & Josephine Lukito, Harvard Kennedy School Misinformation Review