Fixing the Web EU Election Week: 6/6/24
We Worked On Election Integrity At Meta. The EU – And All Democracies – Need to Fix the Feed Before It’s Too Late
Matt Motyl & Jeff Allen, published in Tech Policy Press
Later this week, voters across Europe will go to the polls for critical elections that will decide control of the EU Parliament – and the future of Europe. It’s just one of many important elections globally: votes have already been cast in India, South Africa, and Mexico, with the UK and US to follow. As data scientists who specialized in election integrity at Meta, our job was to find accounts engaging in behaviors that could interfere with elections, like those managed by Russia’s Internet Research Agency. We know firsthand what it looks like when the system is blinking red – and when a social platform is amplifying disinformation from foreign actors or allowing hate speech that threatens free and fair elections.
Days out from the start of the EU elections – and five months out in the US – we’re there.
In April, an investigation revealed that Facebook and Instagram are letting a well-known and growing network of pro-Kremlin pages push political ads to millions of Europeans with virtually no moderation. At the same time, the company has shut down vital tools used by researchers, journalists, and election observers, flouting new rules under the Digital Services Act (DSA) meant to drive transparency and access. Combined with the hollowing out of election integrity teams at Meta, Twitter, and other platforms, a proliferation of electoral deepfakes, and increased efforts by Russia and China to influence 2024 elections, these actions have created a perfect storm: growing risks and declining safeguards.
We’ve seen this before: in the US in 2016, in India over the last several weeks, and in the EU in 2018, despite internal Facebook memos that warned executives that changes to the platform's algorithm were generating "misinformation, toxicity, and violent content" around the world. Meta ignored the warnings and chose instead to pursue engagement and profits, in part out of fear of angering the political right – even as its own researchers recognized the potential long-term damage to democracies.
By now, the problem is well understood. Fortunately, we know what it takes to fix it.
The key drivers of election disinformation – and the key features of social media platforms being exploited by bad actors – are the algorithms. Algorithmic systems are how social media platforms determine what content a user will see. The basic components of recommendation systems on large online platforms are similar: they shape what content is recommended, what shows up in your feeds and searches, and how advertisements are delivered. In layperson’s terms, despite claims to the contrary, you don’t control much of what you see on your “page.” The social media company does.
These systems have been weaponized over and over again – exploited by bad actors to sow confusion, target election workers and voters, amplify disinformation, and so on. The fixes are not complicated, and they are well known. First, engagement-based ranking systems are problematic because people are disproportionately likely to engage with harmful content that is divisive or contains misinformation. Second, if accounts aren’t verified as belonging to real people who are who they say they are (as opposed to an intelligence operative in Macedonia masquerading as an American with extreme political views), disinformation, hate speech, and other threats to democratic processes will proliferate.
Our organization, the Integrity Institute, published a report in February on mitigating algorithmic risks and a series of proposals on what a responsible recommender system around elections looks like. There’s a clear roadmap for election integrity. The platforms know this too. In fact, if you went on Facebook or Instagram in the days leading up to and immediately following the US election in 2020, your feed would have looked entirely different. Giving in to pressure from democracy and transparency advocates, and burned by their experience in 2016, Meta (then Facebook) implemented a series of “break glass” measures to improve the integrity of its platforms. In short, it came down to algorithms: users were shown credible news sources, not engagement-driven disinformation, and election lies and threats were disallowed and aggressively policed.
The dam held just long enough that Meta likely played a meaningful role in safeguarding the 2020 election from threats that had succeeded in prior elections. (Notably, these guardrails were removed soon after the election – before the January 6, 2021 insurrection at the US Capitol, which was organized in part on Facebook and other social media platforms.) What this tells us is that social media companies can make elections safer. They just choose not to. We either need to encourage them to do the right thing, or force them. Or both.
In the EU, Commissioner Thierry Breton has rightfully announced an investigation into Meta for election disinformation. This is laudable, but investigations alone aren’t enough. Fortunately, Commissioner Breton and the EU have other tools at their disposal. In 2023, the first wave of regulations under the landmark DSA took effect; further requirements kicked in earlier this year.
Under the DSA, one of the most ambitious regulatory regimes for tech companies and online platforms, the EU has extraordinary power and reach to act. In fact, some of the DSA’s most significant requirements are meant to ensure that platforms implement risk mitigation measures, and that “systemic risks” to society are minimized. The EU, by law, could demand evidence from platforms about how their algorithms are optimized, and the role they play in the spread of harmful content. While they cannot demand specific mitigation measures, forcing platforms to be honest about the scale of harmful content on their services, and what is causing its spread (e.g., algorithmic recommendations, or engagement-based classifiers that place such content higher in the ranking queue), can pave the way for accountability.
Based on what we know platforms can do in the context of elections, the Commission should be watching closely, demanding evidence of sufficient platform action and explanations where there isn't any. In crisis situations where platforms do not take sufficient action, Articles 36 and 48 of the DSA may even permit the EU to deem platforms out of compliance and fine them up to 6% of their global revenue. Even at this late hour, these platforms could proactively launch sufficient election-related protections ahead of the EU election and set a model for subsequent elections around the world. And, as the 2020 US elections showed, even protections kept in place for a brief period after the vote – all that the late date allows – could have an impact.
Few countries have the regulatory power of the DSA, though. In the US and UK, for instance, “safe by default” could and should be the rule in upcoming elections. The US has few protections in place, and no legislation looks set to pass, let alone have an impact, before November. While the UK passed the Online Safety Act last year, it is unclear what effect it will have on election harms. As in the EU, time is running out.
Elections raise the stakes substantially for platforms. They are the most critical time to ensure platforms aren’t algorithmically amplifying false content or other communications meant to stoke violence or delegitimize elections. And they are the most powerful moments to show that we have solutions that work. We can have a social internet designed to help individuals, societies, and democracies thrive. The EU can help make this happen – and show the world that we can choose safer elections, and a better internet.
What’s coming in the pipeline?
I led a workshop on researcher access to social media platform data last week for some civil society organizations, regulators, and researchers in the European Union. If you’re interested in seeing more about what is going into this handbook, the Table of Contents is here and a brief slide deck I presented to the European Digital Media Observatory is here. If you have specific questions about accessing, understanding, or working with platform data, please DM me or comment on this post.
If you’re interested in contributing, please do let me know. Any relevant knowledge or background is welcomed, though design is top-of-mind right now. I created a wireframe for the web app, but it’s just a draft and could definitely be improved.
I received data from a wave of the Neely Social Media Index survey that we conducted on a nationally representative sample of Polish adults. I’ve started cleaning the data, but this may take a little longer than usual because the data labels are all in Polish. Despite being Polish myself, I do not speak or read the language, but I’m hoping there’s a straightforward way to apply Google Translate to the spreadsheet; a rough sketch of one possible approach is below. If you have recommendations, I’d love to hear them.
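One option I’m considering (a rough sketch only, not yet tested on the actual file – the file name and column handling below are placeholders) is to run the labels through the deep-translator package with pandas:

```python
# Rough sketch: translate Polish column headers and text labels to English.
# Requires: pip install pandas deep-translator
# "neely_poland_wave.csv" is a placeholder file name, not the real survey export.
import pandas as pd
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source="pl", target="en")
cache = {}  # avoid re-translating repeated labels

def translate(label: str) -> str:
    if label not in cache:
        cache[label] = translator.translate(label)
    return cache[label]

df = pd.read_csv("neely_poland_wave.csv")

# Translate the column headers.
df = df.rename(columns={col: translate(col) for col in df.columns})

# Translate text (categorical) values, leaving numeric columns untouched.
for col in df.select_dtypes(include="object").columns:
    df[col] = df[col].map(lambda v: translate(v) if isinstance(v, str) else v)

df.to_csv("neely_poland_wave_en.csv", index=False)
```

Staying inside Google Sheets and using its built-in GOOGLETRANSLATE() formula (e.g., =GOOGLETRANSLATE(A1, "pl", "en")) would be another option.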
I’m supporting the Civic Health Project’s Social Media Detoxifier, which assesses the probability that a reader will perceive a social media post as toxic and uses a custom generative large language model to suggest civil responses that users can choose to post in reply to the toxic content they encountered. (A minimal sketch of what such a pipeline can look like follows below.) A slide deck summarizing the project is available here, and if you’re interested, you can request a demo here. I’ve also reviewed much of the relevant research and started writing a review of it that I’ll share here.
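For those curious what a pipeline like this can look like under the hood, here is a minimal sketch – not the Detoxifier’s actual implementation – pairing the open-source detoxify classifier with a generic LLM prompt; the model names, threshold, and prompt are illustrative assumptions only.

```python
# Illustrative sketch of a toxicity-scoring + civil-response pipeline.
# Not the Civic Health Project's implementation; models and prompt are placeholders.
# Requires: pip install detoxify openai
from detoxify import Detoxify
from openai import OpenAI

toxicity_model = Detoxify("original")  # off-the-shelf toxicity classifier
client = OpenAI()                      # assumes OPENAI_API_KEY is set in the environment

def suggest_civil_reply(post: str, threshold: float = 0.5):
    """Score a post for toxicity; if it crosses the threshold, draft a civil reply."""
    score = toxicity_model.predict(post)["toxicity"]
    if score < threshold:
        return None  # post unlikely to be perceived as toxic; no suggestion needed
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Write a brief, civil, de-escalating reply to the following post."},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(suggest_civil_reply("Anyone who disagrees with this is an idiot."))
```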
I had a couple of presentations (one on the need for data transparency, and another on researcher access to social media data) accepted to the annual Trust and Safety Research Conference taking place at Stanford in September.
Some News
A social media ban won’t keep my teenagers safe -- it just takes away the place they love -- Anna Spargo-Ryan, The Guardian
The Anxious Generation wants to save teens. But the bestseller’s anti-tech logic is skewed -- Blake Montgomery, The Guardian
Google confirms the leaked Search documents are real -- Mia Sato, The Verge
CEO of Google says it has no solution for its AI providing wildly incorrect information -- Sharon Adarlo, Futurism
Laid-off TikTok staffers describe feeling ‘blindsided’ after a ‘very chaotic ride’ -- Shriya Bhattarchya & Dan Whateley
TikTok is reportedly splitting its source code to create a US-only algorithm (with forced sale in US in the backdrop) -- Richard Lawler, The Verge
Quantifying the impact of misinformation and vaccine-skeptical content on Facebook -- Jennifer Allen, Duncan Watts, & David Rand, Science
Instagram is training AI on your data. It’s nearly impossible to opt out -- Jesus Diaz, Fast Company
Meta AI is summarizing some bizarre Facebook comment sections -- Emma Roth, The Verge
A Devil’s bargain with OpenAI -- Damon Beres, The Atlantic
A guide to investigating digital ad libraries -- Craig Silverman, Digital Investigations
Testing theory of mind in large language models and humans -- James Strachan et al., Nature Human Behaviour