Fixing the Web without the Stanford Internet Observatory?
A vital internet research institute that studied the spread of misinformation and harms to children online has shuttered following failed political lawsuits
Five years ago, Stanford University opened the Internet Observatory to bring together experts from many different disciplines to study abuses of the internet and help inform public policy discussions. The Internet Observatory quickly lived up to its remit, conducting rigorous research and advising numerous US House and Senate subcommittees. They shed light on how foreign actors attempt to interfere in US politics and sow division. They created the Journal of Online Trust & Safety for peer-reviewed, interdisciplinary research into the sources of and solutions to the biggest harms people face on the internet. They created a gathering place for those of us working in technology companies, civil society organizations, non-profits, government, and academia to come together and share what we knew about some serious problems. They created open-source tools for people wanting to analyze social media platform data. From my perspective, they were doing great work in a very difficult space.
Last year, the Stanford Internet Observatory helped identify a massive network of millions of accounts on Facebook, Instagram, and Twitter that were buying and selling child sexual abuse material (CSAM). These revelations galvanized support in Washington, leading to Senate hearings with the CEOs of some of the companies that own the platforms, and to the White House assembling the Kids Online Health & Safety Task Force.
This task force came to Stanford to learn from dozens of experts (including yours truly!) about how harms to kids manifest and proliferate online, and what the best methods for protecting kids online are. It was an exceptional event for bringing together people who would scarcely otherwise encounter one another. For example, I randomly sat at a table with VPs from two big tech companies, a very senior member of the White House task force, a parent who runs an education-oriented non-profit, and a college freshman. These kinds of gatherings are far too rare, but, in my mind, they are the most useful. As a researcher with subject matter expertise in a specific niche, I generally know what other researchers will question or recommend; I have far less insight into the politics going on behind the scenes in government, or in companies where I haven't worked, or into what kids are actually doing in middle school. I suspect the same is true for people from those other disciplines. Convenings such as these are… or were made possible by the Stanford Internet Observatory.
Unfortunately, after years of failed lawsuits from groups led by former Trump White House officials, Stanford decided to close the Internet Observatory. The lawsuits always struck me as peculiar because they largely alleged that the Observatory (along with a few other research-focused non-profits, like the Atlantic Council's Digital Forensic Research Lab, Graphika, and the University of Washington's Center for an Informed Public) was infringing on First Amendment free speech rights. Yet none of these groups actually had any power to remove anybody's speech, because they didn't control any of the social media platforms they studied. Sure, they could report that bad things X, Y, and Z appeared on some platform, but that platform was under no obligation to act on those notifications. Even if, somehow, the 10ish-person team at the Stanford Internet Observatory were able to compel trillion-dollar private companies to remove misinformation from their platforms, the First Amendment wouldn't be relevant: it only protects speech from being censored by the government, not by private businesses or researchers.
Even though those lawsuits failed, Stanford University spent millions to fight the challenges, which may have led them to decide they couldn’t continue funding the organization. Many of the leading staff have left or are not having their contracts renewed. Fortunately, it appears that the Internet Observatory’s journal, projects, and tools are being handed over to Jeff Hancock and the Social Media Lab. And, former SIO leader Renee DiResta’s new book on how propagandists manipulate algorithms to shape public opinion just came out this week.
What’s coming in the pipeline?
I’ve been working on getting the back-end of the Platform Data Guide set up, but I think I’ll need another week before sharing the alpha version of the tool. Stay tuned for that. In the meantime, my fabulous research assistant and colleague at the Integrity Institute, Spencer Gurley, has begun compiling all of the publicly available social media data we can find. It’s a work in progress, but it should give you a taste of what is to come in the following weeks. If you know of data we haven’t yet included, please message or email me.
With the help of Google Sheets’ amazing “=GOOGLETRANSLATE()” function, I translated the Polish Neely Social Media Index data. I’m planning to analyze those data very soon and share our first international results. I have no idea how they will compare to the US data, but I’m eager to find out!
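For readers who haven’t used it, GOOGLETRANSLATE takes a cell of text plus source and target language codes (or "auto" to detect the source language). Assuming, purely for illustration, that the Polish survey responses live in column A starting at row 2, a translation column would look something like:

```
=GOOGLETRANSLATE(A2, "pl", "en")
```

Drag the formula down the column and Sheets translates each response in place, which is handy for quick, rough translations of survey data like this.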
I’m in the final stage of getting approval to access the researcher API for TikTok for a project I’m doing with Jonathan Stray, Ben Smith, Emerson Johnston, and others.
I’ve read dozens of research papers on large language models, toxic political conversations online, and counterspeech strategies, and it’s been (mostly) really interesting. I’m working on a summary that I’ll share here soon.
Some News
In a first, OpenAI removes influence operations tied to Russia, China, and Israel -- Shannon Bond, NPR
How Americans Navigate Politics on TikTok, X, Facebook, and Instagram -- Colleen McClain, Monica Anderson, & Risa Gelles-Watnick, Pew Research Center
Online Hate and Harassment: The American Experience 2024 -- Center for Technology and Society via the Anti-Defamation League
Google’s and Microsoft’s AI Chatbots refuse to say who won the 2020 US election -- David Gilbert, Wired
Bridging the divide: Translating research on digital media into policy and practice -- Issie Lapowsky, Knight Foundation
Propagandists are using AI too--and companies need to be open about it -- Josh Goldstein & Renee DiResta, MIT Technology Review
The one simple trick to measuring abuse in tech’s $440 billion ads business -- Rob Leathern, Tech Policy Press
WhatsApp Channels, used by millions, has no clear election rules -- Rebecca Kern, Politico
Meta quietly rolls out Communities on Messenger -- Aisha Malik, Tech Crunch
Don’t be fooled by Meta’s transparency display: Election integrity risks remain -- Mozilla
What aren’t the OpenAI whistleblowers saying? -- Casey Newton, Platformer
Inside Anthropic, the AI company betting that safety can be a winning strategy -- Billy Perrigo, Time
That robot makes me feel important: New research on teens and generative AI -- Jacqueline Nesi, Techno Sapiens