41 days until the US elections and life for this political psychologist and tech integrity professional is just as chaotic as you might imagine.
On the politics side, assassination attempts are pretty rare (though not as rare as I first thought), and we’ve had two so far in about a month’s time. Perhaps not surprisingly, the Secret Service’s approval rating has hit a 10-year low (yes, Gallup does track this!). The Secret Service now has the third-highest share of Americans (36%) saying it’s doing a poor job, exceeded only by the Departments of Veterans Affairs (37%) and Justice (38%).
Nonetheless, I made a new YouTube video weaving together my previous blog posts on political violence, adding a bunch of new data, and sharing more about my experiences being targeted as a researcher. If you haven’t subscribed to the channel yet, please do.
On the tech integrity side, I’ve been speaking a lot. I gave presentations to:
the European Digital Media Observatory on data access and how it comports (or fails to comport) with the transparency requirements in the Digital Services Act,
the Cyberlaw Clinic at Stanford University on how recommendation algorithms work, and how they could be optimized for societal value, and
a few groups of lawyers who want to understand classification, ranking, and the measurement of harms by tech companies.
This week, I’m speaking at the annual Trust & Safety Research Conference at Stanford on the need for meaningful transparency from tech companies if we are to help them make their products safer for their users, and on how to work with large-scale data from these companies (it ain’t what they taught in my graduate stats or methods courses in psychology or political science!). If you’re attending, please grab me and say hello.
And, bears (oh my).
What’s in the pipeline?
I’ve got an op-ed on data transparency that likely will be coming out in the next week or so. I’ll share that once I’ve gotten the green light.
With Elise Liu, a friend and one of the best product managers I’ve ever worked with, I’m working on another op-ed about the lessons we learned from trying to make social media safer and how they should be applied to the world of generative AI.
I’ve got a bunch of analyses of what is likely the last regular quarterly Neely Social Media Index survey that I need to wrap up and write up. If you found the survey valuable, please let me know, as that would help us convince grantors to fund it moving forward.
What I’m reading
Texas’ mandatory age verification law will weaken privacy and security on the internet -- Christine Runnegar & Dan York, Internet Society
People think that social media platforms do (but should not) amplify divisive content -- Steve Rathje, Claire Robertson, Billy Brady, & Jay Van Bavel, Perspectives on Psychological Science
More Americans -- especially young adults -- are regularly getting news on TikTok -- Rebecca Leppert & Katerina Eva Matsa, Pew Research Center
Algorithms should not control what people see, UN Chief says, launching Global Principles for Information Integrity -- Vibhu Mishra, UN News
Where Facebook’s AI slop comes from -- Jason Koebler, 404 Media
Facebook loses jurisdiction appeal in Kenyan court paving the way for moderators’ case to proceed -- Evelyne Musambi, Associated Press
Elon Musk’s X backs down in Brazil -- Jack Nicas & Ana Ionova, New York Times
Meta bans Russian state media for ‘foreign interference’ -- Katie Paul, Reuters
Federal Court: TikTok may be liable for a 10-year-old’s death -- Abby Vesoulis, Mother Jones
Social networks can’t be forced to filter content for kids, says judge -- Adi Robertson, The Verge
Court blocks California’s online child safety law -- Adi Robertson, The Verge
California bill targeting social media addiction in teens passes State Senate -- Tim Fang, CBS News
‘We want you to be a proud boy:’ How social media facilitates political intimidation and violence (PDF download) -- Paul Barrett
Twitter ‘ceased to exist’ after Australia’s eSafety commissioner demanded answers about child sex abuse material, X’s lawyer argues -- Josh Taylor, The Guardian
Training compute of frontier AI models grows by 4-5x per year -- Jaime Sevilla & Edu Roldan, Epoch AI
Balancing trust and safety: Lessons from the CrowdStrike incident -- John Paul Cunningham, Security