Computer Says Maybe

Alix Dunn

Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
Technology
Society & Culture

Episodes

Net 0++: Concrete arguments for AI
Oct 25 2024
In our third episode about AI & the environment, Alix interviewed Sherif Elsayed-Ali, who’s been working on using AI to reduce the carbon emissions of concrete. Yes, that’s right: concrete.

This may seem like a niche focus for a green initiative, but it isn’t. Concrete is the second most used substance in the world because it’s integral to modern infrastructure, and there’s no other material like it. It’s also one of the world’s biggest sources of carbon emissions.

In this episode Sherif explains how AI and machine learning can make concrete production more precise and efficient, so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can actually be used to reduce carbon output, rather than just monitor it, or add to it!

Sherif Elsayed-Ali trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy before embarking on a career in tech and climate. Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today ServiceNow Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.
Net 0++: Big Dirty Data Centres
Oct 18 2024
This week we are continuing our AI & Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock, who explain how data centres are a gruesomely sharp double-edged sword.

Data centres contribute to huge amounts of environmental degradation through local water and energy consumption, and harm the health of surrounding communities with incessant noise pollution. They are also used as a political springboard by global leaders, with the expansion of AI infrastructure treated as synonymous with progress and economic growth.

Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we’re seeing in the UK and the US right now.

Boxi Wu is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD’s AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Before returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics & Safety at Google DeepMind, where they specialised in the ethics of LLMs and led the responsible release of frontier AI models, including the initially released Gemini models.

Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. She is currently policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (criticalinfralab.net), a postdoctoral fellow with the Technology & Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law & Security Program at American University Washington College of Law. Jenna is also a documentary photographer and producer with a background in community media and factual streaming.

Further reading:
Governing Computational Infrastructure for Strong and Just AI Economies, co-authored by Boxi Wu
Getting into Fights with Data Centres by Anne Pasek
Net 0++: Microsoft’s greenwashing w/ Holly Alpine
Oct 11 2024
This week we’re kicking off a series about AI & the environment. We’re starting with Holly Alpine, who recently left Microsoft after starting and growing an internal sustainability programme there over more than a decade.

Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments it had set for itself. But the internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions: the company continued to work with fossil fuel companies even though doing so was at odds with its plans to achieve net zero.

Listen to learn what it’s like to approach this kind of huge systemic challenge in good faith, and to try to make change happen from the inside.

Holly Alpine is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, reaching over 45 communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.

Holly continues her environmental leadership as a board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.

Further reading:
Microsoft’s Hypocrisy on AI
Our tech has a climate problem: How we solve it
The stories we tell ourselves about AI
Sep 20 2024
Applications for our second cohort of Media Mastery for New AI Protagonists are now open! Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power, at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. Apply by Sunday, September 29th!

The adoption of new technologies is driven by stories. A story is a shortcut to understanding something complex. Narratives can lock us into a set of options that are…terrible. The kicker is that narratives are hard to detect and even harder to influence. But how reliable are our narrators? And how can we use story as strategy?

The good news is that experts are working to unravel the narratives around AI, all so that folks with the public interest in mind can change the game.

This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; Jonathan Tanner, who scraped and analysed huge amounts of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed the metaphors that power collective understanding of AI.

In this ep we ask:
What are the stories we tell ourselves about AI? And why do we let industry pick them?
How do these narratives change what is politically possible?
What can public interest organisations and advocates do to change the narrative game?

Hanna Barakat is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies.

Jonathan Tanner founded Rootcause after more than fifteen years in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world. In that time he has worked across more than a dozen countries, running diverse teams while writing keynote speeches, securing front-page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy.

Daniel Stone is currently undertaking research with Cambridge University’s Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.
Exhibit X: Tech and Tobacco
Jul 26 2024
Here is something you’re probably tired of hearing: Big Tech is responsible for a bottomless brunch of societal harms. And it is not being held accountable. Right now we hear constantly about laws, regulation, and courts, but none of it has been effective in litigating against Big Tech.

In our latest podcast series, Exhibit X, we’re looking at how the tides might finally be turning. Legal accountability could be around the corner, but only if a few things happen first.

To start, we look back to 1964, when Big Tobacco was winning the ‘try your best to profit from harm’ race. Research showed cigarettes were addictive and also caused cancer, and yet the industry evaded accountability for decades.

In this episode we ask questions like:
Why wasn’t a report in 1964 showing cigarettes are addictive and cause cancer enough to transform the industry?
What can we learn about corporate capture of research on tobacco?
How did academia and experts shape the outcomes of court cases?

Prathm Juneja was Alix’s co-host for this episode. He is a PhD candidate in Social Data Science at the Oxford Internet Institute, working at the intersection of academia, industry, and government on technology, innovation, and policy.

Further reading:
C-SPAN: Tobacco Settlement
The Cigarette Papers - Full Online Version
The Truth Tobacco Industry Documents
Big Tobacco and the Historians
Tobacco Litigation Documents
A Tobacco Whistle-Blower's Life Is Transformed
Inventing Conflicts of Interest: A History of Tobacco Industry Tactics
Tobacco Industry Research Committee
Experts Debating Tobacco Addiction
What the FAccT? Evidence of bias. Now what?
Jul 12 2024
In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”.

In their paper they discuss how an erosion of public trust can lead to ‘any idea will do’ decisions, and these decisions often lean on technology, such as predictive policing systems. One such tool is ShotSpotter, a piece of audio surveillance tech designed to detect gunfire: a contentious system which has been sold both as a tool for police to surveil civilians, and as a tool for civilians to keep tabs on police. Can it really be both?

Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She has worked for institutions such as DG CNECT at the European Commission, the Berkman Klein Center for Internet & Society at Harvard University, the Montreal International Center of Expertise in Artificial Intelligence (CEIMIA) and The Future Society. Previously, Marta was a PhD student and researcher on algorithmic bias and AI policy at the Oxford Internet Institute. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people. Marta holds a BSc in Mathematics and Philosophy from University College Maastricht. She also holds an MSc in Philosophy and Public Policy and an executive degree in Chinese Language and Culture for Business from the London School of Economics.

Dasha Pruss is a 2023-2024 fellow at the Berkman Klein Center for Internet & Society and an Embedded EthiCS postdoctoral fellow at Harvard University. In fall 2024 she will be an assistant professor of philosophy and computer science at George Mason University. She received her PhD in History & Philosophy of Science from the University of Pittsburgh in May 2023, and holds a BSc in Computer Science from the University of Utah. She has also organized with Against Carceral Tech, an activist group working to ban facial recognition and predictive policing in the city of Pittsburgh.

This episode is hosted by Alix Dunn. Our guests are Marta Ziosi and Dasha Pruss.

Further reading:
Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool
Refusing and Reusing Data by Catherine D’Ignazio