TEC Talks

Notre Dame Technology Ethics Center

Hosted by Kirsten Martin, director of the Notre Dame Technology Ethics Center (ND TEC), TEC Talks features conversations on a broad range of topics in technology ethics. These could be anything from the ways we develop and deploy AI and how we fight misinformation to the notion of privacy online and corporate responsibility when it comes to people’s data.

Each episode takes one article, idea, case, or discovery and examines the larger implications for the field of tech ethics, with the goal being to make this work accessible to a wide audience. Because when it comes to tech, it’s not enough to just ask “What can we do?” We also need to think about “What should we be doing?”

Play Trailer
Welcome to TEC Talks
Mar 23 2022
Welcome to TEC Talks
This trailer episode is a brief introduction to TEC Talks, a podcast focused on the impact of technology on humanity.

Here’s the transcript:

Kirsten Martin 0:02
Hi, I’m Kirsten Martin, director of the Notre Dame Technology Ethics Center, better known as ND TEC, and I’m here to introduce you to TEC Talks.

TEC Talks started out in 2021 as a virtual live event series. Videos of all these sessions are available at techethics.nd.edu.

Now we’re trying something new: TEC Talks, the podcast.

The show, like our center, focuses on the impact of technology on humanity. Because when it comes to tech, it’s not enough to just ask “What can we do?” We also need to think about “What should we be doing?”

Throughout this podcast, I’ll be bringing you conversations—typically 15–30 minutes long—on a broad range of topics relevant to technology ethics. These could be anything from the ways we develop and deploy AI and how we fight misinformation to the notion of privacy online and corporate responsibility when it comes to people’s data.

Each episode will take one article, idea, case, or discovery and examine the larger implications for the field of technology ethics. Our goal is to make these ideas accessible to a wide audience, whether you’re a scholar working on similar issues or someone, in my life, like a retired executive in North Carolina, still reading hard copies of the Wall Street Journal and New York Times and tracking the daily weather. Just as an example.

Hi, Dad.

TEC Talks is available wherever you get your podcasts and at techethics.nd.edu. I hope you’ll check it out.

Follow ND TEC on Twitter and LinkedIn
AI, Anti-Discrimination Law, and Your (Artificial) Immutability
Nov 16 2022
AI, Anti-Discrimination Law, and Your (Artificial) Immutability
How could a personal characteristic like eye movement affect, say, whether you get a loan?

Host Kirsten Martin is joined by Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute (OII) at the University of Oxford. She founded and leads OII’s Governance of Emerging Technologies (GET) Research Programme, which investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies.

Sandra came on the show to talk about her paper “The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law,” which is forthcoming in the Tulane Law Review.

Most people are familiar with the idea of anti-discrimination law and its focus on protected-class attributes—e.g., race, national origin, age, etc.—that represent something immutable about who we are as individuals and that, as Sandra explains, have historically been the criteria humans used to hold each other back.

She says that with algorithms, we’re now being placed in other groups that are also largely beyond our control but that can nevertheless impact our access to goods and services and things like whether we get hired for a job. These groups fall into two main categories: people who share non-protected attributes—say, what type of internet browser they use, how their retinas move, whether they own a dog—and people who share characteristics that are significant to computers (e.g., clicking behavior) but for which we as humans have no social concept.

This leads to what Sandra calls “artificial immutability” in the attributes used to describe us, or the idea that there are things about ourselves we can’t change not because they were given at birth but because we’re unaware they’ve been assigned to us by an algorithm. She offers a definition of what constitutes an immutable trait and notes that there can be legitimate uses of them in decision-making, but that in those cases organizations need to be able to explain why they’re relevant.

Episode Links
Paper Discussed in the Episode: “The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law”
Sandra’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Sandra highlighted University of Cambridge psychologist Amy Orben and her research on online harms, particularly in the context of young people’s use of social media.

Follow ND TEC on Twitter and LinkedIn
Algorithmic Fairness is More Than a Math Problem
Oct 19 2022
Algorithmic Fairness is More Than a Math Problem
Host Kirsten Martin is joined by Ben Green, an assistant professor at the Gerald R. Ford School of Public Policy and a postdoctoral scholar in the Michigan Society of Fellows at the University of Michigan. Specializing in the social and political impacts of government algorithms, with a focus on algorithmic fairness, smart cities, and the criminal justice system, Ben is also an affiliate of the Berkman Klein Center for Internet & Society at Harvard University and a fellow of the Center for Democracy & Technology.

He came on the show to talk about his paper “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness,” which recently appeared in Philosophy & Technology.

Ben begins by explaining the aforementioned “impossibility of fairness,” an idea that describes the incompatibility of different mathematical notions of what makes a system fair. By focusing on meeting one of these formal definitions of fairness, an algorithm that is mathematically “fair” can nevertheless yield decisions that re-entrench real-world injustices, including those it may have been designed to counter.

Asking whether the ultimate purpose of an algorithm is to satisfy a mathematical formalism or rather to improve society, Ben puts forward an alternative notion of what he calls substantive algorithmic fairness—his detailed diagram of which, labelled Figure 2 in the paper, made a lasting impression on Kirsten. His approach still envisions a role for mathematical conceptions of fairness, but it repositions them as one consideration in a broader process where the primary concern is accounting for and mitigating both upstream inequalities that exist before an algorithm is deployed and downstream harms present afterwards.

Episode Links
Paper Discussed in the Episode: “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness” (Note: Figure 2 referenced in the episode appears on p. 17.)
Ben’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Ben highlighted four he says are working at the intersections of AI, ethics, race, and real-world social impact:
Rashida Richardson (Northeastern University)
Anna Lauren Hoffmann (University of Washington)
Lily Hu (Yale University)
Rodrigo Ochigame (Leiden University)

Follow ND TEC on Twitter and LinkedIn
Provoking Alternative Visions of Technology
Oct 5 2022
Provoking Alternative Visions of Technology
Host Kirsten Martin is joined by Daniel Susser, an assistant professor in the College of Information Sciences and Technology and a research associate in the Rock Ethics Institute at Penn State University. A philosopher by training, he works at the intersection of technology, ethics, and policy, with his research currently focused on questions about privacy, online influence, and automated decision-making.

Daniel came on the show to talk about his short essay “Data and the Good?” that recently appeared in Surveillance & Society.

Considering the intersection of scholarship in privacy law and surveillance studies, he notes how research in these fields tends to focus on critiques of existing technologies and their potential harms. While he and Kirsten are quick to emphasize how necessary this kind of work is, Daniel describes his paper as a provocation meant to push researchers, himself included, to at the same time put forward substantive alternatives for how technology could or should be used. He says there are understandable reasons why this doesn’t happen more often, but that absent competing visions for our technological future, we are beholden to those crafted by the technology industry.

Episode Links
Paper Discussed in the Episode: “Data and the Good?”
Daniel’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. In addition to citing classic texts in science and technology studies by Langdon Winner and Phil Agre as well as The Convivial Society blog, which applies classic writing in philosophy of technology to contemporary problems, Daniel highlighted three people working to advance alternative visions of technology:
Ruha Benjamin (Princeton University)
Salomé Viljoen (University of Michigan)*
James Muldoon (University of Exeter)

*Salomé was also the guest for episode 10 of TEC Talks, “Moving Data Governance to the Forest From the Trees.”

Follow ND TEC on Twitter and LinkedIn
Moving Data Governance to the Forest From the Trees
Sep 21 2022
Moving Data Governance to the Forest From the Trees
Host Kirsten Martin is joined by Salomé Viljoen, an assistant professor of law at the University of Michigan Law School and an affiliate of the Berkman Klein Center for Internet & Society at Harvard University. She studies the information economy, particularly data about people and the automated systems it trains, and is interested in how information law structures inequality and how alternative legal arrangements might address that inequality.

Salomé came on the show to talk about her paper “A Relational Theory of Data Governance,” which appeared in The Yale Law Journal.

The paper proposes a new framework for thinking about how we govern the use of people’s data, so she and Kirsten begin by discussing the current/traditional approach focused on the privacy of individual transactions and the degree to which we consent to share our own information. However, Salomé explains what this approach misses, saying how in the digital economy, data isn’t collected to make decisions about any one person. Instead, it’s used to understand populations of people with similar interests, backgrounds, etc. and then predict things about them, such that opting out of sharing your own data doesn’t change the inferences being made about you.

Based on Salomé’s argument, Kirsten compares putting all our attention on the handoff of our data rather than on what happens with it afterwards to the old adage about missing the forest for the trees. Salomé then details what she means by moving toward a relational theory of data governance, one that accounts for population-level impacts of big data, recognizes both its potential benefits and harms, and prioritizes the scrutiny of data flows most likely to affect vulnerable communities in disproportionately negative ways (e.g., facial recognition data).

Episode Links
Paper Discussed in the Episode: “A Relational Theory of Data Governance”
Salomé’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Salomé highlighted four:
Beatriz Botero Arcila (Sciences Po)
Ignacio Cofone (McGill University)
Elettra Bietti (New York University and Cornell Tech)
Amanda Parsons (University of Colorado Boulder)

Follow ND TEC on Twitter and LinkedIn
It’s AI, Not a Personality Detector (Part 2)
Sep 7 2022
It’s AI, Not a Personality Detector (Part 2)
In this second of a two-part episode, host Kirsten Martin continues her conversation with Luke Stark, an assistant professor in the Faculty of Information and Media Studies at Western University in London, Ontario, and Jevan Hutson, an associate at Hintze Law PLLC. Luke researches the historical, social, and ethical impacts of computing and artificial intelligence technologies, and Jevan’s practice focuses on the intersection of privacy, security, and data ethics.

They came on the show to talk about a paper they coauthored titled “Physiognomic Artificial Intelligence,” which appeared in the Fordham Intellectual Property, Media and Entertainment Law Journal.

In the first episode, Luke started with the troubling history of physiognomy and phrenology. These two pseudosciences were widely discredited in the early 20th century, but their notions that people’s external appearances can be a way to access internal truths about them have made a comeback in the form of AI systems that purport to be able to perform this type of analysis. Jevan also discussed some of the troubling commercial applications in areas like hiring, education, and criminal justice where we’re already seeing this “physiognomic AI” deployed.

Part 2 picks up with Kirsten asking Jevan about the menu of regulatory options he and Luke propose in the paper to remedy the fundamental problems with these systems. Jevan describes why they think physiognomic AI should be barred completely and the existing legal frameworks through which that might happen. Kirsten adds that the gap between AI ethicists and other technologists is larger in this area than just about any other, and Luke suggests computer vision isn’t the only field of study where physiognomic impulses can still be found.

Episode Links
Paper Discussed in the Episode: “Physiognomic Artificial Intelligence”
Luke’s Bio
Jevan’s Bio
Episode Transcript

At the end of each episode, Kirsten asks about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Luke and Jevan highlighted three:
Deb Raji (Mozilla Foundation)
Catherine Stinson (Queen’s University)
Jennifer Lee (ACLU of Washington)

Follow ND TEC on Twitter and LinkedIn
It’s AI, Not a Personality Detector (Part 1)
Aug 24 2022
It’s AI, Not a Personality Detector (Part 1)
It’s a TEC Talks first: two guests! Host Kirsten Martin is joined by Luke Stark, an assistant professor in the Faculty of Information and Media Studies at Western University in London, Ontario, and Jevan Hutson, an associate at Hintze Law PLLC. Luke researches the historical, social, and ethical impacts of computing and artificial intelligence technologies, and Jevan’s practice focuses on the intersection of privacy, security, and data ethics.

They came on the show to talk about a paper they coauthored titled “Physiognomic Artificial Intelligence,” which appeared in the Fordham Intellectual Property, Media and Entertainment Law Journal.

And with two guests, the conversation went a little longer than usual, so we’ve decided to break it into two parts.

In part 1, Luke starts with a quick overview of physiognomy and phrenology, two pseudosciences with racialized and gendered histories that claim people’s inner traits can be discerned from their physical/behavioral characteristics and the shapes of their skulls, respectively. Although physiognomy and phrenology were widely discredited in the early 20th century, the notion that external appearances can be a way to access internal truths has made a comeback in the form of AI systems that purport to be able to perform this type of analysis.

Jevan discusses some of the troubling commercial applications in areas like hiring, education, and criminal justice where we’re already seeing this “physiognomic AI” deployed. Luke also addresses why one human being making inferences about another—something we all engage in all the time with, as he points out, very mixed results—is fundamentally different from a computer trying to do the same. He says that this is simply beyond the capabilities of artificial intelligence, with Kirsten noting that because the flaw is in the concept of physiognomic AI itself, no amount of additional data will fix the problem.

Episode Links
Paper Discussed in the Episode: “Physiognomic Artificial Intelligence”
Luke’s Bio
Jevan’s Bio
Episode Transcript

At the end of each episode, Kirsten asks about another scholar in tech ethics (or several) whose work our guest is particularly excited about. However, because we split this conversation into two parts, you’ll have to come back September 7 for the second to get Luke’s and Jevan’s recommendations. Stay tuned. :)

Follow ND TEC on Twitter and LinkedIn
When Privacy is a Facade for Data Extraction
Aug 10 2022
When Privacy is a Facade for Data Extraction
Host Kirsten Martin is joined by Ari Waldman, professor of law and computer science at Northeastern University, where he is the director of the Center for Law, Information, and Creativity. A leading authority on law, technology, and society, he studies how law and technology affect marginalized populations, with particular focus on privacy, misinformation, and the LGBTQ community.

Ari came on the show to talk about his book Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power, published in 2021 by Cambridge University Press.

Intended for both a general audience of technology practitioners and more research-focused tech scholars, the book begins with interviews meant to construct a “day in the life” of people working at tech companies—which in one instance included something called “the bro meeting”—and their thoughts on privacy. Ari says his two biggest takeaways from a sociological perspective were the limits to these employees’ conceptions of what constitutes “privacy” and a false consciousness of what their companies were actually doing (or not doing) on that front.

He and Kirsten talk about how compliance is routinely used as a way to advance the goals of industry rather than the rights of users, with the corporate idea of privacy even shaping the regulatory approach of laws like Europe’s General Data Protection Regulation (GDPR), such that companies don’t have to change their underlying models of data extraction.

Kirsten and Ari also cover parallels between privacy and diversity compliance, the problems with notice and consent, how privacy shouldn’t be confused with encryption and security, the way siloed teams hinder information flow and negatively impact the product design process, and what it would take to shift the culture around privacy.

Episode Links
Book Discussed in the Episode: Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power
Ari’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Ari highlighted seven whose scholarship relates in one way or another to the issues he tackles in Industry Unbound:
Lauren Edelman (UC Berkeley)
Julie Cohen (Georgetown University)
Salomé Viljoen (University of Michigan)
Alicia Solow-Niederman (University of Iowa)
Rashida Richardson (Northeastern University)
Kate Weisburd (George Washington University)
Matthew Tokson (University of Utah)

Follow ND TEC on Twitter and LinkedIn
Lost in Translation: When Machines Learn Language
Jul 27 2022
Lost in Translation: When Machines Learn Language
Host Kirsten Martin is joined by Amandalynne Paullada, a postdoctoral fellow at the University of Washington’s Department of Biomedical Informatics and Medical Education. Amandalynne recently earned her Ph.D. in computational linguistics from Washington, where her dissertation examined the social impact of natural language processing, or NLP, wherein computers are programmed to learn human languages.

She came on the show to talk about a paper she authored in The Gradient titled “Machine Translation Shifts Power,” which was a runner-up for the inaugural Gradient Prize.

Amandalynne and Kirsten begin by discussing the use of platforms like Google Translate to analyze the social media feeds of people seeking to enter the United States, a task for which those tools were not designed and one where mistranslations can have significant negative repercussions for the individuals being vetted. They then talk about translation being a means to exert power since well before the advent of machine learning and the questions raised by using technology to translate a language you don’t otherwise understand.

Their conversation also covers the implications of tech giants having access to massive amounts of natural language data, minimizing the role of trained interpreters; the notion of “paranoid reading” with respect to translated texts; and how NLP and human translators can work together to produce translations that are not just technically but also contextually accurate.

Episode Links
Article Discussed in the Episode: “Machine Translation Shifts Power”
Amandalynne’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Amandalynne highlighted three:
Chelsea Barabas (MIT)
Ranjit Singh (Data & Society Research Institute)
Danielle Carr (UCLA)

Follow ND TEC on Twitter and LinkedIn
Creative Speculation: Computer Science Taps Science Fiction
Jul 13 2022
Creative Speculation: Computer Science Taps Science Fiction
Host Kirsten Martin is joined by Casey Fiesler, an assistant professor in the Department of Information Science (and Computer Science, by courtesy) at the University of Colorado Boulder. Her research currently focuses on big data research ethics, ethics education, ethical speculation in technology design, technology empowerment for marginalized communities, and broadening participation in computing, with much of this work supported by the National Science Foundation, Mozilla, and Omidyar.

Casey came on the show to talk about a paper she authored in the Colorado Technology Law Journal titled “Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination.”

Kirsten and Casey begin with the notion of unanticipated consequences in the development of new technology and Casey’s efforts, drawing on both legal education and science fiction, to get computer science and information science students thinking creatively about problems that could arise after a design has been deployed. They also discuss why critiquing technology is not the same thing as being against it, with Casey pointing to her love of tech as the reason she’s so invested in ways to make it better.

Episode Links
Article Discussed in the Episode: “Innovating Like an Optimist, Preparing Like a Pessimist: Ethical Speculation and the Legal Imagination”
Casey’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Casey highlighted two as well as an educational initiative in computer science:
Amy Ko (University of Washington)
Nicki Washington (Duke University)
Responsible Computer Science Challenge

Follow ND TEC on Twitter and LinkedIn
An Evolutionary Case for Better Privacy Regulations
Jun 22 2022
An Evolutionary Case for Better Privacy Regulations
Host Kirsten Martin is joined by Laura Brandimarte, an assistant professor of management information systems at the University of Arizona’s Eller College of Management. Holding a Ph.D. in public policy and management from Carnegie Mellon University, she specializes in privacy and behavioral economics, including the psychology of self-disclosure and the social dynamics of privacy decision-making and information-sharing.

Laura came on the show to talk about a paper she coauthored with Alessandro Acquisti (Carnegie Mellon University) and Jeff Hancock (Stanford University) titled “How privacy’s past may shape its future,” which appeared in January in Science magazine.

Referencing work that points to the notion of privacy being present throughout human history, Laura explains that privacy management is about our ability to moderate what we share and with whom, not never sharing anything. But she notes that the strategies humans have developed evolutionarily to manage our privacy—e.g., having a conversation in hushed tones so no one but the person we’re speaking to hears—often don’t have an online equivalent and thus aren’t helpful in that context.

Laura also discusses why an overreliance on the “notice and consent” approach to privacy—typified by a website presenting users with a long set of terms and conditions when they go to use it—makes it difficult to impossible for people to arrive at the best privacy decisions for themselves. Drawing on an analogy from the automotive industry and citing a lack of incentives for data holders to make changes to how they handle that data, she and her coauthors argue for regulations that move beyond notice and consent and shift responsibility for sound privacy practices to those gathering our data in the first place.

Episode Links
Article Discussed in the Episode: “How privacy’s past may shape its future”
Laura’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Laura highlighted Joy Buolamwini, founder of the Algorithmic Justice League, an organization devoted to equitable and accountable AI.
More on Joy Buolamwini

Follow ND TEC on Twitter and LinkedIn
Not the (Speech) Chilling Effect We Think
Jun 1 2022
Not the (Speech) Chilling Effect We Think
Host Kirsten Martin is joined by Suneal Bedi, an assistant professor of business law and ethics at Indiana University’s Kelley School of Business. Suneal’s areas of expertise include intellectual property, marketing law/ethics, brand strategy, and the First Amendment. Holding a joint Ph.D. in marketing and business ethics from The Wharton School of the University of Pennsylvania as well as a J.D. from Harvard Law School, he employs multiple methods in his research to answer business-relevant questions that sit at the intersection of law, marketing, and public policy.

Suneal came on the show to talk about a paper he recently published in the Harvard Journal of Law & Technology titled “The Myth of the Chilling Effect.”

He and Kirsten started by talking about how the First Amendment and Section 230 of the Communications Decency Act—the provision that protects tech companies from being liable for user content posted on their platforms—are routinely misapplied in debates about content moderation on social media and elsewhere.

Suneal then explained the study he conducted where he asked participants to write negative reviews of dining experiences to test whether putting restrictions on what people can post online does in fact have what’s known as a “chilling effect,” or the consequence of deterring speech in unintended ways. He did find evidence of this effect, but not in terms of the substance of what people were saying; rather, it tended to make their tone slightly more positive. He and Kirsten also discussed how a lack of content moderation can have its own type of chilling effect by excluding marginalized groups who may not feel comfortable on the platform.

Episode Links
Article Discussed in the Episode: “The Myth of the Chilling Effect”
Suneal’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar (or several) in tech ethics whose work our guest is particularly excited about. In addition to saying he’s interested in seeing what happens with Twitter in light of the Elon Musk news—a topic Kirsten has been quoted on widely in recent weeks, including in this story from CNN—Suneal cited the work of George Washington University’s Vikram Bhargava, the guest for the first episode of TEC Talks.

Follow ND TEC on Twitter and LinkedIn
Don’t Take the Data and Run
May 11 2022
Don’t Take the Data and Run
Host Kirsten Martin is joined by Katie Shilton, an associate professor in the College of Information Studies at the University of Maryland, College Park, where she leads the Ethics and Values in Design (EViD) Lab. Her research focuses on ethics and policy for the design of information technologies, systems, and collections, and she is a co-principal investigator of the PERVADE project, a multi-campus collaboration focused on big data research ethics funded by the National Science Foundation.

Katie came on the show to talk about a paper she recently coauthored with the members of the PERVADE team titled “Excavating awareness and power in data science: A manifesto for trustworthy pervasive data research,” which appeared in Big Data & Society.

PERVADE was created to tackle unanswered empirical questions facing researchers working with big data—such as that gathered from social media platforms—and this paper in particular was a first attempt at making recommendations based on input from three main stakeholder groups: the researchers themselves, institutional review boards (IRBs) and other regulators, and social media users.

Katie and Kirsten talked about how the ethical challenges of working with big data aren’t actually due to its bigness; rather, they arise because of how pervasive data collection has become. Katie explained how the traditional lab-based model for conducting ethical research doesn’t translate well to the big data space, discussing what researchers might instead learn from anthropologists, specifically ethnographers. Kirsten then brought up the applicability of this mindset not only in academia but also in corporate research environments.

Oh, and if you listen closely, you’ll catch a cameo from one of Kirsten’s dogs, who was determined to play tug during the interview.

Episode Links
Article Discussed in the Episode: “Excavating awareness and power in data science: A manifesto for trustworthy pervasive data research”
Katie’s Bio
PERVADE Project
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar (or several) in tech ethics whose work our guest is particularly excited about. Katie highlighted three:
Anna Lauren Hoffmann (University of Washington)
Emily Bender (University of Washington)
Amandalynne Paullada (University of Washington)

Follow ND TEC on Twitter and LinkedIn
Social Media Addiction: Adding Insult to Injury
Apr 20 2022
Social Media Addiction: Adding Insult to Injury
Host Kirsten Martin is joined by Vikram Bhargava, an assistant professor of strategic management and public policy at the George Washington University School of Business. His research focuses on technology addiction, mass social media outrage, autonomous vehicles, artificial intelligence, the future of work, and other topics related to digital technology policy.

Vik came on the show to talk about a paper he recently coauthored with Manuel Velasquez of Santa Clara University titled “Ethics of the Attention Economy: The Problem of Social Media Addiction,” which appeared in Business Ethics Quarterly. In it, they “argue that addicting users to social media is impermissible because it unjustifiably harms users in a way that is both demeaning and objectionably exploitative.”

Vik talked with Kirsten about how social media addiction raises ethical issues we haven’t seen before with other types of addictive products, using his morning cup of coffee to illustrate the distinction and what in the paper he and Velasquez call the “adding insult to injury argument.” Vik also discussed how the picture is further complicated by the fact that a social media account is routinely the most straightforward way to access certain social goods—e.g., job search websites—and his ideas on possible ways forward given that social media does provide benefits to society, as well.

Episode Links
Article Discussed in the Episode: “Ethics of the Attention Economy: The Problem of Social Media Addiction”
Vik’s Bio
Episode Transcript

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Vik highlighted Dartmouth’s Sonu Bedi, specifically his research on race-based filters in dating app algorithms.
More on Sonu Bedi

Follow ND TEC on Twitter and LinkedIn