Legal Decisions are Being Codified and the Models are Perpetuating Historical Biases | Episode 1.14

Technically Biased

Dec 13 2023 • 39 mins

Patrick K. Lin is a lawyer and researcher focused on AI, privacy, and technology regulation. He is the author of Machine See, Machine Do, a book that explores the ways public institutions use technology to surveil, police, and make decisions about the public, as well as the historical biases that impact that technology.

Patrick has extensive experience in litigation and policy, having worked for the ACLU, FTC, EFF, and other organizations that advocate for digital rights and social justice. He is passionate about addressing the ethical and legal challenges posed by emerging technologies, especially in the areas of surveillance, algorithmic bias, and data privacy. He has also published articles and papers on facial recognition, data protection, and copyright law.

This podcast episode covers some of the many eye-opening topics Lin dives into throughout his book, including the following discussions:

  • Robert Moses was fond of saying, “Legislation can always be changed. It’s very hard to tear down a bridge once it’s up.” Unsurprisingly, then, Moses had enormous influence in shaping the physical layout and infrastructure of New York City and its surrounding suburbs (e.g., hundreds of miles of roads, the Central Park Zoo, the United Nations (UN) Headquarters, Lincoln Center, and more). Today, the digital landscape is similarly being built on a foundation of bias.
  • Can history be biased? How do we codify bias and build legal models that perpetuate discrimination in policy? Though these are not easy questions, the answer lies in the data. Lin ends his book by emphasizing that “If we are not willing to reflect on the effects of our history, then our technology will simply continue to mirror our past mistakes. History will be doomed to repeat itself.” (178)
  • It is important to understand what a model outputs and which inputs feed into the overall assessment. Algorithms like COMPAS, used in the criminal justice system, consider variables such as education, which is indirectly classist because education is a proxy for wealth (see the first sketch after this list). (120)
  • The government uses surveillance technology disproportionately to target immigrant communities, and new systems and technologies are usually tested on immigrants first. This is yet another example of how those most affected are those who are already most marginalized.
  • Bias is present throughout all stages of policing – from criminal trials (where judges use biased algorithms to validate their already biased perspectives, i.e., confirmation bias), to the recidivism assessment process (i.e., models like the aforementioned COMPAS), to cash bail, and beyond.
  • Automated License Plate Readers (ALPRs) might seem harmless until you realize that “ALPRs often capture photographs of the vehicle, driver, and passengers [and] all of this data is uploaded to a central server that is accessible by law enforcement… The government can use ALPRs to target people who drive to immigration clinics, Planned Parenthood health centers, gun shops, union meetings, protests, or places of religious worship. ALPR vendors have stated police can use the collected information to find out where a license plate has been in the past, determine whether a vehicle was at a crime scene, identify travel patterns, and even discover vehicles that may be associated with each other.” (64-66)
  • Generative AI models have been trained on nonconsensual pornography. How can we mitigate such breaches of privacy?
  • Intellectual property and copyright law play an interesting role here, often working in the best interest of the AI industry, which is incentivized to keep the space unregulated.
  • Overrepresentation of certain groups in a model’s training data can be an indicator of discrimination. What can we do to hedge against such bias in an algorithm’s early phases? (See the second sketch after this list.)
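
To make the proxy-variable point from the COMPAS discussion concrete, here is a minimal Python sketch. The data is entirely hypothetical (synthetic, not COMPAS’s actual inputs or weights); it only illustrates that when a hidden attribute like wealth drives an observed feature like education, any score built on the observed feature implicitly scores the hidden one too.

```python
import random

# Hypothetical illustration (synthetic data, not COMPAS itself): a
# seemingly neutral feature like "education" can act as a proxy for
# wealth, so a model that never sees wealth can still encode class bias.

random.seed(0)

def make_person():
    wealth = random.gauss(0, 1)                # latent attribute the model never sees
    education = wealth + random.gauss(0, 0.5)  # education correlates with wealth
    return wealth, education

people = [make_person() for _ in range(10_000)]
wealth, education = zip(*people)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Prints roughly 0.89: any score built on education implicitly scores
# wealth too, even though wealth never appears as a model input.
print(f"corr(education, wealth) = {pearson(education, wealth):.2f}")
```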
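For the last bullet’s question about early-phase checks, here is one simple, hypothetical approach (an assumption of mine, not a method from the book): compare each group’s share of the training data against a reference population share and flag ratios that stray far from 1.0.

```python
from collections import Counter

# Hypothetical sketch: flag groups whose share of the training data
# diverges from their share of a reference population. The group names,
# counts, shares, and thresholds below are all made up for illustration.

def representation_ratios(training_labels, population_shares):
    counts = Counter(training_labels)
    total = sum(counts.values())
    return {
        group: (counts.get(group, 0) / total) / share
        for group, share in population_shares.items()
    }

training_labels = ["A"] * 450 + ["B"] * 390 + ["C"] * 160
population_shares = {"A": 0.5, "B": 0.3, "C": 0.2}  # reference baseline

for group, ratio in representation_ratios(training_labels, population_shares).items():
    flag = "  <-- investigate" if not 0.8 <= ratio <= 1.25 else ""
    print(f"group {group}: representation ratio {ratio:.2f}{flag}")
# Only group B (ratio 1.30) is flagged as overrepresented here.
```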


#AlgorithmicBias #PredatoryTech #TechnicallyBiasedPodcast #Gakovii

--- Send in a voice message: https://podcasters.spotify.com/pod/show/technically-biased/message
