
Why AI Should Be Taught to Know Its Limits

WSJ’s The Future of Everything

05-01-2024 • 17 minutes

One of AI’s biggest unsolved problems is what advanced algorithms should do when they confront a situation they don’t have an answer for. For programs like ChatGPT, that could mean providing a confidently wrong answer, often called a “hallucination”; for others, such as self-driving cars, the consequences could be much more serious. But what if AIs could be taught to recognize what they don’t understand and adjust accordingly? Usama Fayyad, the executive director of the Institute for Experiential Artificial Intelligence at Northeastern University, thinks this could be the algorithmic answer to making future AIs better at what they do, by doing something too few humans can: recognizing their own limits.

What do you think about the show? Let us know on Apple Podcasts or Spotify, or email us: FOEPodcast@wsj.com

Further reading:
How Did Companies Use Generative AI in 2023? Here’s a Look at Five Early Adopters.
Your Medical Devices Are Getting Smarter. Can the FDA Keep Them Safe?
Artificial: The OpenAI Story

You might like

Acquired • Ben Gilbert and David Rosenthal
Darknet Diaries • Jack Rhysider
Hard Fork • The New York Times
Marketplace Tech • Marketplace
WSJ’s The Future of Everything • The Wall Street Journal
Rich On Tech • Rich DeMuro
Search Engine • PJ Vogt, Audacy, Jigsaw
TechStuff • iHeartPodcasts
The Vergecast • The Verge
Waveform: The MKBHD Podcast • Vox Media Podcast Network