How to protect your LLM against Prompt Injections

The Prompt Desk

May 1 2024 • 22 mins

In this episode, we discuss, how we might protect prompt-based applications and LLMs from prompt injection. We discuss how data validation was done in the 1960s and modern libraries and techniques that can successfully act as a first line of defense against prompt injection. We touch on the idea that using other types of models, such as decision trees, conventional NLP pipelines, embedding models, or neural networks trained on datasets different from typical LLM training data, might be used to validate inputs before sending them to an LLM.


Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad’s AI Consultancy at bradleyarsenault.me
Add Justin Macorin and Bradley Arsenault on LinkedIn.

Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link


Hosted by Ausha. See ausha.co/privacy-policy for more information.

You Might Like

Acquired
Acquired
Ben Gilbert and David Rosenthal
Darknet Diaries
Darknet Diaries
Jack Rhysider
Hard Fork
Hard Fork
The New York Times
Marketplace Tech
Marketplace Tech
Marketplace
WSJ’s The Future of Everything
WSJ’s The Future of Everything
The Wall Street Journal
Search Engine
Search Engine
PJ Vogt, Audacy, Jigsaw
TechStuff
TechStuff
iHeartPodcasts
Rich On Tech
Rich On Tech
Rich DeMuro
The Vergecast
The Vergecast
The Verge
Waveform: The MKBHD Podcast
Waveform: The MKBHD Podcast
Vox Media Podcast Network