The Prompt Desk

Prompt Injections to Coerce any LLM to do Anything You Want with Jonas Geiping

42 min | 27/03/2024

Description

Prompt injections may be the single biggest risk to the deployment of LLM systems. In this episode of The Prompt Desk, your host Bradley Arsenault sits down with researcher Jonas Geiping to discuss his work on LLM security.

Jonas and his colleagues have developed optimization techniques that find adversarial prompts capable of coercing nearly any LLM into doing almost anything. The open-ended nature of these security flaws poses serious risks to virtually any company adopting LLM technology.

See the paper: "Coercing LLMs to do and reveal (almost) anything": https://arxiv.org/abs/2402.14020

Add Jonas Geiping on LinkedIn: https://www.linkedin.com/in/jonas-geiping-9684441b5/


Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad’s AI Consultancy at bradleyarsenault.me
Add Justin Macorin and Bradley Arsenault on LinkedIn.

Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link


Hosted by Ausha. See ausha.co/privacy-policy for more information.
