The Prompt Desk

Prompting in Tool Results

19min | 31/07/2024

Description

If you are using system prompts, chat completions, and tool calls with OpenAI, you might find it challenging to get the model to follow your prompts. If you are like the show hosts, you have probably run into certain instructions that your bot simply refuses to follow!

In this episode of The Prompt Desk, host Bradley Arsenault shares with Justin Macorin a technique he has been using to make the bot reliably comply with instructions it would otherwise ignore.

The technique is to inject additional instructions mid-conversation through tool results. Instead of treating a tool result as pure data, think of it as combining data with a prompt that tells the model what to do with that data. With this technique, the show hosts have significantly improved the reliability of their bots. A sketch of the idea follows below.
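
Here is a minimal sketch of the pattern in Python with the OpenAI chat completions API. The tool name get_account_balance, the model name, and the instruction text are illustrative assumptions rather than details from the episode; the key part is the tool message whose content bundles the returned data together with a prompt.

import json
from openai import OpenAI

client = OpenAI()

# Hypothetical tool the model can call.
def get_account_balance(account_id):
    return {"account_id": account_id, "balance": 1234.56, "currency": "USD"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_account_balance",
        "description": "Look up the balance for an account.",
        "parameters": {
            "type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"],
        },
    },
}]

messages = [
    {"role": "system", "content": "You are a banking assistant."},
    {"role": "user", "content": "What's the balance on account 42?"},
]

# First call: the model decides to invoke the tool.
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
assistant_message = response.choices[0].message
messages.append(assistant_message)
call = assistant_message.tool_calls[0]
args = json.loads(call.function.arguments)

# The key idea: the tool result is not just data. It wraps the data
# together with an instruction telling the model what to do with it.
result = {
    "data": get_account_balance(args["account_id"]),
    "instructions": "State the balance in one sentence. Do not reveal the "
                    "account_id. Round to the nearest dollar.",
}
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": json.dumps(result),
})

# Second call: the model answers, guided by the instructions it just
# read inside the tool result.
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)

Per the hosts, instructions delivered this way, right next to the data they govern, are followed more reliably than the same rules stated only once in the system prompt.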

---
Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad’s AI Consultancy at bradleyarsenault.me
Add Justin Macorin and Bradley Arsenault on LinkedIn.

Please fill out our listener survey to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link


Hosted by Ausha. See ausha.co/privacy-policy for more information.
