The Prompt Desk


Description

Embark on a captivating exploration of Large Language Models (LLMs), prompt engineering, and generative AI with hosts Bradley Arsenault and Justin Macorin. With 25 years of combined machine learning and product engineering experience, they are delving deep into the world of LLMs to uncover best practices and stay at the forefront of AI innovation. Join them in shaping the future of technology and software development through their discoveries in LLMs and generative AI.


Podcast website: https://promptdesk.ai/podcast


Hosted by Ausha. See ausha.co/privacy-policy for more information.


53 episodes

  • Introducing Prospera Labs

    This episode marks a fresh start for the podcast, with new co-host Raman joining Brad after previous co-host Justin's departure. The conversation introduces Prospera Labs, their AI company, and discusses their vision for the future of the podcast and for AI agents in business.

    Key Points:
    - Prospera Labs focuses on creating AI agents (they specifically avoid the term "chatbots" due to negative associations with older technology)
    - They offer two main products: an AI receptionist that can be set up in minutes to handle calls and bookings, and a sales agent developed to handle their own high volume of incoming requests

    Future Vision:
    - They predict AI agents will become as essential for businesses as websites were in the 1990s
    - Plan to develop a low-code/no-code platform for entrepreneurs to build custom AI agents
    - Focus on serving business people rather than engineers, unlike competitors
    - Future plans include integration of video, image technology, and security monitoring

    Business Strategy:
    - Current dual approach: ready-to-use SaaS products and custom AI agent development
    - Long-term goal of a platform where businesses can build their own AI agents
    - Emphasis on testing and using their own platform before releasing features to customers
    - Commitment to maintaining consulting services as long as they provide genuine value

    Background: Raman shares his history of running a software development company for 15-18 years and coming out of retirement to join this venture, while Brad brings AI expertise and existing technology infrastructure to the partnership.

    The episode provides insight into their business philosophy, emphasizing the importance of creating genuine value and maintaining integrity in their AI solutions while making the technology accessible to non-technical entrepreneurs.

    Hosted by Ausha. See ausha.co/privacy-policy for more information.

    26min | Published on October 25, 2024

  • What we learned about LLMs in a year

    In this one-year anniversary episode of the podcast, your show hosts look back on what they have learned and observed over the past year. They made the following observations:
    - Reflection on a year of LLM development: the LLM ecosystem has rapidly evolved over the past year, moving from initial hype to a focus on practical applications with demonstrable ROI.
    - Changes in prompt engineering: they note a shift from instruction-based prompting to example-based prompting, finding the latter more reliable.
    - Impact on software development: LLMs have significantly increased productivity and are changing how software is architected and developed, making previously complex tasks much simpler and faster to implement.
    - Shifting skill requirements: the hosts predict that soft skills will become more valuable for engineers, while knowledge of specific programming languages may become less crucial due to AI assistance.
    - Evolving perspectives: both hosts describe how their views on LLMs have changed, moving from initial skepticism to greater optimism about the technology's potential and impact.
    - Future of the field: they emphasize the transformative effect of LLMs on the entire field of computer science and encourage newcomers to stick with it, predicting that those who do will become the experts shaping the future of technology.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Hosted by Ausha. See ausha.co/privacy-policy for more information.

    21min | Published on October 2, 2024

  • Validating Inputs with LLMs

    In this thought-provoking episode, AI experts Bradley Arsenault and Justin Macorin delve into the cutting-edge world of input validation using large language models (LLMs). The hosts explore how this innovative approach is transforming traditional data validation methods in software applications.

    The conversation begins with a look at conventional input validation techniques, primarily relying on regular expressions (RegEx) for basic text-field checks. Justin then introduces the concept of using LLMs to validate more complex inputs, such as job titles or descriptions, which go beyond simple character-level validation.

    Throughout the episode, the hosts discuss the benefits of LLM-based validation, including improved downstream results in AI systems, a reduction in nonsensical inputs, and the potential to decrease reliance on customer support teams. They also tackle the implementation challenges, such as increased computation costs and the risk of rejecting valid inputs in edge cases.

    Bradley and Justin provide insights into the technical aspects of implementing LLM validation, discussing where in the software architecture to place these checks and how to handle user feedback. They also explore future improvements to the system, such as incorporating probability thresholds and enhancing error messages with helpful examples.

    The episode concludes with a balanced view of the trade-offs involved in adopting this new approach. While acknowledging the potential for user resistance and increased complexity, the hosts argue that the long-term benefits of cleaner data and reduced errors make LLM-based validation a promising frontier in software development. This episode offers valuable insights for developers, data scientists, and product managers looking to enhance their applications' data quality and user experience through advanced AI techniques.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Hosted by Ausha. See ausha.co/privacy-policy for more information.
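    The two-stage validation flow described in this episode can be sketched in a few lines. This is a hypothetical illustration, not code from the show: `ask_llm` stands in for any chat-completion client, and the regex rule and error messages are invented for the example. A cheap RegEx pass runs first so the LLM is only consulted for inputs that already look syntactically plausible.

    ```python
    import re

    def validate_job_title(title: str, ask_llm) -> tuple[bool, str]:
        """Return (is_valid, message). `ask_llm` maps a prompt string to a reply."""
        # Conventional RegEx pass: length and allowed characters only.
        if not re.fullmatch(r"[A-Za-z0-9 ,./&'()-]{2,80}", title.strip()):
            return False, "Job titles must be 2-80 characters with no unusual symbols."

        # Semantic pass: ask the model for a strict YES/NO judgment.
        prompt = (
            "Is the following text a plausible job title? "
            "Answer exactly YES or NO.\n\n"
            f"Text: {title}"
        )
        verdict = ask_llm(prompt).strip().upper()
        if verdict.startswith("YES"):
            return True, "OK"
        # Reject with a helpful example, as the hosts suggest for error messages.
        return False, "That doesn't look like a job title. Example: 'Senior Data Engineer'."
    ```

    Ordering the checks this way keeps computation costs down, since most garbage input never reaches the model.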

    23min | Published on September 25, 2024

  • Why you can't automate everything with LLMs

    In this episode of Prompt Desk, hosts Bradley Arsenault and Justin Macorin dive into the evolving landscape of AI applications. They explore the shift from full-automation dreams to the reality of human-AI collaboration, discussing why guided experiences are proving more effective than autonomous systems. The hosts share insights on designing AI interfaces, managing error rates, and the importance of human oversight in complex AI tasks. Whether you're an AI enthusiast, developer, or business leader, this episode offers valuable perspectives on harnessing the power of large language models while avoiding common pitfalls. Join Bradley and Justin as they unpack the challenges and opportunities in creating reliable, user-friendly AI applications that truly augment human capabilities.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
    Hosted by Ausha. See ausha.co/privacy-policy for more information.

    18min | Published on September 18, 2024

  • Data Preparation Best Practices for Fine Tuning

    In this episode of The Prompt Desk podcast, hosts Bradley Arsenault and Justin Macorin dive deep into the world of fine-tuning large language models. They discuss:
    - The evolution of data preparation techniques from traditional NLP to modern LLMs
    - Strategies for creating high-quality datasets for fine-tuning
    - The surprising effectiveness of small, well-curated datasets
    - Best practices for aligning training data with production environments
    - The importance of data quality and its impact on model performance
    - Practical tips for engineers working on LLM fine-tuning projects

    Whether you're a seasoned AI practitioner or just getting started with large language models, this episode offers valuable insights into the critical process of data preparation and fine-tuning. Join Brad and Justin as they share their expertise and help you navigate the challenges of building effective AI systems.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Hosted by Ausha. See ausha.co/privacy-policy for more information.
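    The "align training data with production" practice from this episode can be made concrete with a sketch. This is a hypothetical example, not from the show: the chat-style JSONL shape below matches the format used by several fine-tuning APIs (the exact schema varies by provider), and the ticket-classification task, labels, and prompt text are invented for illustration.

    ```python
    import json

    # The training system prompt should be byte-identical to the one used
    # in production, so the fine-tuned model sees the same context at
    # inference time that it saw during training.
    PRODUCTION_SYSTEM_PROMPT = "You classify support tickets as BILLING, BUG, or OTHER."

    def to_jsonl_record(ticket_text: str, label: str) -> str:
        """Serialize one curated training example as a JSONL line."""
        record = {
            "messages": [
                {"role": "system", "content": PRODUCTION_SYSTEM_PROMPT},
                {"role": "user", "content": ticket_text},
                {"role": "assistant", "content": label},
            ]
        }
        return json.dumps(record)

    # A small, well-curated dataset can be surprisingly effective; consistent
    # labels and production-like inputs matter more than sheer volume.
    dataset = [
        to_jsonl_record("I was charged twice this month.", "BILLING"),
        to_jsonl_record("The export button crashes the app.", "BUG"),
    ]
    ```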

    20min | Published on September 11, 2024

  • Multilingual Prompting

    Have you ever tried to run a prompt in another language and gotten completely different results? It is a sad truth that LLMs continue to work best in English. That will probably always remain true because of the dominance of the English language, which ensures that a disproportionate amount of the world's data is in English. In this episode of The Prompt Desk, your hosts Justin Macorin and Bradley Arsenault discuss their relatively brief experiences with trying to get prompts working well in other languages, and provide some tips and tricks based on what they have learned so far.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
    Hosted by Ausha. See ausha.co/privacy-policy for more information.

    15min | Published on August 28, 2024

  • Safely Executing LLM Code

    In this episode, AI experts Bradley Arsenault and Justin Macorin dive deep into the challenges and best practices for safely executing code generated by large language models in a production environment. They discuss key security considerations, containerization techniques, static and dynamic code analysis, and error handling, providing valuable insights for anyone looking to leverage the power of LLMs while mitigating the risk of abuse by attackers.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
    Hosted by Ausha. See ausha.co/privacy-policy for more information.
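    One layer of the defense-in-depth approach discussed here can be sketched as follows. This is a minimal illustrative example, not the hosts' implementation: it runs model-generated Python in a separate interpreter process with a hard timeout and an empty environment. In production you would add containerization (e.g. Docker or gVisor), static analysis, and OS-level resource limits on top.

    ```python
    import os
    import subprocess
    import sys
    import tempfile

    def run_untrusted(code: str, timeout_s: float = 5.0) -> tuple[int, str, str]:
        """Execute `code` in a child interpreter; return (exit_code, stdout, stderr)."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            proc = subprocess.run(
                [sys.executable, "-I", path],  # -I: isolated mode, ignores user site/env
                capture_output=True,
                text=True,
                timeout=timeout_s,
                env={},                        # child inherits no environment variables
            )
            return proc.returncode, proc.stdout, proc.stderr
        except subprocess.TimeoutExpired:
            # subprocess.run kills the child on timeout before raising.
            return -1, "", f"timed out after {timeout_s}s"
        finally:
            os.unlink(path)
    ```

    The timeout handles the common failure mode of generated code looping forever, and the isolated child process keeps crashes and exceptions out of the host application.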

    18min | Published on August 21, 2024

  • How to Rescue AI Innovation at Big Companies

    In this episode, we dive into how companies can compete and innovate in the age of AI. We explore the limitations of simple "Generate with AI" buttons and discuss the need for more radical changes in AI implementation. Drawing from Clayton Christensen's work on disruptive innovation, we examine why large organizations struggle with adopting transformative technologies and suggest strategies for both big and small companies to stay competitive. We also touch on the challenges of transitioning traditional business models to AI-driven approaches, the importance of organizational agility, and the potential of modular organizational structures. Whether you're part of a Fortune 500 company or a nimble startup, this episode offers insights on navigating the AI revolution and maintaining your competitive edge.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
    Hosted by Ausha. See ausha.co/privacy-policy for more information.

    19min | Published on August 14, 2024

  • How UX Will Change With Integrated Advice

    Are you tired of hearing about agents? Chatbot this, agent that. It's the only way to apply AI that anyone talks about. But are agents the only way to use LLM technology? Is everything just converging towards a conversational interface?

    In this episode of The Prompt Desk, your show hosts discuss a different way to use cutting-edge AI in user-interface design: integrated advice. Using LLMs, we can provide AI-driven tips, tricks, hints, and feedback on the open-ended text boxes in our user interfaces. In everything from writing job descriptions and product descriptions through to social media posts or comments, user interfaces are full of open-ended text inputs that are just dying to receive a 21st-century makeover.

    Download the episode to hear more!

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
    Hosted by Ausha. See ausha.co/privacy-policy for more information.

    17min | Published on August 7, 2024

  • Prompting in Tool Results

    If you are using system prompts, chat completions, and tool completions with OpenAI, you might find it challenging to get the model to follow your prompts. If you are like the show hosts, you often find that there are certain instructions that your bot just refuses to listen to!

    In the latest episode of The Prompt Desk, show host Bradley Arsenault shares with Justin Macorin the latest technique he has been using to improve the reliability of behaviours that the bot refuses to comply with. That new technique is to add additional instructions in prompts mid-conversation using tool results. Instead of seeing a tool result as simply containing data, you should rethink your tool results as combining both data and a prompt on what the bot should do with that data. With this technique, your show hosts have improved the reliability of their bots significantly.

    ---
    Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
    Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.
    Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)
    Add Justin Macorin and Bradley Arsenault on LinkedIn.
    Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
    Hosted by Ausha. See ausha.co/privacy-policy for more information.
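    The technique described here can be sketched as a helper that wraps tool data with an inline instruction. This is an illustrative example, not the hosts' code: the message shape follows the OpenAI chat-completions convention for `tool` role messages, and the booking fields, tool-call ID, and instruction text are invented for the example.

    ```python
    import json

    def tool_result_message(tool_call_id: str, data: dict, instructions: str) -> dict:
        """Build a tool-role message combining data with a mid-conversation prompt."""
        return {
            "role": "tool",
            "tool_call_id": tool_call_id,
            # The content carries both the data and a prompt telling the model
            # what to do with it, rather than the bare data alone.
            "content": json.dumps({"data": data, "instructions": instructions}),
        }

    msg = tool_result_message(
        "call_123",
        {"available_slots": ["10:00", "14:30"]},
        "Offer the customer at most two slots, in their local time, "
        "and do not invent additional availability.",
    )
    ```

    Because the instruction arrives adjacent to the data it governs, the model is more likely to apply it at that exact turn than if the same rule sat far back in the system prompt.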

    19min | Published on July 31, 2024


Description

Embark on a captivating exploration of Large Language Models (LLMs), prompt engineering, and generative AI with hosts Bradley Arsenault and Justin Macorin. With 25 years of combined machine learning and product engineering experience, they are delving deep into the world of LLMs to uncover best practices and stay at the forefront of AI innovation. Join them in shaping the future of technology and software development through their discoveries in LLMs and generative AI.


Podcast website: https://promptdesk.ai/podcast


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Description

Embark on a captivating exploration of Large Language Models (LLMs), prompt engineering, and generative AI with hosts Bradley Arsenault and Justin Macorin. With 25 years of combined machine learning and product engineering experience, they are delving deep into the world of LLMs to uncover best practices and stay at the forefront of AI innovation. Join them in shaping the future of technology and software development through their discoveries in LLMs and generative AI.


Podcast website: https://promptdesk.ai/podcast


Hosted by Ausha. See ausha.co/privacy-policy for more information.

53 episodes

  • Introducing Prospera Labs cover
    Introducing Prospera Labs cover
    Introducing Prospera Labs

    This episode marks a fresh start for the podcast with new co-host Raman joining Brad after their previous co-host Justin's departure. The conversation introduces Prospera Labs, their AI company, and discusses their vision for the future of the podcast and for AI agents in business. Key Points: Prospera Labs focuses on creating AI agents (they specifically avoid the term "chatbots" due to negative associations with older technology) They offer two main products: An AI receptionist that can be set up in minutes to handle calls and bookings A sales agent developed to handle their own high volume of incoming requests Future Vision: They predict AI agents will become as essential for businesses as websites were in the 1990s Plan to develop a low-code/no-code platform for entrepreneurs to build custom AI agents Focus on serving business people rather than engineers, unlike competitors Future plans include integration of video, image technology, and security monitoring Business Strategy: Current dual approach: Ready-to-use SaaS products and custom AI agent development Long-term goal to create a platform where businesses can build their own AI agents Emphasis on actually testing and using their own platform before releasing features to customers Commitment to maintaining consulting services as long as they provide genuine value Background: Raman shares his history of running a software development company for 15-18 years and coming out of retirement to join this venture, while Brad brings AI expertise and existing technology infrastructure to the partnership. The episode provides insight into their business philosophy, emphasizing the importance of creating genuine value and maintaining integrity in their AI solutions while making the technology accessible to non-technical entrepreneurs. Hosted by Ausha. See ausha.co/privacy-policy for more information.

    26min | Published on October 25, 2024

  • What we learned about LLM’s in a year cover
    What we learned about LLM’s in a year cover
    What we learned about LLM’s in a year

    In this one year anniversary episode of the podcast, your show hosts look back on what they have learned and observed over the past year. They made the following observations: Reflection on a year of LLM development: The hosts discuss how the LLM ecosystem has rapidly evolved over the past year, moving from initial hype to a focus on practical applications with demonstrable ROI. Changes in prompt engineering: They note a shift from instruction-based prompting to example-based prompting, finding the latter more reliable. Impact on software development: LLMs have significantly increased productivity and are changing how software is architected and developed, making previously complex tasks much simpler and faster to implement. Shifting skill requirements: The hosts predict that soft skills will become more valuable for engineers, while specific programming language knowledge may become less crucial due to AI assistance. Evolving perspectives: Both hosts describe how their views on LLMs have changed, moving from initial skepticism to greater optimism about the technology's potential and impact. Future of the field: They emphasize the transformative nature of LLMs on the entire field of computer science and encourage newcomers to stick with it, predicting that those who do will become the experts shaping the future of technology. --- Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security. Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool. Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me) Add Justin Macorin and Bradley Arsenault on LinkedIn. Hosted by Ausha. See ausha.co/privacy-policy for more information.

    21min | Published on October 2, 2024

  • Validating Inputs with LLMs cover
    Validating Inputs with LLMs cover
    Validating Inputs with LLMs

    In this thought-provoking episode, AI experts Bradley Arsenault and Justin Macorin delve into the cutting-edge world of input validation using large language models (LLMs). The hosts explore how this innovative approach is transforming traditional data validation methods in software applications. The conversation begins with a look at conventional input validation techniques, primarily relying on regular expressions (RegEx) for basic text field checks. Justin then introduces the concept of using LLMs to validate more complex inputs, such as job titles or descriptions, which go beyond simple character-level validation. Throughout the episode, the hosts discuss the numerous benefits of LLM-based validation, including improved downstream results in AI systems, reduction in nonsensical inputs, and the potential to decrease reliance on customer support teams. They also tackle the implementation challenges, such as increased computation costs and the risk of rejecting valid inputs in edge cases. Bradley and Justin provide insights into the technical aspects of implementing LLM validation, discussing where in the software architecture to place these checks and how to handle user feedback. They also explore future improvements to the system, such as incorporating probability thresholds and enhancing error messages with helpful examples. The episode concludes with a balanced view of the trade-offs involved in adopting this new approach. While acknowledging the potential for user resistance and increased complexity, the hosts argue that the long-term benefits of cleaner data and reduced errors make LLM-based validation a promising frontier in software development. This episode offers valuable insights for developers, data scientists, and product managers looking to enhance their applications' data quality and user experience through advanced AI techniques. 
--- Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security. Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool. Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me) Add Justin Macorin and Bradley Arsenault on LinkedIn. Hosted by Ausha. See ausha.co/privacy-policy for more information.

    23min | Published on September 25, 2024

  • Why you can't automate everything with LLMs cover
    Why you can't automate everything with LLMs cover
    Why you can't automate everything with LLMs

    In this episode of Prompt Desk, hosts Bradley Arsenault and Justin Macorin dive into the evolving landscape of AI applications. They explore the shift from full automation dreams to the reality of human-AI collaboration, discussing why guided experiences are proving more effective than autonomous systems. The hosts share insights on designing AI interfaces, managing error rates, and the importance of human oversight in complex AI tasks. Whether you're an AI enthusiast, developer, or business leader, this episode offers valuable perspectives on harnessing the power of large language models while avoiding common pitfalls. Join Bradley and Justin as they unpack the challenges and opportunities in creating reliable, user-friendly AI applications that truly augment human capabilities. --- Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security. Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool. Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me) Add Justin Macorin and Bradley Arsenault on LinkedIn. Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link Hosted by Ausha. See ausha.co/privacy-policy for more information.

    18min | Published on September 18, 2024

  • Data Preparation Best Practices for Fine Tuning cover
    Data Preparation Best Practices for Fine Tuning cover
    Data Preparation Best Practices for Fine Tuning

    In this episode of The Prompt Desk podcast, hosts Bradley Arsenault and Justin Macorin dive deep into the world of fine-tuning large language models. They discuss: The evolution of data preparation techniques from traditional NLP to modern LLMs Strategies for creating high-quality datasets for fine-tuning The surprising effectiveness of small, well-curated datasets Best practices for aligning training data with production environments The importance of data quality and its impact on model performance Practical tips for engineers working on LLM fine-tuning projects Whether you're a seasoned AI practitioner or just getting started with large language models, this episode offers valuable insights into the critical process of data preparation and fine-tuning. Join Brad and Justin as they share their expertise and help you navigate the challenges of building effective AI systems. --- Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security. Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool. Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me) Add Justin Macorin and Bradley Arsenault on LinkedIn. Hosted by Ausha. See ausha.co/privacy-policy for more information.

    20min | Published on September 11, 2024

  • Multilingual Prompting cover
    Multilingual Prompting cover
    Multilingual Prompting

    Have you ever tried to run a prompt in another language and gotten completely different results? It is a sad truth that our LLM models continue to work best in English. It is something that will probably always remain true because of the dominance of the English language, which ensures that a disproportionate amount of the worlds data is found in English. In this episode of The Prompt Desk, your hosts Justin Macorin and Bradley Arsenault discuss their relatively brief experiences with trying to get prompts working well in other languages, and provide some tips and tricks based on what they have learned so far. --- Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security. Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool. Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me) Add Justin Macorin and Bradley Arsenault on LinkedIn. Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link Hosted by Ausha. See ausha.co/privacy-policy for more information.

    15min | Published on August 28, 2024

  • Safely Executing LLM Code cover
    Safely Executing LLM Code cover
    Safely Executing LLM Code

    In this episode, AI experts Bradley Arsenault and Justin Macon dive deep into the challenges and best practices for safely executing code generated by large language models in a production environment. They discuss key security considerations, containerization techniques, static/dynamic code analysis, and error handling - providing valuable insights for anyone looking to leverage the power of LLMs while mitigating the risks of abuse by AI hackers.---Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.Check out PromptDesk.ai (http://PromptDesk.ai) for an open-source prompt management tool.Check out Brad’s AI Consultancy at bradleyarsenault.me (http://bradleyarsenault.me)Add Justin Macorin and Bradley Arsenault on LinkedIn.Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link Hosted by Ausha. See ausha.co/privacy-policy for more information.

    18min | Published on August 21, 2024

  • How to Rescue AI Innovation at Big Companies cover
    How to Rescue AI Innovation at Big Companies cover
    How to Rescue AI Innovation at Big Companies

    In this episode, we dive into how companies can compete and innovate in the age of AI. We explore the limitations of simple "Generate with AI" buttons and discuss the need for more radical changes in AI implementation. Drawing from Clayton Christensen's work on disruptive innovation, we examine why large organizations struggle with adopting transformative technologies and suggest strategies for both big and small companies to stay competitive. We also touch on the challenges of transitioning traditional business models to AI-driven approaches, the importance of organizational agility, and the potential of modular organizational structures. Whether you're part of a Fortune 500 company or a nimble startup, this episode offers insights on navigating the AI revolution and maintaining your competitive edge.

    19min | Published on August 14, 2024

  • How UX Will Change With Integrated Advice

    Are you tired of hearing about agents? Chat-bot this, agent that. It's the only way to apply AI that anyone talks about. But are agents the only way to use LLM technology? Is everything just converging toward conversational interfaces? In this episode of The Prompt Desk, your show hosts discuss a different way to use cutting-edge AI in user-interface design: integrated advice. Using LLMs, we can provide AI-driven tips, tricks, hints, and feedback on the open-ended text boxes in our user interfaces. From job descriptions and product descriptions through to social media posts and comments, user interfaces are full of open-ended textbox inputs that are just dying to receive a 21st-century makeover. Download the episode to hear more!
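    The integrated-advice pattern might be wired up roughly as follows. The field label, instruction wording, and function name here are illustrative assumptions, not from the episode: instead of a chat loop, the LLM is prompted to return a few short tips on the contents of a single textbox.

```python
def build_advice_prompt(field_label: str, user_text: str) -> list[dict]:
    """Build a chat-completion message list that asks an LLM for inline
    feedback on one open-ended form field, rather than a conversation.

    The returned list can be sent to any chat-completions-style API;
    the instruction wording is illustrative.
    """
    system = (
        f"You are an assistant embedded in a web form. The user is filling in "
        f"the '{field_label}' field. Offer at most three short, concrete tips "
        f"to improve what they wrote. Do not rewrite the text for them."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

messages = build_advice_prompt("Job description", "Looking for a rockstar ninja dev.")
```

    The design choice is that the advice request is stateless and scoped to one field, so it can run on every edit without dragging in conversation history.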

    17min | Published on August 7, 2024

  • Prompting in Tool Results

    If you are using system prompts, chat completions, and tool completions with OpenAI, you might find it challenging to get the model to follow your prompts. If you are like the show hosts, you often find that there are certain instructions your bot simply refuses to listen to! In the latest episode of The Prompt Desk, show host Bradley Arsenault shares with Justin Macorin the latest technique he has been using to improve the reliability of behaviours the bot refuses to comply with. That technique is to add additional instructions in prompts mid-conversation using tool results. Instead of seeing a tool result as simply containing data, rethink your tool results as combining both data and a prompt on what the bot should do with that data. With this technique, your show hosts have improved the reliability of their bots significantly.
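    The tool-result technique can be sketched against the OpenAI chat-completions message format. The function name, tool-call id, and payload field names below are illustrative assumptions; the point is simply that the tool message carries a mid-conversation instruction alongside the data.

```python
import json

def tool_result_with_prompt(tool_call_id: str, data: dict, instruction: str) -> dict:
    """Build a tool-result message that bundles the returned data with an
    instruction telling the model what to do with it, in the OpenAI
    chat-completions message format.
    """
    return {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": json.dumps({
            "data": data,
            "instructions": instruction,  # the prompt rides along with the data
        }),
    }

msg = tool_result_with_prompt(
    "call_123",  # hypothetical id, normally taken from the model's tool call
    {"available_slots": ["Mon 10:00", "Tue 14:00"]},
    "Offer the user at most two of these slots and ask them to pick one.",
)
```

    Appending `msg` to the conversation history before the next completion places the instruction at the exact point where the model consumes the data, which is what makes it harder to ignore than the same text in the system prompt.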

    19min | Published on July 31, 2024
