The Prompt Desk

Measuring Business Results of LLMs with Abi Aryan

43 min | 22/05/2024

Description

In this insightful episode, hosts Justin Macorin and Bradley Arsenault chat with AI expert Abi Aryan to dive deep into the challenges of measuring the business impact of large language model applications. Abi draws on her eight years of experience deploying and productizing LLMs to lay out a comprehensive framework for defining relevant metrics.

She emphasizes the importance of bridging the gap between engineering and business objectives, noting that 76% of machine learning projects fail because of this disconnect. Abi shares practical advice on evaluating LLM performance through a combination of language-skills assessment, task-specific metrics, human evaluation, and alignment with overarching business goals.

The conversation covers key topics such as measuring the efficacy of retrieval versus generation components, incorporating user feedback beyond simple thumbs-up/thumbs-down ratings, detecting and mitigating hallucinations, and tying metrics to concrete business KPIs such as sales funnels and revenue generation.

---

Please check out Abi on LinkedIn at https://www.linkedin.com/in/goabiaryan/
Visit her website here: https://abiaryan.com/
Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad’s AI Consultancy at bradleyarsenault.me
Add Justin Macorin and Bradley Arsenault on LinkedIn.

Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link


Hosted by Ausha. See ausha.co/privacy-policy for more information.
