The Prompt Desk

[Bonus] Using Embedding Vectors

22 min | 11/05/2024

Description

Note! This is an episode we found unreleased in our archive. It is one of the earliest episodes we ever recorded, but until now it has never been heard. It's not quite the same style as our current episodes, but we thought we'd release it anyway as a special treat for interested listeners!

This episode explores the realm of large language models, prompt engineering, and best practices, with an emphasis on word embeddings and their applications. The hosts, Bradley and Justin, reminisce about their initial encounters with word embeddings and how they transformed the field of natural language processing.

They then examine the advantages and potential drawbacks of word embeddings, highlighting their usefulness in various tasks such as retrieval, named entity recognition, and deduplication. The conversation also touches on practical tips and tricks for working with word embeddings, stressing the importance of vectorizing the right information and choosing the appropriate model for the task at hand.

Throughout the discussion, the hosts underscore the need for product involvement and a thorough understanding of the problem at hand when deciding how to best utilize word embeddings.
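
For listeners who want a concrete feel for the kind of workflow discussed above, here is a minimal sketch (not taken from the episode) of using text embeddings for retrieval and deduplication. It assumes the open-source sentence-transformers library, and the all-MiniLM-L6-v2 model and the 0.8 similarity threshold are purely illustrative choices; any embedding model with a similar encode interface would work.

```python
# Illustrative sketch only: embedding-based retrieval and near-duplicate detection.
# Assumes the sentence-transformers library; model name and threshold are just examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "How to reset your account password",
    "Steps for resetting a forgotten password",
    "Quarterly revenue report for 2023",
]

# Vectorize the right information: embed the text you actually want to compare.
doc_vectors = model.encode(documents, normalize_embeddings=True)

# Retrieval: embed the query and rank documents by cosine similarity.
query_vector = model.encode("I forgot my password", normalize_embeddings=True)
scores = util.cos_sim(query_vector, doc_vectors)[0]
best = scores.argmax().item()
print("Best match:", documents[best])

# Deduplication: pairs of documents above a similarity threshold are likely duplicates.
pair_scores = util.cos_sim(doc_vectors, doc_vectors)
for i in range(len(documents)):
    for j in range(i + 1, len(documents)):
        if pair_scores[i][j] > 0.8:  # threshold is a tunable assumption
            print(f"Possible duplicates: {documents[i]!r} ~ {documents[j]!r}")
```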


Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.

Check out PromptDesk.ai for an open-source prompt management tool.

Check out Brad’s AI Consultancy at bradleyarsenault.me

Add Justin Macorin and Bradley Arsenault on LinkedIn.


Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link


Hosted by Ausha. See ausha.co/privacy-policy for more information.
