Let's Talk AI

#73 - LLMs, RAG, Graph Neural Networks and Open Source with Maxime Labonne

56 min | 11/07/2024

Description

πŸŽ™οΈ Who is Maxime Labonne?

Maxime Labonne is a Staff Machine Learning Scientist at Liquid AI. Maxime received his PhD from the Polytechnic Institute of Paris and has been working with machine learning since 2019. He has applied his expertise in various contexts, including R&D, industry, finance, and academia. He is also an AI/ML Google Developer Expert. Maxime is widely known for creating popular LLMs on Hugging Face, like AlphaMonarch-7B, Beyonder-4x7B, Phixtral, and NeuralBeagle14. He has also released LLM tools, such as LLM AutoEval, LazyMergekit, LazyAxolotl, and AutoGGUF. Maxime is the author of the technical book "Hands-On Graph Neural Networks Using Python," published by Packt. He has written technical articles on his blog and Towards Data Science, and created the popular LLM course on GitHub, which has over 27,000 stars.


💡 In this episode...

... we discuss Maxime's expertise in large language models (LLMs) and graph neural networks, as well as his journey from cybersecurity to machine learning. We also cover common misconceptions about LLMs, evaluation and optimization techniques, and creating augmented datasets for fine-tuning. Maxime shares his perspective on graph neural networks and non-transformer architectures, and offers advice on leading language model research and mentoring beginners in the field. Finally, we discuss Maxime's passion for open source and knowledge sharing.


Follow Let's Talk AI:

βœ‰οΈ Newsletter πŸ‘‰ http://eepurl.com/ijZ8qz

πŸŽ™οΈ Podcast πŸ‘‰ http://smartlink.ausha.co/let-s-talk-ai/

πŸ“Ή Youtube πŸ‘‰ https://www.youtube.com/@lets-talk-ai

πŸ“· Instagram πŸ‘‰ https://www.instagram.com/lets_talk_ai/

🎞️ TikTok πŸ‘‰ https://www.tiktok.com/@letstalkai/

🌐 Website πŸ‘‰ https://lets-talk-ai.com/


Hosted by Ausha. See ausha.co/privacy-policy for more information.
