AI Experience [in English]

AI Hallucinations: When the Machine Thinks It Knows

31 min | 04/05/2025

Description

What if the most confident answer your AI gives you… was completely made up?

In this episode, you’ll hear from Daniel Wilson, AI and Data Sovereignty Advisor to Malaysia’s National AI Office and founder of InfoScience AI. With years of hands-on experience in neuromorphic systems and secure AI architectures, Daniel helps you unpack a central paradox of generative models: why they hallucinate, why that might not be a bug, and how you can reduce the risk when using them. Together, you explore the mechanics of inference, the limits of context windows, and the subtle art of prompting.

Daniel also explains how AI hallucinations mirror aspects of human memory—and what it takes to design systems that truly know when they don’t know. If you’ve ever used AI in your work, this episode will give you the tools to better understand its blind spots—and maybe your own.


Hosted by Ausha. See ausha.co/privacy-policy for more information.

