When calibration goes awry: hallucination in language models (1:02:01)
Out-of-Distribution Generalization as Reasoning: Are LLMs Competitive? (56:20)
How do neural networks learn simple functions? (33:17)
Clayton M. Christensen – The Innovator's Dilemma | Books in Bytes Podcast (1:28:14)
Adam Tauman Kalai | When calibration goes awry: Hallucination in language models (58:06)
Stanford Webinar - Large Language Models Get the Hype, but Compound Systems Are the Future of AI (57:30)
Strong generalization from small brains and no training data (59:18)
"AI Safety Through Interpretable and Controllable Language Models" - Peter Hase, YRSS (1:02:06)