Better not Bigger: Distilling LLMs into Specialized Models
56:43
How to Evaluate LLM Performance for Domain-Specific Use Cases
41:54
Edge Devices and LLMs: What Does the Future Hold for AI?
57:22
MedAI #88: Distilling Step-by-Step! Outperforming LLMs with Smaller Model Sizes | Cheng-Yu Hsieh
25:21
Model Distillation: Same LLM Power but 3240x Smaller
49:07
[Webinar] LLMs for Evaluating LLMs
31:51
MAMBA from Scratch: Neural Nets Better and Faster than Transformers
28:18
Fine-tuning Large Language Models (LLMs) | w/ Example Code
19:46