Quantization vs Pruning vs Distillation: Optimizing NNs for Inference