How I Finally Understood Self-Attention (With PyTorch)
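As a starting point, here is a minimal sketch of single-head scaled dot-product self-attention written directly in PyTorch. The function and parameter names (`self_attention`, `w_q`, `w_k`, `w_v`) are illustrative choices for this post, not part of any library API, and the toy shapes are assumptions made just to keep the example runnable.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Minimal single-head self-attention over a batch of token embeddings.

    x:             (batch, seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_k) projection matrices (illustrative names)
    """
    q = x @ w_q  # queries (batch, seq_len, d_k)
    k = x @ w_k  # keys    (batch, seq_len, d_k)
    v = x @ w_v  # values  (batch, seq_len, d_k)

    d_k = q.size(-1)
    # Scaled dot-product scores: how strongly each token attends to every other token
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ v                             # (batch, seq_len, d_k)

# Toy usage: one "sentence" of 4 tokens with 8-dimensional embeddings
torch.manual_seed(0)
x = torch.randn(1, 4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([1, 4, 8])
```

The key design point the sketch tries to surface is that attention weights are computed from the data itself: every token's query is compared against every token's key, so the mixing pattern over values changes with the input rather than being a fixed set of learned weights.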