- TILOS Seminar: Transformers learn in-context by (functional) gradient descent (1:11:07)
- TILOS Seminar: Off-the-shelf Algorithmic Stability (27:14)
- Transformers (how LLMs work) explained visually | DL5 (1:09:58)
- MIT Introduction to Deep Learning | 6.S191 (37:17)
- Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention (29:54)
- Hearing of Yann LeCun, Professor at NYU and Chief AI Scientist at Meta (5:10)
- Two MIT Professors ACCIDENTALLY Discovered the SECRET OF LEARNING (50:16)
- Jacob Andreas | What Learning Algorithm is In-Context Learning? (15:26)