Zhiyuan Li: Chain Of Thought Empowers Transformers To Solve Inherently Serial Problems
48:38
Yash Sarrof: The Expressive Capacity of State Space Models: A Formal Language Perspective
45:18
Satwik Bhattamishra: Simplicity Bias in Transformers & their Ability to Learn Sparse Boolean Functions
33:06
HTML Tags You Should Know | Learn Frontend - #4
57:45
Visualizing transformers and attention | Talk for TNG Big Tech Day '24
51:24
Alessandro Ronca: On the Expressivity of Recurrent Neural Cascades
50:44
VSAONLINE. FS24. Mike Heddes
45:43
Will Merrill: The Illusion of State in State-Space Models
1:05:49