Ido Nachum - A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs (59:22)
Itay Evron - Continual Learning in Linear Regression and Classification (1:03:01)
Dan Vilenchik - Towards Reverse Algorithmic Engineering of Neural Networks (23:25)
Things I Do and Do Not Understand About the Aharonov–Bohm Effect (18:45)
SampleNet: Learning a Differentiable Point Cloud Sampling Network (15:13)
2024's Biggest Breakthroughs in Math (31:50)
Instability Is All You Need: The Surprising Dynamics of Learning in Deep Models (31:51)
MAMBA from Scratch: Neural Nets Better and Faster than Transformers (22:59)