NEW Transformer2: Self Adaptive PEFT Expert LLMs in TTA (25:55)
Google's NEW TITANS: Transformer w/ RNN Memory (57:45)
Visualizing transformers and attention | Talk for TNG Big Tech Day '24 (29:27)
Dirichlet Energy Minimization Explains In-Context Learning (Harvard) (58:06)
Stanford Webinar - Large Language Models Get the Hype, but Compound Systems Are the Future of AI (13:48)
Transformers^2 - Self-Adaptive LLMs | SVD Fine-tuning | End of LoRA fine tuning? | (paper explained) (26:52)
Andrew Ng Explores The Rise Of AI Agents And Agentic Reasoning | BUILD 2024 Keynote (24:07)
AI can't cross this line and we don't know why. (27:02)