Yong Jae Lee | Next Steps in Generalist Multimodal Models (1:01:42)
Yuxiong Wang | Bridging Generative & Discriminative Learning in the Open World (58:18)
Hugo Laurençon | What Matters When Building Vision-Language Models? (56:26)
Stanford Seminar - Robot Learning in the Era of Large Pretrained Models (24:38)
Multimodal Embeddings: Introduction & Use Cases (with Python) (1:01:31)
MIT 6.S191: Recurrent Neural Networks, Transformers, and Attention (51:12)
Lukas Lange | SwitchPrompt: Learning Domain-Specific Gated Soft Prompts (53:35)
Yuandong Tian | Efficient Inference of LLMs with Long Context Support (59:55)