[REFAI Seminar 03/30/23] Efficient Trillion Parameter Scale Training and Inference with DeepSpeed
1:05:21
[REFAI Seminar 04/20/23] Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
21:18
Turing-NLG, DeepSpeed and the ZeRO optimizer
1:03:42
Full Fine-tuning with Fewer GPUs - GaLore, Optimizer Tricks, Adafactor
36:23
Large Model Training and Inference with DeepSpeed // Samyam Rajbhandari // LLMs in Prod Conference
1:04:09
[REFAI Seminar 11/26/24] Efficient Programming on Heterogeneous Accelerators
52:28
Paper Club with Peter - ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
1:05:10
ZeRO & Fastest BERT: Increasing the scale and speed of deep learning training in DeepSpeed
1:00:35