torchtune: Easy and Accessible Finetuning in Native PyTorch - Evan Smothers, Meta (24:21)