LoRA Land: How We Trained 25 Fine-Tuned Mistral-7b Models that Outperform GPT-4 (57:15)
How to Reduce Your OpenAI Spend by up to 90% with Small Language Models (1:01:18)
5 Reasons Why Adapters are the Future of Fine-tuning LLMs (17:07)
LoRA explained (and a bit about precision and quantization) (27:14)
Transformers (how LLMs work) explained visually | DL5 (18:23)
Finetune Deepseek R1 LLM with LoRA on Your Own Data - Step-by-Step Guide LLM fine-tuning (1:03:39)
Efficiently Build Custom LLMs on Your Data with Open-source Ludwig (33:26)
🚀 Future of Generative AI: Experts from NVIDIA, Meta, Hugging Face share their insights. (14:32)