GPU vs CPU: Running Small Language Models with Ollama & C# (9:20)
Run an AI Large Language Model (LLM) at home on your GPU (14:16)
Running LLaMA 3.1 on CPU: No GPU? No Problem! Exploring the 8B & 70B Models with llama.cpp (7:55)
SLM (Small Language Model) with your Data | Data Exposed (15:13)
Supercharge Your Chatbots with Real-Time Data: C# and Kernel Memory RAG Tutorial (6:23)
Local Models with Ollama & Microsoft Extensions – Step-by-Step RAG Guide! (13:53)
Building your first AI Agent with the Semantic Kernel SDK and C# 🤖 (31:34)
Llama 3.3: why and how to put this LLM into production, plus deploying uncensored models (4:59)