- Don't do naive RAG: do hybrid search instead (Pinecone, Weaviate, or pgvector + full-text search & rerank) (51:29)
  ![](https://i.ytimg.com/vi/RkWor1BZOn0/mqdefault.jpg)
- 5 tiers of long-term memory and personalization for LLM applications (in-person workshop) (2:12:00)
  ![](https://i.ytimg.com/vi/ibzlEQmgPPY/mqdefault.jpg)
- The missing pieces to your AI app (pgvector + RAG in prod) (32:49)
  ![](https://i.ytimg.com/vi/9jOMacFnbuI/mqdefault.jpg)
- Set up your first LLM observability traces with LangSmith and iterate on prompts with Quotient AI (8:14)
  ![](https://i.ytimg.com/vi/3FbJOKhLv9M/mqdefault.jpg)
- Why vector search is not enough and we need BM25 (34:59)
  ![](https://i.ytimg.com/vi/OGsQNLxMPEQ/mqdefault.jpg)
- Event Sourcing with René Pardon (20:22)
  ![](https://i.ytimg.com/vi/QxHE4af5BQE/mqdefault.jpg)
- How to scrape the web for LLMs in 2024: Jina AI (Reader API), Mendable (Firecrawl), and ScrapeGraphAI (42:35)
  ![](https://i.ytimg.com/vi/CK0ExcCWDP4/mqdefault.jpg)
- Hybrid Search RAG with LangChain and Pinecone Vector DB (32:42)
  ![](https://i.ytimg.com/vi/MbLJvqG5hNE/mqdefault.jpg)
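Several of the videos above cover hybrid search (combining BM25 keyword retrieval with vector search). A common way to merge the two result lists is Reciprocal Rank Fusion (RRF); the sketch below is illustrative only — the document names, rankings, and the `rrf_fuse` helper are hypothetical, not taken from any of the talks:

```python
# Minimal sketch of hybrid-search result merging via Reciprocal Rank
# Fusion (RRF). Each input is a ranked list of doc IDs, e.g. one from
# BM25 (keyword) and one from a vector index; RRF rewards documents
# that rank highly in several lists without needing comparable scores.

def rrf_fuse(rankings, k=60):
    """Fuse multiple ranked lists of doc IDs into one fused ordering."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Each list contributes 1 / (k + rank); k=60 is the usual default.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results: one list from BM25, one from vector search.
bm25_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc5", "doc3"]

fused = rrf_fuse([bm25_hits, vector_hits])
print(fused)  # "doc1" wins: it appears near the top of both lists
```

In practice the two ranked lists would come from a full-text index (e.g. Postgres `tsvector` or Elasticsearch BM25) and a vector store (e.g. Pinecone or pgvector), with an optional reranker applied to the fused top-k.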