Paper Reading & Discussion: Finding Skill Neurons in Pre-trained Transformer-based Language Models
46:38
Paper Reading & Discussion: Quantifying Memorization Across Neural Language Models
37:55
Paper Reading & Discussion: Knowledge Neurons in Pretrained Transformers
6:58
The C4 Model for Visualising Software Architecture
36:14
Paper Reading & Discussion: LoRAMoE: Alleviate World Knowledge Forgetting in LLMs via MoE-Style Plugin
23:58
Application of UV-Vis Spectroscopy
38:46
Trying to understand PiPPy and Pipeline Parallelism API in PyTorch
27:03
The 10 Biggest Myths About Our Economy
17:34