Decision Transformer: Reinforcement Learning via Sequence Modeling (Research Paper Explained) (59:33)
LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained) (36:37)
∞-former: Infinite Memory Transformer (aka Infty-Former / Infinity-Former, Research Paper Explained) (21:51)
[Emergency Recording] Has OpenAI declared "AGI achieved"?! A thorough breakdown of what can be read from Sam Altman's latest blog post "Reflections" (1:20:43)
Stanford CS25: V1 I Decision Transformer: Reinforcement Learning via Sequence Modeling (45:30)
Learning to summarize from human feedback (Paper Explained) (26:10)
Attention in transformers, visually explained | DL6 (1:00:19)
MIT 6.S191: Reinforcement Learning (24:44)