📝 Notes
Short writings on research ideas, concepts, and reflections.
🧠 Representation Learning
Representation Learning: A Gentle Introduction
What does it mean for a model to "learn" a representation? This note explores the intuition behind representation learning.
📊 Log, Softmax & Likelihood
Why Log, Softmax & Likelihood — The Language of Every Loss Function
Every loss function in deep learning is secretly the same idea: maximizing likelihood. This note covers probability, likelihood, softmax, cross-entropy, KL divergence, and the elegant gradient that ties them together.
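The "elegant gradient" referred to here is presumably the standard softmax + cross-entropy identity, where the gradient of the loss with respect to the logits reduces to softmax(z) − y. A minimal numerical sketch (the example values are arbitrary, not from the note itself):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, y):
    # Negative log-probability of the correct class (y is one-hot)
    return -np.log(softmax(z)[np.argmax(y)])

z = np.array([2.0, 1.0, 0.1])   # example logits
y = np.array([0.0, 1.0, 0.0])   # one-hot target

# Analytic gradient of cross-entropy w.r.t. the logits: softmax(z) - y
analytic = softmax(z) - y

# Check against a central-difference numerical gradient
eps = 1e-6
numeric = np.array([
    (cross_entropy(z + eps * np.eye(3)[i], y)
     - cross_entropy(z - eps * np.eye(3)[i], y)) / (2 * eps)
    for i in range(3)
])

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```

The check confirms why the pairing is so convenient in practice: composing softmax with cross-entropy cancels the messy Jacobian of softmax, leaving a gradient that is just the difference between predicted and target distributions.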