This is my study hub for the MLP master’s course. Each lecture gets its own page with explanations, key concepts, math, and examples I worked through to actually understand the material — not just memorize it.

I use the SlideLink tool I built to automatically align these notes with the lecture slides.


How to Use These Notes

  • Reading linearly works well — each lecture builds on the previous.
  • Each page has a mental model section (big picture first), then details.
  • Look for the Example blocks to build intuition.
  • Math is included where needed, but always paired with plain-English explanations.

Lectures

| Lecture | Topic | Core Idea | Example Model / Use Case |
| --- | --- | --- | --- |
| L01 | Introduction | What is ML? Loss, optimization, training loops | Logistic Regression (Iris dataset) |
| L02 | Convolutional Neural Networks | Spatial feature extraction with filters | Simple CNN (MNIST digit recognition) |
| L03 | Vision CNNs | AlexNet, VGG, ResNet, EfficientNet | ResNet-50 (ImageNet classification) |
| L04 | Recurrent Neural Networks | Sequences, LSTMs, GRUs, vanishing gradients | LSTM (Sentiment analysis / Stocks) |
| L05 | Transformers | Attention is all you need | BERT / GPT (Machine Translation) |
| L06 | Vision Transformer (ViT) | Patches + Transformers = vision | ViT-Base (Large-scale visual recognition) |
| L07 | Multimodal Learning | CLIP, image+text, cross-modal alignment | CLIP (Zero-shot image classification) |
| L08 | Interactive Machine Learning | Humans in the loop | Active Learning / GNN (Sudoku solver) |
| L09 | Generative AI & VAE | Latent spaces and variational inference | VAE (Face generation / Reconstruction) |
| L10 | GANs | Generator vs. Discriminator | StyleGAN (Synthetic high-res faces) |
| L11 | Reinforcement Learning | Rewards, policies, Q-learning | Q-Learning / PPO (Game playing / Atari) |
| L12 | Diffusion Models | Denoising as generation | Stable Diffusion (Text-to-image generation) |
| L13 | Explainable AI (XAI) | Why did the model decide that? | Grad-CAM / SHAP (Debugging model bias) |

Applied MLP: Case Studies

Practical applications of theory to real-world technical problems.

Case Study: The Sudoku GNN

Core Idea: Most neural networks work on Euclidean data (grids of pixels or sequences of text). Sudoku is better represented as a Graph, where the rules of the game define the edges (connections) between cells.
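
To make the graph view concrete, here is a minimal sketch (my own, not from the lecture code) that builds the 81-node Sudoku constraint graph: two cells are connected if and only if they share a row, a column, or a 3x3 box.

```python
# Build the Sudoku constraint graph: nodes are the 81 cells,
# edges connect any two cells that share a row, column, or 3x3 box.
def sudoku_edges():
    edges = set()
    for a in range(81):
        ra, ca = divmod(a, 9)
        for b in range(a + 1, 81):
            rb, cb = divmod(b, 9)
            same_row = ra == rb
            same_col = ca == cb
            same_box = (ra // 3, ca // 3) == (rb // 3, cb // 3)
            if same_row or same_col or same_box:
                edges.add((a, b))
    return edges

edges = sudoku_edges()
print(len(edges))  # → 810
```

Each cell ends up with exactly 20 peers (8 in its row, 8 in its column, 4 more in its box), giving 81 × 20 / 2 = 810 undirected edges. These edges are what a GNN's message passing runs over.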

Mental Model: Message Passing as Constraint Propagation

  • In traditional Sudoku solvers, you look at a cell and “propagate” the constraints from its row, column, and box to eliminate possibilities.
  • In a GNN, this is exactly what Message Passing does. Each node (cell) sends its current “state” (clues and predictions) to its neighbors. After a few rounds of updates, each node has “seen” enough of the board to make a classification.
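
The analogy above can be sketched as one hand-written propagation round: each empty cell collects the known digits of its 20 peers and removes them from its candidate set. (A toy sketch of the classical rule; a real GNN *learns* this update as a message/aggregation function instead of hard-coding it.)

```python
# One "message passing" round framed as constraint propagation.
def peers(i):
    """All cells sharing a row, column, or 3x3 box with cell i."""
    r, c = divmod(i, 9)
    return {
        j for j in range(81)
        if j != i and (
            divmod(j, 9)[0] == r
            or divmod(j, 9)[1] == c
            or (divmod(j, 9)[0] // 3, divmod(j, 9)[1] // 3) == (r // 3, c // 3)
        )
    }

def propagate(grid):
    """grid: list of 81 ints, 0 = empty. Returns per-cell candidate sets."""
    candidates = [
        {grid[i]} if grid[i] else set(range(1, 10)) for i in range(81)
    ]
    for i in range(81):
        if not grid[i]:
            # "Receive messages": subtract the clues held by all peers.
            candidates[i] -= {grid[j] for j in peers(i) if grid[j]}
    return candidates

grid = [0] * 81
for v in range(1, 9):
    grid[v] = v          # fill cells 1..8 of the top row with 1..8
print(propagate(grid)[0])  # → {9}
```

After k such rounds a node has indirectly "seen" cells up to k hops away; in this graph, one hop already touches every constraint.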

💡 Intuition: Why use a Graph instead of a CNN?

  • A CNN is limited by its receptive field: each layer only “sees” a small 3x3 or 5x5 neighborhood, so many stacked layers are needed before distant cells can influence each other.
  • A GNN with explicit edges for rows, columns, and boxes puts every constraint a single message-passing hop away. This “shortcut” for relational reasoning makes the learning problem significantly easier.
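
A quick back-of-the-envelope check of the receptive-field point (my own arithmetic, assuming stride-1 convolutions): each k×k layer grows the receptive-field *radius* by (k−1)/2, and a Sudoku cell can be up to 8 cells away from a peer in its own row.

```python
# Layers of stride-1 convolution needed before a cell can "see"
# a peer max_distance cells away. Each k x k layer adds (k-1)//2
# to the receptive-field radius.
def conv_layers_needed(max_distance: int, kernel: int = 3) -> int:
    radius_per_layer = (kernel - 1) // 2
    return -(-max_distance // radius_per_layer)  # ceiling division

print(conv_layers_needed(8))     # 3x3 kernels → 8
print(conv_layers_needed(8, 5))  # 5x5 kernels → 4
```

So a 3x3 CNN needs 8 layers before the farthest row constraint even enters a cell's receptive field, while the GNN covers it in one hop.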

Quick Reference: Key Concepts

| Concept | Where It Appears |
| --- | --- |
| Backpropagation | L01, L02 |
| Attention | L05, L06, L07 |
| Latent Space | L09, L10, L12 |
| Sequential Data | L04, L05 |
| Generative Models | L09, L10, L12 |
