This repository contains my reading notes and attempts at implementations of the topics covered in the excellent book "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
The goal of this repository is to provide a mix of hands-on Python examples, notes, external links, and practical projects as a study companion to the theoretical topics covered in the book.
I generally take notes in Jupyter notebooks, using numpy for the hands-on examples and LaTeX where it helps.
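As a flavor of the kind of numpy snippet the notebooks contain, here is a minimal, purely illustrative sketch (not taken from any particular notebook) that numerically checks the eigendecomposition identity from the linear algebra chapter:

```python
import numpy as np

# Build a small random symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2

# For a real symmetric matrix, A = Q diag(lambda) Q^T.
eigvals, Q = np.linalg.eigh(A)
A_reconstructed = Q @ np.diag(eigvals) @ Q.T

# The reconstruction matches the original up to floating-point error.
print(np.allclose(A, A_reconstructed))  # True
```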
Note that GitHub's rendering of LaTeX in the embedded Jupyter notebooks introduces formatting errors and missing glyphs. The notebooks should render properly if you clone the repository and open them locally in Jupyter. I run the notebooks in Chrome on macOS.
Chapter | Name | Link | Status |
---|---|---|---|
01 | Introduction | ||
02 | Linear Algebra | Chapter 02 | In Progress |
03 | Probability and Information Theory | Chapter 03 | In Progress |
04 | Numerical Computation | ||
05 | Machine Learning Basics | ||
06 | Deep Feedforward Networks | ||
07 | Regularization for Deep Learning | ||
08 | Optimization for Training Deep Models | ||
09 | Convolutional Networks | ||
10 | Sequence Modeling: Recurrent and Recursive Nets | ||
11 | Practical Methodology | ||
12 | Applications | ||
13 | Linear Factor Models | ||
14 | Autoencoders | ||
15 | Representation Learning | ||
16 | Structured Probabilistic Models for Deep Learning | ||
17 | Monte Carlo Methods | ||
18 | Confronting the Partition Function | ||
19 | Approximate Inference | ||
20 | Deep Generative Models | ||