In this course, you will learn the fundamental principles of deep learning by building your own neural network components from scratch. You'll start with the history of neural networks, then build your own library piece by piece: linear layers, weight initializers, dropout, other regularizers, and batch norm. Along the way, you'll learn the math and theory underpinning each of these components, integrating material from statistics, numerical analysis, and Bayesian methods to give you the most well-rounded perspective on deep learning available in any course. View the entire course syllabus below, along with preview lessons. Be sure to click the drop-down arrow to see the syllabus in its entirety.
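To give a flavor of what "from scratch" means here, below is a minimal sketch of two of the components the syllabus covers: a `Module` base class and a `Linear` layer with Glorot uniform initialization. This is an illustrative NumPy sketch, not the course's actual code; all names and signatures are assumptions.

```python
import numpy as np


class Module:
    """Base class: every layer implements forward(); __call__ delegates to it."""

    def forward(self, x):
        raise NotImplementedError

    def __call__(self, x):
        return self.forward(x)


class Linear(Module):
    """A fully connected layer computing y = xW + b."""

    def __init__(self, in_features, out_features, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        # Glorot (Xavier) uniform initialization:
        # W ~ U(-a, a) with a = sqrt(6 / (fan_in + fan_out)),
        # chosen to keep activation variance roughly constant across layers.
        a = np.sqrt(6.0 / (in_features + out_features))
        self.W = rng.uniform(-a, a, size=(in_features, out_features))
        self.b = np.zeros(out_features)

    def forward(self, x):
        return x @ self.W + self.b


# Usage: apply one linear layer to a batch of 4 three-dimensional inputs.
layer = Linear(3, 2)
print(layer(np.ones((4, 3))).shape)  # (4, 2)
```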
Course Curriculum
- The History of Neural Networks
- The Module Base Class (19:54)
- Practicalities: Using Logging
- Parameter Initialization Strategies - Xavier Initialization (19:13)
- Parameter Initialization Strategies - Glorot Uniform Initialization (6:10)
- Parameter Initialization Strategies - The Implementation (11:24)
- The Linear Layer (11:41)
- Regression and Classification (10:35)
- Activation Functions (17:35)
- Statistical Estimators, Underfitting, and Overfitting (23:28)
- Regularization for Better Generalization (25:48)
- Regularization - Label Smoothing (29:16)
- Regularization - The Elastic Net (8:20)
- Implementing Regularization (7:04)
- A Drop of Dropout - Part 1 (4:04)
- A Drop of Dropout - Part 2 (22:50)
- Implementing Dropout (8:38)
- A Batch of Batch Norm (18:08)
- Implementing Batch Norm (6:28)