Deep Learning with Keras & TensorFlow
Machine Learning Basics
0. Introduction to Machine Learning Recap (0:50)
1. Regression (5:59)
2. Regression LAB (7:06)
3. Logistic Regression (7:19)
4. Logit Function (3:56)
5. Building a Logistic Regression Line (7:02)
6. Multiple logistic regression (5:30)
7. Validation Metrics - Classification Matrix (4:29)
8. Sensitivity and Specificity (4:43)
9. Sensitivity vs Specificity (8:39)
10. Sensitivity Specificity LAB (6:13)
11. ROC and AUC (5:35)
12. ROC and AUC LAB (2:49)
13. The Training Error (5:42)
14. Overfitting and Underfitting (8:01)
15. Bias Variance Tradeoff (6:28)
16. Holdout Data Validation (1:58)
17. Holdout Data Validation LAB (6:10)
Session-1. Machine Learning Basics-Case Study
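Lessons 8–10 above work with sensitivity and specificity. As a quick illustration (not course lab code, and using made-up confusion-matrix counts), both follow directly from the four cells of a classification matrix:

```python
# Sensitivity and specificity from confusion-matrix counts.
# tp/fn/tn/fp values below are illustrative, not from the course labs.
def sensitivity(tp, fn):
    # True positive rate: fraction of actual positives correctly caught.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: fraction of actual negatives correctly rejected.
    return tn / (tn + fp)

tp, fn, tn, fp = 80, 20, 90, 10
print(sensitivity(tp, fn))  # 0.8
print(specificity(tn, fp))  # 0.9
```

Raising the classification threshold trades one off against the other, which is what the ROC curve in lessons 11–12 visualizes.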
Artificial Neural Networks
1. Introduction to ANN (1:46)
2. Logistic Regression Recap LAB (4:49)
3. Decision Boundary - Logistic Regression (4:51)
4. Decision Boundary - LAB (2:15)
5. New Representation for Logistic Regression (4:48)
6. Non-Linear Decision Boundary - Problem (3:15)
7. Non-Linear Decision Boundary - Solution (6:57)
8. Intermediate Output LAB (6:15)
9. Neural Network Intuition (7:35)
10. Neural Network Algorithm (6:47)
11. Demo Neural Network Algorithm (6:16)
12. Neural Network LAB (11:29)
13. Local Minima and Number of Hidden Layers (5:09)
14. Digit Recogniser Lab (13:50)
15. Conclusion (5:36)
Session-2. Introduction to ANN-Case Study
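Lessons 6–8 above show how stacking logistic units produces a non-linear decision boundary. A minimal sketch of that idea (hand-picked weights chosen for illustration, not learned as in the course labs) is a 2-2-1 network whose hidden units act like OR and NAND, giving an XOR-style boundary no single logistic regression can draw:

```python
import math

def sigmoid(z):
    # Logistic function: squashes any real z into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Hand-weighted 2-2-1 network; weights are illustrative, not trained.
def predict(x1, x2):
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)   # hidden unit approximating OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)  # hidden unit approximating NAND
    return sigmoid(20 * h1 + 20 * h2 - 30) # output approximating AND -> XOR

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(predict(x1, x2)))  # 0, 1, 1, 0
```

The hidden layer is the "new representation" of lesson 5: a space in which the classes become linearly separable.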
TensorFlow and Keras
3.1 Introduction to Deep Learning Frameworks (4:33)
3.2 Key Terms of TensorFlow (9:03)
3.3 Coding Basics in TensorFlow (7:23)
3.4 Model Building Intuition (1:03)
3.5 LAB Building Linear and Logistic Regression Models with TensorFlow (6:38)
3.6 LAB MNIST Model Using TensorFlow (3:34)
3.7 TensorFlow Shortcomings and Intro to Keras (2:56)
3.8 LAB MNIST Model Using Keras (2:38)
3.9 TensorFlow vs Keras and Conclusion (1:14)
Session-3. TensorFlow and Keras-Case Study
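Lesson 3.5 builds linear regression in TensorFlow; the underlying computation can be sketched framework-free in NumPy (illustrative synthetic data and hyperparameters, not the course lab's code):

```python
import numpy as np

# Gradient-descent linear regression in plain NumPy -- the same loop a
# TensorFlow/Keras optimizer runs under the hood.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.05, size=100)  # true w=3, b=2 (made up)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # close to 3.0 and 2.0
```

What frameworks add on top of this loop is automatic differentiation: TensorFlow derives `grad_w` and `grad_b` from the loss, and Keras hides the loop itself behind `fit()`.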
ANN Hyperparameters
4.1 Introduction to Hyperparameters (3:45)
4.2 LAB_Calculating Number of Parameters (3:24)
4.3 Regularization (7:39)
4.4 LAB_Overfitting of a Regression Model (3:40)
4.5 LAB_Regularization in Regression (7:07)
4.6 Regularization in Neural Networks (7:03)
4.7 Demo_Regularization in Neural Networks (3:04)
4.8 Dropout Regularization (8:24)
4.9 LAB_Dropout Regularization (3:45)
4.10 Weight sharing in Dropout (2:11)
4.11 Early stopping (5:33)
4.12 LAB_Early Stopping Notebook (4:19)
4.13 Activation Function (4:36)
4.14 Demo_Activation Function (3:16)
4.15 Problem of Vanishing Gradients (3:55)
4.16 ReLU Activation Function (2:34)
4.17 Activation Function for Last Layer (3:50)
4.18 Learning Rate (8:03)
4.19 Demo_Learning Rate (4:21)
4.20 Momentum (5:30)
4.21 LAB_Learning Rate and Momentum (4:07)
4.22 Gradient Descent Batches (3:47)
4.23 LAB_Gradient Descent vs Mini Batch (4:26)
4.24 Hyperparameter Conclusion (0:53)
Session-4. ANN Hyperparameters-Case Study
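Lesson 4.2 counts the trainable parameters of a fully connected network. The rule it applies can be sketched as follows (layer sizes below are illustrative, not the lab's):

```python
# Parameters of a fully connected network: each dense layer contributes
# (inputs * units) weights plus one bias per unit.
def dense_param_count(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# Example: an MNIST-sized 784 -> 64 -> 32 -> 10 network.
print(dense_param_count([784, 64, 32, 10]))  # 52650
```

Parameter count is itself a hyperparameter concern: the more parameters relative to training data, the more the regularization techniques of lessons 4.3–4.12 matter.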
Convolutional Neural Networks
5.1 Introduction to CNN - fundamentals of Image data (5:35)
5.2 LAB ANN on Image data - MNIST (8:07)
5.3 Counting parameters of ANN for Image Data (5:18)
5.4 LAB Parameter count in ANN on large Images (4:41)
5.5 Issue with ANN on Image Data (4:30)
5.6 Preserving Spatial Integrity of Images in Neural Network (3:00)
5.7 How filters work (5:13)
5.8 Kernel Matrix and Convolution Layers (3:56)
5.9 Convolved Features (2:09)
5.10 LAB Convolution Layer (1:59)
5.11 Handling edges of Image in convolution (2:12)
5.12 Depth of Convolutions (1:06)
5.13 Number of Weights in Convolution Layers (1:27)
5.14 Pooling Convolution Layers (4:26)
5.15 LAB Pooling (1:12)
5.16 The CNN Architecture (3:18)
5.17 LAB CNN on MNIST (5:02)
5.18 Conclusion (1:32)
Session-5. Convolutional Neural Networks_CNN-Case Study
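Lessons 5.11 and 5.13 cover edge handling and weight counts in convolution layers. Both reduce to short formulas, sketched here with illustrative MNIST-like sizes (not the lab's exact settings):

```python
# Weights in a convolution layer: each of the `filters` kernels spans
# kernel * kernel * in_channels weights, plus one bias per filter.
def conv_param_count(kernel, in_channels, filters):
    return (kernel * kernel * in_channels + 1) * filters

# Output spatial size for a 'valid' (no padding) convolution, stride 1:
# the kernel stops (kernel - 1) pixels short of the far edge.
def valid_output_size(input_size, kernel):
    return input_size - kernel + 1

# Example: 32 filters of 3x3 over a 28x28x1 grayscale image.
print(conv_param_count(3, 1, 32))  # 320
print(valid_output_size(28, 3))    # 26
```

Note the contrast with lesson 5.3: the convolution layer's weight count is independent of image size, which is exactly why CNNs scale to large images where dense ANNs do not.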
Recurrent Neural Network (RNN)
6.1 Introduction to RNN (3:09)
6.2 Sequential Models (7:02)
6.3 Sequential ANNs (9:04)
6.4 LAB Sequential ANNs (17:45)
6.5 RNNs: The Programmed Sequential Models (6:20)
6.6 BackPropagation in RNNs (3:30)
6.7 Number of Parameters in RNN Models (7:21)
6.8 BPTT Details (2:57)
6.9 LAB RNN Model Building (7:47)
6.10 Issues with RNNs (2:38)
6.11 RNN Conclusion (1:16)
Session-6. Recurrent Neural Networks_RNN-Case study
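Lesson 6.7 counts parameters in RNN models. For a simple RNN cell the count follows from its three weight groups, sketched here with illustrative sizes (not the lab's):

```python
# Parameters in a simple RNN cell: the new hidden state depends on the
# input (units * input_dim weights), the previous hidden state
# (units * units recurrent weights), and one bias per unit.
def simple_rnn_param_count(input_dim, units):
    return units * input_dim + units * units + units

# Example: 10-dimensional inputs, 32 hidden units.
print(simple_rnn_param_count(10, 32))  # 1376
```

The `units * units` recurrent term is what gets multiplied into the gradient at every timestep during BPTT (lessons 6.6 and 6.8), which is where the vanishing-gradient issues of the next session originate.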
Long Short Term Memory (LSTM)
7.1 Introduction to LSTM (1:46)
7.2 LSTM What is Vanishing Gradient (4:53)
7.3 Mathematics of Vanishing Gradients (14:51)
7.4 LAB_Vanishing Gradients (7:11)
7.5 RNN Other Issues LSTM main idea (7:34)
7.6 LSTM Gates (9:00)
7.7 LSTM Different Representations (9:06)
7.8 LAB LSTM (6:35)
7.9 LSTM Conclusion (0:26)
Session-7. LSTM-Case Study
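Lessons 7.5–7.7 introduce the LSTM's gates. A useful consequence for parameter counting (illustrative sizes below, not the lab's) is that an LSTM cell simply repeats the simple-RNN weight pattern four times:

```python
# Parameters in an LSTM cell: the forget, input, and output gates plus
# the candidate cell state each use (units * input_dim) input weights,
# (units * units) recurrent weights, and `units` biases.
def lstm_param_count(input_dim, units):
    per_gate = units * input_dim + units * units + units
    return 4 * per_gate

# Example: 10-dimensional inputs, 32 hidden units.
print(lstm_param_count(10, 32))  # 5504
```

The fourfold cost buys the gated cell state that lets gradients flow across long sequences, addressing the vanishing-gradient problem analyzed in lessons 7.2–7.4.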