- Logistic Regression as a Neural Network
- Cost Function
- Gradient Descent
- Computational Graph
- Logistic Regression Gradient Descent
- Examples of Gradient Descent
- Vectorization
- Vectorized Implementation of Logistic Regression
- Computation of Vectorized Logistic Regression's Gradient
- Broadcasting in Python
- Numpy Vectors
- Justification of Logistic Regression Cost Function
- Neural Networks Overview
- Vectorized Implementation of a Neural Network
- Activation Functions
- Derivatives of Activation Functions
- Gradient Descent for Neural Networks
- Random Initialization
- Deep L-layer neural network
- Forward Propagation in a Deep Network
- Getting matrix dimensions right
- Why deep networks?
- Building blocks of deep neural networks
- Forward and Backward Propagation
- Parameters vs Hyperparameters
- What does this have to do with the brain?
- Normalizing Inputs
- Vanishing/Exploding Gradients
- Weight Initialization for Deep Networks
- Numerical approximation of gradients
- Gradient Checking
- Mini-batch Gradient Descent
- Exponentially weighted averages
- Bias correction in exponentially weighted averages
- Gradient descent with momentum
- Optimization Algorithms: RMSprop
- Optimization Algorithms: Adam
- Learning rate decay
- Local optima
- Normalizing activations in a network
- Fitting Batch Norm into a neural network
- Why does Batch Norm work?
- Batch Norm at test time
- Single number evaluation metric
- Satisficing and Optimizing metric
- Train/dev/test distributions
- Size of dev/test data
- When to change dev/test sets and metrics
- Why human-level performance?
- Avoidable bias
- Understanding human-level performance
- Surpassing human-level performance
- Improving your model performance
- Training and testing on different distributions
- Bias and Variance with mismatched data distributions
- Addressing data mismatch
- Computer Vision
- Edge Detection
- Padding
- Strided Convolution
- Convolutions Over Volume
- One Layer Convolutional Network
- Simple Convolutional Network
- Pooling Layer
- CNN Example
- Why Are Convolutions Useful?
- LeNet-5, AlexNet, VGG-16
- ResNets
- Networks in Networks and 1x1 Convolutions
- GoogLeNet: Inception Network
- Object Localization
- Landmark Detection
- Object Detection
- Convolutional Implementation of Sliding Windows
- YOLO algorithm
- IoU
- Non-max suppression
- Anchor Boxes
- R-CNN
- Notations
- Recurrent Neural Network Model
- Language model and sequence generation
- Sampling novel sequences
- GRU
- LSTM
- Bidirectional RNN
- Word Representation
- Learning word embeddings
- Neural Network Language Model
- Word2Vec
- GloVe
- Debiasing word embeddings
- Basic Model
- Picking the most likely sentence
- Beam Search Algorithm
- Error Analysis in Beam Search
- Attention Model
- Speech Recognition
- GAN for Anomaly Detection
- AnoGAN
- EGBAD
- [Word2vec]
- [FastText]
- [ELMo]
- seq2seq
- Transformer
- Universal Language Model Fine-tuning for Text Classification
- [Transformer-XL]
- [Universal Sentence Encoder]
- [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding]
- [GPT-2]
- [MT-DNN]
- [XLNet]
- Using Callbacks to control training
- Simple convnet
- MNIST classification example
- Convnet with TF
- Image augmentation
- Sentiment Analysis
- Embeddings
- Sentiment Analysis - Embeddings
- IMDB Subwords 8K with Single Layer LSTM
- IMDB Subwords 8K with Multi Layer LSTM
- IMDB Subwords 8K with 1D Convolutional Layer
- Sarcasm with Bidirectional LSTM
- Sarcasm with 1D Convolutional Layer
- Generating Text
- Examples
- Common patterns in time series
- Train, validation, and test
- Metrics for evaluating performance
- Moving average and differencing
- Trailing versus centered windows
- Statistical Forecasting
- Time series Forecasting Notebook
- Preparing features and labels
- Sequence bias
- Feeding windowed dataset into neural networks
- Single layer neural network
- Machine learning on time windows
- Prediction
- More on single layer neural network
- Deep neural network training, tuning and prediction
- RNN for time series
- Building an image recognition algorithm using logistic regression with a neural network mindset
- Building a 2-layer neural network for a binary classification problem
- Building neural network utilities
- Building a 2-layer neural network and a deep neural network from scratch
- Initialization
- initialize_parameters_zeros
- initialize_parameters_random
- initialize_parameters_he
- Regularization
- compute_cost_with_regularization
- backward_propagation_with_regularization
- forward_propagation_with_dropout
- backward_propagation_with_dropout
- Gradient Checking
- gradient_check
- gradient_check_n
- Optimization
- update_parameters_with_gd
- random_mini_batches
- initialize_velocity
- update_parameters_with_momentum
- initialize_adam
- update_parameters_with_adam
- TensorFlow: Building a neural network for a multiclass classification problem
- Building Convnet with numpy
- Building Convnet with TensorFlow
- Building ResNets with Keras
- AEDECOD and AETERM similarity using the Transformer-based Universal Sentence Encoder
- Object detection using pre-trained YOLO weights
- Neural Style Transfer
- Face Recognition
- Character-level language model - LSTM
- Generate Text - LSTM
- NLP tasks