WEEK 1
History of Deep Learning, McCulloch-Pitts Neuron, Thresholding Logic, Perceptron Learning Algorithm and Convergence
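The perceptron learning rule covered this week fits in a few lines. The sketch below is illustrative only — the function name, the AND-function toy data, and the epoch limit are my choices, not part of the course material. On a mistake, the rule adds (or subtracts) the input to the weights; the convergence theorem guarantees this terminates on linearly separable data.

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Perceptron rule: on a mistake, w <- w + y * x. Bias is folded in
    as an extra always-1 feature. y must use labels in {-1, +1}."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi
                errors += 1
        if errors == 0:              # converged: all points correctly classified
            break
    return w

# Toy problem: the linearly separable AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = perceptron_train(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```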
WEEK 2
Multilayer Perceptrons (MLPs), Representation Power of MLPs, Sigmoid Neurons, Gradient Descent
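A single sigmoid neuron trained by gradient descent, as introduced this week, can be sketched as follows. The 1-D toy data and all constants (learning rate, step count) are made up for illustration; the gradients are those of the cross-entropy loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 1-D data: class 1 for x > 0
X = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    p = sigmoid(w * X + b)           # neuron output
    gw = np.mean((p - y) * X)        # d(cross-entropy)/dw, averaged over data
    gb = np.mean(p - y)              # d(cross-entropy)/db
    w -= lr * gw                     # vanilla gradient descent step
    b -= lr * gb

preds = (sigmoid(w * X + b) > 0.5).astype(int)
```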
WEEK 3
Feedforward Neural Networks, Representation Power of Feedforward Neural Networks, Backpropagation
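Backpropagation on a tiny one-hidden-layer network can be checked against finite differences, which is a standard way to verify a hand-derived gradient. The network sizes and random values below are arbitrary; the chain-rule steps mirror the backward pass taught this week.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Scalar-in, scalar-out network with a 3-unit hidden layer:
# y_hat = w2 . sigmoid(w1 * x + b1) + b2, squared-error loss
w1, b1 = rng.normal(size=3), rng.normal(size=3)
w2, b2 = rng.normal(size=3), rng.normal()
x, y = 0.7, 1.0

def loss(w1_):
    h = sigmoid(w1_ * x + b1)
    return 0.5 * (w2 @ h + b2 - y) ** 2

# Forward pass
h = sigmoid(w1 * x + b1)
y_hat = w2 @ h + b2
# Backward pass (chain rule, layer by layer)
d_yhat = y_hat - y                # dL/dy_hat
d_h = d_yhat * w2                 # dL/dh
d_w1 = d_h * h * (1 - h) * x      # dL/dw1, using sigmoid'(z) = h(1-h)

# Numerical gradient via central differences
eps = 1e-6
num = np.array([(loss(w1 + eps * np.eye(3)[i]) - loss(w1 - eps * np.eye(3)[i])) / (2 * eps)
                for i in range(3)])
```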
WEEK 4
Gradient Descent (GD), Momentum-Based GD, Nesterov Accelerated GD, Stochastic GD, AdaGrad, AdaDelta, RMSProp, Adam, AdaMax, NAdam, Learning Rate Schedulers
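Of the optimizers listed this week, Adam is representative: it keeps bias-corrected running estimates of the gradient's first and second moments. A minimal sketch, with a badly scaled quadratic as the (made-up) test function:

```python
import numpy as np

def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    """Minimal Adam: per-parameter step sizes from running gradient moments."""
    x = np.asarray(x0, dtype=float)
    m, v = np.zeros_like(x), np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first moment (momentum-like)
        v = beta2 * v + (1 - beta2) * g ** 2     # second moment (per-param scale)
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x0 - 3)^2 + 10 * x1^2, whose curvature differs 10x per axis
grad = lambda x: np.array([2 * (x[0] - 3), 20 * x[1]])
x_star = adam(grad, [0.0, 1.0])
```

The per-parameter scaling is what distinguishes the AdaGrad family (and Adam) from momentum methods, which share one step size across all coordinates.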
WEEK 5
Autoencoders and Relation to PCA, Regularization in Autoencoders, Denoising Autoencoders, Sparse Autoencoders, Contractive Autoencoders
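The PCA connection mentioned this week comes from the linear case: a linear autoencoder with a bottleneck learns the principal subspace. The sketch below trains a tied-weight linear autoencoder by gradient descent on synthetic data lying on a low-dimensional subspace; all sizes, the learning rate, and the data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# 8-D data that actually lives on a 4-D subspace
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 8))

# Tied-weight linear autoencoder: code = X W, reconstruction = (X W) W^T
W = 0.1 * rng.normal(size=(8, 4))
lr = 0.01
losses = []
for _ in range(300):
    H = X @ W                      # encode into the 4-D bottleneck
    X_hat = H @ W.T                # decode with the transposed (tied) weights
    R = X_hat - X                  # reconstruction residual
    losses.append(np.mean(R ** 2))
    # gradient of mean squared error w.r.t. W (W appears in encoder and decoder)
    G = (X.T @ R @ W + R.T @ X @ W) * (2 / X.size)
    W -= lr * G
```

Since the data spans only 4 dimensions, a 4-D bottleneck can in principle reconstruct it exactly; the reconstruction error should fall steadily during training.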
WEEK 6
Bias-Variance Tradeoff, L2 Regularization, Early Stopping, Dataset Augmentation, Parameter Sharing and Tying, Injecting Noise at Input, Ensemble Methods, Dropout
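Dropout, the last item this week, is commonly implemented in its "inverted" form: units are zeroed during training and the survivors are rescaled so the expected activation is unchanged, which makes the test-time pass a no-op. A minimal sketch (function name and shapes are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop, train=True):
    """Inverted dropout: zero each unit with probability p_drop and scale the
    rest by 1/(1 - p_drop); at test time, return the input unchanged."""
    if not train or p_drop == 0.0:
        return h
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask

h = np.ones(10000)
out = dropout(h, p_drop=0.5)       # surviving units become 2.0, mean stays ~1.0
```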
WEEK 7
Greedy Layer-Wise Pre-training, Better Activation Functions, Better Weight Initialization Methods, Batch Normalization
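The training-mode forward pass of batch normalization, covered this week, standardizes each feature over the mini-batch and then applies a learned scale and shift. A sketch with invented activation statistics:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch norm (training mode): per-feature standardization over the batch,
    followed by the learned affine transform gamma * x_hat + beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = 5.0 + 3.0 * rng.normal(size=(64, 10))   # badly centred, badly scaled activations
y = batch_norm(x)                            # each column now has mean ~0, std ~1
```

At test time the batch statistics are replaced by running averages collected during training, which the sketch omits.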
WEEK 8
Learning Vectorial Representations of Words, Convolutional Neural Networks, LeNet, AlexNet, ZF-Net, VGGNet, GoogLeNet, ResNet
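The core operation shared by all the CNN architectures listed this week is the 2-D convolution (implemented as cross-correlation in most frameworks). A naive sketch, applied to a made-up step image with a hand-written edge-detecting kernel:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D cross-correlation of a single-channel image."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Step image: dark on the left, bright from column 2 onward
image = np.zeros((5, 5))
image[:, 2:] = 1.0
kernel = np.array([[1.0, -1.0]])   # responds only where intensity changes
resp = conv2d(image, kernel)       # nonzero exactly at the vertical edge
```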
WEEK 9
Visualizing Convolutional Neural Networks, Guided Backpropagation, Deep Dream, Deep Art, Fooling Convolutional Neural Networks
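Several of this week's techniques (Deep Dream, fooling) share one mechanism: gradient ascent on the *input* rather than the weights. The sketch below applies it to a random linear classifier standing in for a trained CNN — the weights, sizes, and step size are entirely made up — and pushes the input toward a chosen target class.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Stand-in "network": a random linear classifier (5 classes, 20-D input)
W = rng.normal(size=(5, 20))
x = rng.normal(size=20)
target = 3                          # class we want the model to predict

for _ in range(100):
    p = softmax(W @ x)
    # gradient of log p[target] w.r.t. the input: W[target] - sum_k p[k] W[k]
    g = W[target] - p @ W
    x = x + 0.05 * g                # ascend the target-class log-probability

p_final = softmax(W @ x)            # the input now "fools" the classifier
```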
WEEK 10
Recurrent Neural Networks, Backpropagation Through Time (BPTT), Vanishing and Exploding Gradients, Truncated BPTT
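The vanishing-gradient problem in BPTT comes from the product of per-step Jacobians of the hidden state. The sketch below runs a vanilla tanh RNN forward over a sequence while accumulating that Jacobian product; with the (deliberately small, made-up) recurrent weights, its norm collapses over 20 steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Vanilla RNN cell: h_t = tanh(W_h h_{t-1} + W_x x_t)
d = 4
W_h = 0.1 * rng.normal(size=(d, d))   # small recurrent weights -> vanishing
W_x = rng.normal(size=(d, d))
xs = rng.normal(size=(20, d))         # a length-20 input sequence

h = np.zeros(d)
J = np.eye(d)                          # accumulated Jacobian d h_T / d h_0
for x_t in xs:
    h = np.tanh(W_h @ h + W_x @ x_t)
    # this step's Jacobian: diag(1 - h^2) @ W_h  (chain rule through tanh)
    J = (np.diag(1 - h ** 2) @ W_h) @ J

grad_norm = np.linalg.norm(J)          # how much a gradient at t=T reaches t=0
```

With large recurrent weights the same product explodes instead; truncated BPTT sidesteps both by cutting the product after a fixed number of steps.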
WEEK 11
Gated Recurrent Units (GRUs), Long Short-Term Memory (LSTM) Cells, Solving the Vanishing Gradient Problem with LSTMs
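One LSTM step can be written out directly from the gate equations; the additive cell-state update (f * c + i * g) is what lets gradients survive across many time steps, in contrast to the vanilla RNN. Sizes, weights, and the gate ordering in the stacked matrix below are my own conventions for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W maps [x; h] to the four stacked gate pre-activations
    (input, forget, output, candidate — ordering is this sketch's convention)."""
    z = W @ np.concatenate([x, h]) + b
    d = h.size
    i = sigmoid(z[0 * d:1 * d])    # input gate: how much new content to write
    f = sigmoid(z[1 * d:2 * d])    # forget gate: how much old state to keep
    o = sigmoid(z[2 * d:3 * d])    # output gate: how much state to expose
    g = np.tanh(z[3 * d:4 * d])    # candidate cell update
    c = f * c + i * g              # additive state update (gradient highway)
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
dx, dh = 3, 5
W = 0.1 * rng.normal(size=(4 * dh, dx + dh))
b = np.zeros(4 * dh)
h, c = np.zeros(dh), np.zeros(dh)
for _ in range(10):
    h, c = lstm_step(rng.normal(size=dx), h, c, W, b)
```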
WEEK 12
Encoder-Decoder Models, Attention Mechanism, Attention over Images, Hierarchical Attention, Transformers
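The attention mechanism behind Transformers reduces to scaled dot-product attention: each query forms a softmax distribution over the keys and returns the corresponding weighted average of the values. A minimal sketch with arbitrary shapes (2 queries, 5 key-value pairs, dimension 8):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # query-key similarities
    weights = softmax(scores, axis=-1)  # one distribution over keys per query
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))   # 2 queries
K = rng.normal(size=(5, 8))   # 5 keys
V = rng.normal(size=(5, 8))   # 5 values
out, w = attention(Q, K, V)
```

The sqrt(d_k) scaling keeps the dot products from saturating the softmax as the dimension grows; multi-head attention simply runs several such maps in parallel on learned projections.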