Lectures

Here is a provisional list of the course's lecture titles and content. The order in which I present the material may change, as may the content. I will make the slides for each lecture available (probably just) before I give the lecture.

Lecture 1

Title:  The Deep Learning Revolution

Topics covered: Review of the impact deep networks have had in the application fields of speech, computer vision and NLP. Review of the course's syllabus, assignments, project, written exam and assessment.

Slides

Lecture 2

Title:  Learning Linear Binary & Linear Multi-class Classifiers from Labelled Training Data

 (mini-batch gradient descent optimization applied to "Loss + Regularization" cost functions)

Topics covered: Binary SVM classifiers as an unconstrained optimization problem, supervised learning = minimizing loss + regularization, gradient descent, SGD, mini-batch optimization, multi-class classification with one-layer networks, different loss functions.

Slides: Lecture2.pdf, Lecture2_2by2.pdf

Suggested Reading Material: Sections 5.1.4, 5.2, 5.2.2, 5.7.2 from "Deep Learning" by Goodfellow, Bengio and Courville. Link to Chapter 5 of Deep Learning

Sections 8.1.3 and 8.3.1 from the book give a more detailed description and analysis of mini-batch gradient descent and SGD than given in the lecture notes. Link to Chapter 8 of Deep Learning.

The suggested readings from Chapter 5 should be familiar to those who have already taken courses in ML. Lecture 2 should be more-or-less self-contained, but the reading material should flesh out some of the concepts referred to in the lecture.
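To make the "loss + regularization" recipe concrete, below is a minimal NumPy sketch of mini-batch gradient descent for a one-layer multi-class classifier trained with a cross-entropy loss and L2 regularization. The function name, the hyper-parameter defaults and the data layout (a d x N data matrix, K x N one-hot labels) are illustrative assumptions, not the course's assignment specification.

```python
import numpy as np

def softmax(S):
    # subtract the column-wise max for numerical stability
    E = np.exp(S - S.max(axis=0, keepdims=True))
    return E / E.sum(axis=0, keepdims=True)

def mini_batch_gd(X, Y, lam=0.01, eta=0.01, n_batch=100, n_epochs=10):
    """X: d x N data matrix, Y: K x N one-hot labels. Returns learned W, b."""
    d, N = X.shape
    K = Y.shape[0]
    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((K, d))
    b = np.zeros((K, 1))
    for epoch in range(n_epochs):
        perm = rng.permutation(N)          # reshuffle the data each epoch
        for j in range(N // n_batch):
            idx = perm[j * n_batch:(j + 1) * n_batch]
            Xb, Yb = X[:, idx], Y[:, idx]
            P = softmax(W @ Xb + b)        # K x n_batch class probabilities
            G = P - Yb                     # gradient of cross-entropy w.r.t. the scores
            grad_W = G @ Xb.T / n_batch + 2 * lam * W   # loss term + L2 term
            grad_b = G.sum(axis=1, keepdims=True) / n_batch
            W -= eta * grad_W              # the mini-batch gradient descent update
            b -= eta * grad_b
    return W, b
```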

Lecture 3

Title:  Back Propagation

Topics covered:  Chain rule of differentiation, computational graphs, back propagation (more detail than you probably ever expected!)

Slides: Lecture3.pdf, Lecture3_2by2.pdf

Suggested Reading Material:

Section 6.5 from the deep learning book.

I'm going to go into very explicit detail about the back-propagation algorithm. It was not my original intention to have such an involved description, but condensing the explanation made things less clear. My hope, though, is that everybody will have a good understanding of the theory and the mechanics of the algorithm after this lecture. I go into more specific (though less generic) detail than the deep learning book. So my recommendation is that you read my lecture notes to get a good understanding of the concrete example(s) I explain, and then read the deep learning book for a broader description. Section 6.5 also assumes you know about networks with more than 1 layer! So it may be better to hold off reading it until after Lecture 4 (where I will talk about n-layer networks, activation functions, etc.).
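As a warm-up, here is a tiny worked example of back propagation on a computational graph: a forward pass through sigmoid(w*x + b) followed by a squared-error loss, then a backward pass that applies the chain rule one node at a time. The numbers and variable names are made up purely for illustration.

```python
import numpy as np

# Forward pass through a tiny computational graph: f = sigmoid(w*x + b),
# loss L = (f - y)^2. The backward pass applies the chain rule node by node.
x, w, b, y = 2.0, -0.5, 0.3, 1.0

# forward pass
s = w * x + b               # linear node
f = 1 / (1 + np.exp(-s))    # sigmoid node
L = (f - y) ** 2            # squared-error node

# backward pass: each line is one application of the chain rule
dL_df = 2 * (f - y)
df_ds = f * (1 - f)         # derivative of the sigmoid in terms of its output
dL_ds = dL_df * df_ds
dL_dw = dL_ds * x           # s = w*x + b  =>  ds/dw = x
dL_db = dL_ds * 1.0         #               =>  ds/db = 1
print(dL_dw, dL_db)
```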

Lecture 4

Title:  k-layer Neural Networks

Topics covered:  k-layer neural networks, activation functions, backprop for k-layer neural networks, the vanishing gradient problem, batch normalization, backprop with batch normalization

Slides: Lecture4.pdf, Lecture4_2by2.pdf

Suggested Reading Material:

Section 8.7.1 from the deep learning book has a more subtle description of the benefits of batch normalization and why it works.
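Below is a minimal sketch of the training-time forward pass of a 2-layer network with batch normalization, to fix ideas. It covers training time only: the running averages of mean and variance needed at test time, and the backward pass, are omitted. The names and shapes (scores stored as m x n_batch matrices) are my own assumptions for the illustration.

```python
import numpy as np

def bn_forward(S, gamma, beta, eps=1e-8):
    """Batch-normalize the scores S (m x n_batch) over the batch dimension,
    then scale and shift - the training-time batch norm transform."""
    mu = S.mean(axis=1, keepdims=True)
    var = S.var(axis=1, keepdims=True)
    S_hat = (S - mu) / np.sqrt(var + eps)
    return gamma * S_hat + beta

def forward_2layer(X, W1, b1, W2, b2, gamma, beta):
    # layer 1: linear -> batch norm -> ReLU
    S1 = W1 @ X + b1
    H = np.maximum(0, bn_forward(S1, gamma, beta))
    # layer 2: linear scores (fed to softmax + loss as in Lecture 2)
    return W2 @ H + b2
```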

Lecture 5

Title:  Training & Regularization of Neural Networks

Topics covered:  The art/science of training neural networks, hyper-parameter optimization, variations of SGD, regularization via dropout, evaluation of the models - ensembles

Slides: Lecture5.pdf (to view the embedded videos you must use Adobe Reader), Lecture5_2by2.pdf (does not include the videos)

Suggested Reading Material:

Sections 8.3.1, 8.3.2, 8.3.3 and 8.5 from the deep learning book cover variations of SGD in detail.
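Two of the lecture's topics in sketch form: inverted dropout and SGD with momentum, a common variation of plain SGD. The parameter names and defaults (p_keep, rho) are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(0)

def dropout_forward(H, p_keep=0.8, train=True):
    """Inverted dropout: at training time zero each activation with probability
    1 - p_keep and rescale by 1/p_keep, so the test-time pass needs no change."""
    if not train:
        return H
    mask = (rng.random(H.shape) < p_keep) / p_keep
    return H * mask

def momentum_step(w, grad, v, eta=0.01, rho=0.9):
    """One SGD-with-momentum update: v accumulates an exponentially decaying
    average of past gradients, smoothing the descent direction."""
    v = rho * v - eta * grad
    return w + v, v
```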

Lecture 6

Title:  The Convolutional Layer in Convolutional Networks

Topics covered: Details of the convolution layer in Convolutional Networks, Gradient computations for the convolutional layers

Slides: Lecture6.pdf, Lecture6_2by2.pdf

Slides from PDC: IntroductionToThePDCEnvironmentDD2424.pdf

Suggested Reading Material:

Sections 9.1 and 9.2 (motivate the benefit of convolutional layers vs fully connected layers), and Section 9.10 (if you are interested in the neuroscientific basis for ConvNets).
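For concreteness, a naive loop-based sketch of the convolution operation (implemented, as is standard in ConvNets, as a cross-correlation) on a single-channel input. Real layers vectorize this and handle multiple channels and padding; those details are omitted here.

```python
import numpy as np

def conv2d(X, F, stride=1):
    """'Valid' convolution (as a cross-correlation, the usual ConvNet
    convention) of a single-channel image X (H x W) with a filter F (f x f)."""
    H, W = X.shape
    f = F.shape[0]
    H_out = (H - f) // stride + 1
    W_out = (W - f) // stride + 1
    out = np.zeros((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            patch = X[i*stride:i*stride+f, j*stride:j*stride+f]
            out[i, j] = np.sum(patch * F)   # dot product of filter and local patch
    return out
```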

Lecture 7

Title:  More on Convolutional Networks

Topics covered: Common operations in ConvNets: more on the convolution operator, max-pooling, review of the modern top-performing deep ConvNets - AlexNet, VGGNet, GoogLeNet, ResNet

Slides: Lecture7.pdf, Lecture7_2by2.pdf

Suggested Reading Material:

Section 9.3 discusses the pooling operation.
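A matching sketch of the max-pooling operation discussed in Section 9.3, again for a single channel and without padding; the window size and stride defaults are illustrative.

```python
import numpy as np

def max_pool(X, size=2, stride=2):
    """Max-pooling of a single-channel feature map X (H x W): each output
    entry is the maximum over a size x size window of the input."""
    H, W = X.shape
    H_out = (H - size) // stride + 1
    W_out = (W - size) // stride + 1
    out = np.zeros((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            out[i, j] = X[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out
```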

Lecture 8

Title:  Visualizing, Training & Designing ConvNets

Topics covered:  What does a deep ConvNet learn? We review how researchers have attempted to answer this elusive question. Part two of the lecture will review some practicalities of training deep neural networks - data augmentation, transfer learning and stacking convolutional filters.

Slides: Lecture8.pdf, Lecture8_2by2.pdf
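A small sketch of the data-augmentation idea from part two of the lecture: random horizontal flips and random crops applied on the fly, so each epoch sees a slightly different version of every training image. The crop size and image layout (H x W x C) are assumptions for the illustration.

```python
import numpy as np
rng = np.random.default_rng(0)

def augment(img, crop=28):
    """Random horizontal flip + random crop of an H x W x C image."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]              # flip left-right
    H, W = img.shape[:2]
    i = rng.integers(0, H - crop + 1)      # assumes H, W >= crop
    j = rng.integers(0, W - crop + 1)
    return img[i:i+crop, j:j+crop, :]
```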

Lecture 9

Title:  Networks for Sequential Data: RNNs & LSTMs

Topics covered:  RNNs, back-prop for RNNs, RNNs for synthesis problems, RNNs applied to translation problems, the problem of exploding and vanishing gradients in RNNs, LSTMs

Slides: Lecture9.pdf, Lecture9_2by2.pdf
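A minimal sketch of the forward pass of a vanilla RNN over a sequence, to set up the back-prop-through-time discussion. The parameter names (W, U, V, b, c) and shapes are my own choices for the illustration, not a prescribed interface.

```python
import numpy as np

def rnn_forward(X, h0, W, U, V, b, c):
    """Forward pass of a vanilla RNN over a sequence X (a list of d x 1 inputs):
    h_t = tanh(W h_{t-1} + U x_t + b),  o_t = V h_t + c  (output scores per step)."""
    h, hs, os = h0, [], []
    for x in X:
        h = np.tanh(W @ h + U @ x + b)   # new hidden state from old state + input
        hs.append(h)
        os.append(V @ h + c)             # per-step output scores
    return hs, os
```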

Lecture 10

Title:  Generative Adversarial Networks (guest lecture)

Topics covered:  The hot and very exciting area of generative learning via generative adversarial networks.

Slides:

Lecture 11

Title:  Incorporating Explicit Memory Mechanisms in Deep Networks 

Topics covered:  Incorporating attention mechanisms into neural networks, Q&A systems

Slides:
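A minimal sketch of a basic dot-product attention read, the core of the attention mechanisms covered here: score each memory slot against a query, softmax the scores, and return the weighted sum of the slots. The memory layout (n_slots x d) is an assumption for the illustration.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_read(query, memory):
    """Dot-product attention: score each memory slot (rows of the
    n_slots x d matrix) against the d-dimensional query, turn the scores
    into weights with a softmax, and return the weighted sum of the slots."""
    scores = memory @ query      # (n_slots,) similarity scores
    alpha = softmax(scores)      # attention weights, sum to 1
    return memory.T @ alpha      # d-dimensional weighted read
```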