
EL2810 Machine Learning Theory 7,5 hp

Course memo Spring 2023-60810

Version 1 – 01/12/2023, 2:00:46 PM

Course offering

Spring 2023-1 (Start date 17/01/2023, English)

Language Of Instruction

English

Offered By

EECS/Intelligent Systems

Course memo Spring 2023

Course presentation

This course introduces the basic concepts and mathematical tools that constitute the foundations of the theory of Machine Learning (ML). In particular, the course covers theoretical aspects of learning theory (PAC learnability, VC theory) and the main ML subfields, including supervised learning (linear classification and regression, SVM, and deep learning), unsupervised learning (clustering), and active learning.

Headings denoted with an asterisk (*) are retrieved from the course syllabus version Spring 2021

Content and learning outcomes

Course contents

Subject 1. Introduction
Main types of learning: supervised learning, unsupervised learning and reinforcement learning, and their mathematical formalisation (input and label spaces, hypothesis classes, loss function).
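In symbols, this formalisation is typically written as follows (standard notation; the course's own lecture notes are the reference):

    % Standard formalisation of a supervised learning problem:
    % an input space \mathcal{X}, a label space \mathcal{Y},
    % a hypothesis class \mathcal{H} of functions h : \mathcal{X} \to \mathcal{Y},
    % and a loss function measuring the error of a prediction:
    \[
      \ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{\ge 0} .
    \]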
Subject 2. PAC framework and empirical risk minimization.
The concept of Probably Approximately Correct (PAC) learnability. Oracle inequalities and the bias-variance trade-off. The empirical risk minimization principle. Overfitting and the No-Free-Lunch Theorem. Uniform convergence.
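For reference, the objects behind PAC learnability and ERM can be sketched in standard notation (the precise definitions are given in the lectures):

    % For an i.i.d. sample S = ((x_1, y_1), ..., (x_m, y_m)) from a
    % distribution \mathcal{D}, the true risk and empirical risk of h are
    \[
      L_{\mathcal{D}}(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}} \big[ \ell(h(x), y) \big],
      \qquad
      L_S(h) = \frac{1}{m} \sum_{i=1}^{m} \ell(h(x_i), y_i),
    \]
    % and the ERM rule returns a minimiser of the empirical risk:
    \[
      h_S \in \operatorname*{arg\,min}_{h \in \mathcal{H}} L_S(h).
    \]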
Subject 3. Concentration inequalities.
Markov, Chebyshev and Chernoff bounds. Sub-gaussian random variables. Hoeffding's lemma and inequality. The bounded-difference (McDiarmid) inequality.
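As a quick reference, the named bounds read as follows in their standard forms (precise statements and proofs are given in the lectures):

    % Markov (for X \ge 0 and a > 0) and Chebyshev:
    \[
      \Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a},
      \qquad
      \Pr\big[ |X - \mathbb{E}[X]| \ge a \big] \le \frac{\operatorname{Var}(X)}{a^2}.
    \]
    % Hoeffding's inequality, for independent X_i with X_i \in [a_i, b_i]:
    \[
      \Pr\Big[ \Big| \frac{1}{m} \sum_{i=1}^{m} \big( X_i - \mathbb{E}[X_i] \big) \Big| \ge t \Big]
      \le 2 \exp\Big( - \frac{2 m^2 t^2}{\sum_{i=1}^{m} (b_i - a_i)^2} \Big).
    \]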
Subject 4. Vapnik-Chervonenkis (VC) Theory
PAC learnability for finite hypothesis classes. Shattering and the VC dimension. The Sauer-Shelah lemma. Rademacher complexity. The Fundamental Theorem of PAC learning.
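For orientation, the Sauer-Shelah lemma in its standard form bounds the growth function of a class of VC dimension d:

    % Growth-function bound for a class \mathcal{H} with VC dimension d:
    \[
      \tau_{\mathcal{H}}(m) \le \sum_{i=0}^{d} \binom{m}{i}
      \le \Big( \frac{e m}{d} \Big)^{d} \quad \text{for } m \ge d,
    \]
    % i.e. polynomial rather than exponential in m, which is what makes
    % the uniform-convergence bounds of VC theory possible.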
Subject 5. Linear classification and regression
Linear predictors. Linear classification. The perceptron algorithm. Application of VC theory to multilayer neural networks. Logistic and linear regression.
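To make the perceptron concrete, here is a minimal NumPy sketch of the classical update rule (illustrative only; the lab material on Canvas is the reference):

    import numpy as np

    def perceptron(X, y, max_epochs=100):
        """Classical perceptron for labels y in {-1, +1}.

        X: (m, d) array of feature vectors; y: (m,) array of labels.
        If the data are linearly separable through the origin, the
        perceptron mistake bound guarantees convergence.
        """
        w = np.zeros(X.shape[1])
        for _ in range(max_epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if yi * (xi @ w) <= 0:   # misclassified (or on the boundary)
                    w += yi * xi         # perceptron update
                    mistakes += 1
            if mistakes == 0:            # full pass with no mistakes: done
                break
        return w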
Subject 6. Regularisation, stability and optimisation
Regularized risk minimization. Algorithmic stability and its application to generalization bounds for regularized risk minimization. Algorithms for convex learning: gradient descent, subgradient descent and stochastic gradient descent.
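A minimal sketch of stochastic gradient descent with iterate averaging, the scheme behind the standard guarantees for convex learning (the decaying step size used here is one common choice, not necessarily the one analysed in the course):

    import numpy as np

    def sgd(stoch_grad, w0, steps=1000, eta=0.1):
        """Stochastic gradient descent with averaging.

        stoch_grad(w) returns an unbiased stochastic (sub)gradient at w.
        Returns the average of the iterates, the standard output in
        convergence guarantees for convex objectives.
        """
        w = np.array(w0, dtype=float)
        avg = np.zeros_like(w)
        for t in range(1, steps + 1):
            w = w - (eta / np.sqrt(t)) * stoch_grad(w)  # decaying step size
            avg += w
        return avg / steps

    # Example: noisy gradients of f(w) = ||w||^2 / 2, whose minimiser is 0.
    rng = np.random.default_rng(0)
    w_hat = sgd(lambda w: w + 0.1 * rng.normal(size=w.shape), np.ones(5))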
Subject 7. Support vector machines and kernel methods
Introduction to SVM with hard and soft margins. Performance bounds for hard- and soft-margin SVM. Learning algorithms for SVM. Kernel methods: linear separability using embeddings, the kernel trick and the representer theorem; admissible kernels.
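For orientation, soft-margin SVM can be written as regularized risk minimization over the hinge loss (one standard formulation):

    % Soft-margin SVM as regularised hinge-loss minimisation:
    \[
      \min_{w,\, b} \;\; \lambda \| w \|^2
      + \frac{1}{m} \sum_{i=1}^{m} \max\big( 0,\; 1 - y_i ( \langle w, x_i \rangle + b ) \big).
    \]
    % The kernel trick replaces every inner product \langle x, x' \rangle by an
    % admissible kernel K(x, x'); by the representer theorem this suffices to
    % run the algorithm implicitly in the induced feature space.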
Subject 8. Deep neural networks
Neural networks and representation theorems. Training neural networks using backpropagation. Dropout as a regularization technique. Recent results on the loss surface and local minima of neural networks. Recent theoretical developments justifying deep learning.
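As a didactic sketch of what backpropagation computes, here is a hand-written forward and backward pass for a two-layer ReLU network with squared loss (not the lab code):

    import numpy as np

    def forward_backward(x, y, W1, W2):
        """One forward/backward pass of a two-layer ReLU network.

        x: (d,) input; y: (o,) target; W1: (h, d); W2: (o, h).
        Returns the squared loss and the gradients dL/dW1, dL/dW2,
        computed layer by layer via the chain rule (backpropagation).
        """
        z = W1 @ x                    # hidden pre-activations, shape (h,)
        a = np.maximum(z, 0.0)        # ReLU activations
        y_hat = W2 @ a                # network output, shape (o,)
        loss = 0.5 * np.sum((y_hat - y) ** 2)
        g_out = y_hat - y             # dL/dy_hat
        g_W2 = np.outer(g_out, a)     # dL/dW2
        g_a = W2.T @ g_out            # backpropagate through W2
        g_z = g_a * (z > 0)           # backpropagate through the ReLU
        g_W1 = np.outer(g_z, x)       # dL/dW1
        return loss, g_W1, g_W2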
Subject 9. Clustering. Cluster validation and algorithms.
Performance metrics for clusters. State-of-the-art clustering algorithms. Cluster evaluation. K-means and its performance guarantees. The EM algorithm and its performance for Gaussian mixtures. Spectral clustering, random matrix theory and concentration.
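A minimal sketch of Lloyd's algorithm for k-means, the iterative scheme whose guarantees are discussed in this subject (illustrative only):

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Lloyd's algorithm: alternate assignment and centroid updates.

        Monotonically decreases the k-means cost, but in general only
        converges to a local optimum; guarantees depend on the
        initialisation (here: k points sampled without replacement).
        """
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Assignment step: each point goes to its nearest centroid.
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update step: each centroid becomes the mean of its cluster.
            new_centers = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                for j in range(k)
            ])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers, labels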
Subject 10. Active learning, online optimization and sequential decision making
Introduction to bandit problems and reinforcement learning. The exploration-exploitation trade-off. Fundamental limits via change-of-measure arguments. Examples of algorithms and their guarantees. Best-policy identification vs. regret minimization.
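To illustrate the exploration-exploitation trade-off on a concrete algorithm, here is a sketch of the classical UCB1 index policy on a Bernoulli bandit (one standard example, not necessarily the algorithms treated in the lectures):

    import numpy as np

    def ucb1(means, horizon, seed=0):
        """UCB1 on a Bernoulli bandit with (hidden) arm means.

        Each arm's index is its empirical mean plus a confidence bonus
        that shrinks as the arm is pulled, so under-explored arms keep
        getting tried. Returns the cumulative expected regret.
        """
        rng = np.random.default_rng(seed)
        k = len(means)
        counts = np.zeros(k)
        rewards = np.zeros(k)
        regret = 0.0
        for t in range(1, horizon + 1):
            if t <= k:
                arm = t - 1                                   # pull each arm once
            else:
                index = rewards / counts + np.sqrt(2 * np.log(t) / counts)
                arm = int(index.argmax())
            rewards[arm] += rng.random() < means[arm]         # Bernoulli reward
            counts[arm] += 1
            regret += max(means) - means[arm]
        return regret

    # Example: cumulative expected regret over 10,000 rounds on three arms.
    print(ucb1([0.3, 0.5, 0.6], 10_000))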

Intended learning outcomes

After passing the course, the student shall be able to

  • derive and apply the basic theoretical tools that are used in modern machine learning
  • describe known performance guarantees for important machine learning algorithms.

Learning activities

- 2 Homework assignments.

- 2 Computer labs.

- 1 Final exam.

Detailed plan

Learning activity       Content                                            Preparations
Homework assignment 1   Introduction -> VC dimensions                      See the Canvas page of the course.
Homework assignment 2   Regularization -> active learning                  See the Canvas page of the course.
Computer lab 1          Introduction -> linear classification/regression   See the Canvas page of the course.
Computer lab 2          Regularization -> clustering                       See the Canvas page of the course.
Exam                    All topics                                         See the Canvas page of the course.

 

Note

This course memo describes the contents and examinations of the course only at a high level. Please refer to the course website and to the first lecture for further information.

Preparations before course start

Recommended prerequisites

The following courses are recommended, but not required, before taking this course:

  • Multivariable analysis, e.g., SF1626 or equivalent
  • Probability theory and statistics, e.g., SF1924 or equivalent
  • Basic numerical methods, e.g., SF1516 or equivalent

Software

Python 3 + Jupyter will be used for the computer labs and assignments.

Examination and completion

Grading scale

A, B, C, D, E, FX, F

Examination

  • HEM1 - Homework, 1.0 credits, Grading scale: P, F
  • HEM2 - Homework, 1.0 credits, Grading scale: P, F
  • LAB1 - Laboratory assignment, 1.0 credits, Grading scale: P, F
  • LAB2 - Laboratory assignment, 1.0 credits, Grading scale: P, F
  • TEN1 - Written exam, 3.5 credits, Grading scale: A, B, C, D, E, FX, F

Based on a recommendation from KTH's coordinator for disabilities, the examiner will decide how to adapt an examination for students with a documented disability.

The examiner may apply another examination format when re-examining individual students.

Ethical approach

  • All members of a group are responsible for the group's work.
  • In any assessment, every student shall honestly disclose any help received and sources used.
  • In an oral assessment, every student shall be able to present and answer questions about the entire assignment and solution.

