DD2420 Probabilistic Graphical Models 7.5 credits

Probabilistiska grafiska modeller

This is a relatively advanced course with a flexible set of activities that allow students to choose between current research applications and theoretical topics to explore. Probabilistic graphical models are a foundation for understanding many methods of artificial intelligence, machine learning, and estimation.

In AI, decisions are made by computers that improve with learning. To do that, programs must perform inference of estimated probabilities given some evidence. That inference can be intractable. The methods learned in this course will allow the student to formulate the AI problem and perform both exact and approximate inference.
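For a concrete sense of what this means, below is a minimal Python sketch (the two-variable model, its names, and its probabilities are invented for illustration) that computes a posterior by brute-force enumeration; the normalizing sum runs over every joint state, which is exactly what becomes intractable as models grow:

    # Toy Bayes net Rain -> WetGrass; all names and numbers are illustrative.
    p_rain = {True: 0.2, False: 0.8}
    p_wet_given_rain = {
        True:  {True: 0.9, False: 0.1},   # P(WetGrass | Rain=True)
        False: {True: 0.1, False: 0.9},   # P(WetGrass | Rain=False)
    }

    def joint(rain, wet):
        """Joint P(Rain, WetGrass) from the factorization P(R) * P(W | R)."""
        return p_rain[rain] * p_wet_given_rain[rain][wet]

    # Exact inference by enumeration: P(Rain=True | WetGrass=True).
    # The normalizer sums over all states of the unobserved variables,
    # which grows exponentially with the number of variables.
    posterior = joint(True, True) / sum(joint(r, True) for r in (True, False))
    print(posterior)  # 0.18 / 0.26, roughly 0.692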

Machine learning provides algorithms for solving problems by using training data. This course will give insight into how to formulate problems so that machine learning can be used effectively. While end-to-end learning using generic machine learning methods is a successful approach, data is often limited. Building good models can help learn with less data by constraining the learning space.

Bayesian models are at the heart of most estimation methods. Formulation of these models is the first step in developing an estimation algorithm. The estimation itself is in many cases just inference on the model given some evidence. Approximate inference techniques such as those covered in this course are important in solving many very hard estimation problems in science and engineering.
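Concretely, inference given evidence is an application of Bayes' rule: for a quantity of interest X and observed evidence E,

    P(X \mid E) = \frac{P(E \mid X)\, P(X)}{\sum_{x} P(E \mid x)\, P(x)}

and it is the normalizing sum (or integral) over all states x in the denominator that is intractable in large models, which is what motivates approximate inference techniques.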

  • Education cycle

    Second cycle
  • Main field of study

    Computer Science and Engineering
  • Grading scale

    A, B, C, D, E, FX, F

Course offerings

Spring 19 PGM19 for programme students

Spring 20 PGM20 for programme students

Information for research students about course offerings

There is a PhD-level course given by John Folkesson as a module in Topics in Robotics. More information: johnf@kth.se

Intended learning outcomes

The student shall upon passing the course be able to explain and reason about:

1. how the various graphs represent both factorization and independence relations;

2. exact inference on graphical models, including using message passing algorithms to the extent of being able to perform all the steps of the algorithms;

3. approximate inference such as sampling, loopy belief propagation and variational methods;

4. methods for learning model parameters.

Students earning higher grades will have a deeper and/or broader understanding of the four goals above and be able to use some of the methods in goals 3 and 4 in real engineering tasks.

Course main content

The main content of the course is:

  • Graph Representations: Discriminative vs. Generative Models, Bayes Nets (DAGs), Undirected Models (MRFs/Factor Graphs), Exponential Family (features), D-Separation, Markov Blanket.
  • Exact Inference: Message Passing, Variable Elimination, Factor Graphs from DAGs, Sum-Product Algorithm, Clique Graphs/Trees, Inference with Evidence, Junction Tree Algorithm.
  • Approximate Inference: Loopy Belief Propagation, Monte Carlo Principle, Direct Sampling, Importance Sampling, Evidence, Rejection Sampling, MCMC, Gibbs Sampling, Collapsed Importance Sampling, Variational Methods (Projections), MAP Inference.
  • Learning: Parameter Estimation, Maximum Likelihood Estimation, Sufficient Statistics, Bayesian Parameter Estimation, Conjugate Prior, Gaussian/Beta/Dirichlet Distributions, Partially Observed Data, Gradient Ascent, Expectation Maximization, Gaussian Mixture Learning (a minimal EM sketch follows this list).
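To make the last item concrete, here is a minimal, illustrative Python sketch of expectation maximization for a two-component one-dimensional Gaussian mixture; the synthetic data, initial values, and component count are assumptions made up for this example, not a prescribed implementation:

    import numpy as np

    # Synthetic data from two made-up Gaussian components.
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

    pi = np.array([0.5, 0.5])    # mixing weights (initial guess)
    mu = np.array([-1.0, 1.0])   # component means (initial guess)
    var = np.array([1.0, 1.0])   # component variances (initial guess)

    def gauss(x, m, v):
        """Gaussian density N(x; m, v) with variance v."""
        return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

    for _ in range(50):
        # E-step: responsibilities r[n, k] = P(component k | data point n).
        r = pi * gauss(data[:, None], mu, var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from expected sufficient statistics.
        nk = r.sum(axis=0)
        pi = nk / len(data)
        mu = (r * data[:, None]).sum(axis=0) / nk
        var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk

    print(pi, mu, var)  # should roughly recover the generating parameters

The E-step computes posterior component memberships, and the M-step re-estimates the parameters from the resulting expected sufficient statistics, tying together the Sufficient Statistics, Partially Observed Data, and Expectation Maximization items above.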

Disposition

The course topics will be covered in a series of lectures which, along with reading, will give the student a basic understanding.

There will be mandatory tutorial exercises covering these topics to help solidify the student's understanding.

There is also a written exam.

For grades higher than E, students will be able to choose between alternatives for going into depth on particular methods, such as by doing additional tutorials, assignments or projects. Some of these will require programming skills, such as in Python and MATLAB.

Eligibility

This course cannot be counted in the degree if the student has taken DD2447.

Recommended prerequisites

SF1604 Linear Algebra, SF1625 One-Variable Calculus, SF1901 Probability and Statistics.

Programming in MATLAB and Python.

Literature

Probabilistic Graphical Models: Principles and Techniques, Daphne Koller & Nir Friedman, MIT Press

ISBN 978-0-262-01319

Examination

  • PRO1 - Tutorials 1, 2.5 credits, grading scale: P, F
  • PRO2 - Tutorials 2, 2.5 credits, grading scale: P, F
  • TEN1 - Examination, 2.5 credits, grading scale: P, F

The EECS code of honor applies to this course: https://www.kth.se/en/eecs/utbildning/hederskodex/inledning-1.17237

Requirements for final grade

Passing the mandatory tutorials and the written exam.

Offered by

EECS/Intelligent Systems

Contact

John Folkesson johnf@kth.se

Examiner

John Folkesson <johnf@kth.se>

Supplementary information

This course cannot be counted in the degree if the student has taken DD2447.

Add-on studies

DD2431 Machine Learning

DD2434 Machine Learning, advanced course

DD2424 Deep learning in data science

DD2432 Artificial Neural Networks and Other Learning Systems 

DD2423 Image Analysis and Computer Vision

DD2425 Robotics and Autonomous Systems

DD2429 Computational Photography

EL2320 Applied Estimation

Version

Course syllabus valid from: Spring 2019.
Examination information valid from: Spring 2019.