DD2601 Deep Generative Models and Synthesis 7.5 credits

Information per course offering
Information for programme students, Autumn 2025 (start 25 Aug 2025)
- Course location: KTH Campus
- Duration: 25 Aug 2025 – 24 Oct 2025
- Periods: P1 (7.5 credits)
- Pace of study: 50%
- Application code: 50363
- Form of study: Normal Daytime
- Language of instruction: English
- Course memo: not published
- Number of places: max. 50
- Target group: open to all master's programmes, as long as the course can be included in the programme
- Schedule: not published
Course syllabus DD2601 (Autumn 2025–)

Content and learning outcomes

Course contents
- Relevant concepts from probability theory and estimation
- Introduction to synthesis problems and generative models
- Principles of synthesis versus classification
- Regression versus probabilistic modelling
- Modelling goals and evaluation
- Mixture density networks (MDNs)
- Autoregression and large language models (LLMs)
- Normalising flows
- Variational autoencoders (VAEs)
- Diffusion models and flow matching
- Generative adversarial networks (GANs)
- Subjective evaluation
- Hybrid approaches
- Recent developments
- Ethical aspects of generative AI

Intended learning outcomes
After passing the course, the students should be able to:
- characterise synthesis problems, deep generative methods, and their applications
- distinguish different objectives, performance measures, and common problems with generative modelling
- describe the relation between deep generative models and regression-based methods
- train and tune deep generative models on different datasets
- evaluate generative models objectively and subjectively
- discuss ethical aspects of particular relevance to generative AI
in order to
- be able to judiciously use deep generative modelling to solve problems in industry and/or academia.
Literature and preparations
Specific prerequisites
Knowledge in deep learning, 6 credits, equivalent to completed course DD2424/DD2437.
Active participation in DD2424/DD2437, for which the final examination has not yet been reported in Ladok, is considered equivalent to completion of the course.
Knowledge and skills in programming, 6 credits, equivalent to completed course DD1337/DD1310-DD1319/DD1321/DD1331/DD100N/ID1018.
Knowledge in multivariable analysis, 7.5 credits, equivalent to completed course SF1626.
Knowledge in probability theory and statistics, 6 credits, equivalent to completed course SF1910-SF1925/SF1935.
Recommended prerequisites
- Good programming skills (equiv. to DD1337/DD1310–1319/DD1331/DD1332/ID1018) including Python, PyTorch, Jupyter Notebooks.
- Probability theory (equiv. to SF1900–SF1935) including probability, conditional probability, Bayes’ law, independence, expectation, random variables, probability mass and density functions, samples, random sampling, mean, variance, standard deviation, median, correlation, covariance, uniform distributions, multivariate Gaussian distributions and their properties, conditional expectation, parameter estimation, maximum-likelihood estimation, biased estimators, consistency, change of variables, Jensen’s inequality, least-squares regression.
- Algebra and geometry (equiv. to SF1624) including vectors, matrices, systems of linear equations, inner and outer products, norms, triangle inequality, metric spaces, determinants, eigenvalues, linear dependence, subspaces, trace of a matrix.
- Single-variable calculus (equiv. to SF1625) including functions, domains, ranges, monotonicity, exponential functions and logarithms, limits, l'Hôpital's rule, sequences, change of variables, convex functions, ordinary differential equations, Euler’s method.
- Multivariate calculus (equiv. to SF1626/SF1674) including partial derivatives, multivariate chain rule, change of variables, gradients, Hessian matrices, Jacobian matrices.
- Machine learning (equiv. to DD1420/DD2421 or DD2380/ID1214) including optimisation, convexity, loss functions, train/val/test sets, k-fold cross validation, mean squared error, classification, accuracy, overfitting, Bayes-optimal error rate, Gaussian mixture models, high-dimensional geometry (curse of dimensionality). Information theory for machine learning including entropy, bits, differential entropy, cross-entropy.
- Deep learning (equiv. to DD2424/DD2437) including feed-forward networks, activation functions, ReLU, softmax, stochastic gradient descent, updates, epochs, CNNs, RNNs, mean and variance normalisation, initialisation, hyperparameters.
Literature
Examination and completion
If the course is discontinued, students may request to be examined during the following two academic years.
Grading scale
A, B, C, D, E, FX, F
Examination
- LAB1 - Digital Assignment with Oral Comprehension Questions, 7.5 credits, grading scale: A, B, C, D, E, FX, F
Based on recommendation from KTH’s coordinator for disabilities, the examiner will decide how to adapt an examination for students with documented disability.
The examiner may apply another examination format when re-examining individual students.
Examiner
Ethical approach
- All members of a group are responsible for the group's work.
- In any assessment, every student shall honestly disclose any help received and sources used.
- In an oral assessment, every student shall be able to present and answer questions about the entire assignment and solution.
Further information
Course room in Canvas
Offered by
Main field of study
Education cycle
Supplementary information
In this course, the EECS code of honor applies, see:
http://www.kth.se/en/eecs/utbildning/hederskodex