Week 36, 2013
Mon 2 Sep 17:00-19:00 Lecture 1, Introduction
Autumn 2013
Lecture
Location: D1

Readings: Marsland, Chapter 1

Structure of the Course

  • What will the course cover?
  • How are labs and examination handled?

Learning Machines

  • What do we mean by a "Learning Machine"?
  • What can learning algorithms be used for?
  • How can a simple learning program be constructed?

Slides from this lecture:

  • Slides lecture 1
  • Slides lecture 1 (part II)

Thu 5 Sep 15:00-17:00 Lecture 2, Concept Learning [Örjan]
Autumn 2013
Lecture
Location: M1

Concept Learning

Readings: Marsland, Chapter 1

  • What is Concept Learning?
  • Important terminology: positive and negative examples, hypotheses
  • The structure of the hypothesis space: general and specific hypotheses
  • How can one find a hypothesis that conforms with data?
  • The Find-S algorithm
  • Why does a naive List-Then-Eliminate algorithm not work in practice?
  • How can the choice of learning method influence the result?
  • "Bias-Free Learning", is it possible? Is it desirable?

Slides on Concept Learning
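The Find-S algorithm listed above is short enough to sketch in code. This is an illustrative toy implementation (the attribute names and data are invented, not taken from the course materials):

```python
# Find-S sketch: start from the most specific hypothesis and
# generalize it just enough to cover each positive example.

def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs; label True = positive."""
    positives = [x for x, label in examples if label]
    # Most specific hypothesis consistent with the first positive example.
    h = list(positives[0])
    for x in positives[1:]:
        for i, value in enumerate(x):
            if h[i] != value:
                h[i] = "?"  # generalize: this attribute is no longer constrained
    return h

# Toy "EnjoySport"-style data: (sky, temp, humidity), label
data = [
    (("sunny", "warm", "normal"), True),
    (("sunny", "warm", "high"), True),
    (("rainy", "cold", "high"), False),
]
print(find_s(data))  # ['sunny', 'warm', '?']
```

Note that Find-S only looks at positive examples; the negative example plays no role, which is one of its weaknesses.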

Week 37, 2013
Mon 9 Sep 17:00-19:00 Lecture 3, Decision Trees [Atsuto]
Autumn 2013
Lecture
Location: D1

Decision Trees

Readings: Marsland, Chapter 6

  • What is a Decision Tree?
  • When are decision trees useful?
  • How can one select what questions to ask?
  • What do we mean by Entropy for a data set?
  • What do we mean by the Information Gain of a question?
  • What Bias do we get if we maximize the Information Gain?
  • What does William of Ockham (1285-1349) have to do with this?
  • Is it possible to learn too much?
  • How can one prevent the algorithms from learning irrelevant details?

Slides on Decision Trees
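Entropy and Information Gain, which several of the questions above revolve around, can be computed directly from counts. A small sketch (the toy data is invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction from splitting the data on one attribute index."""
    base = entropy(labels)
    n = len(labels)
    gain = base
    for value in set(row[attribute] for row in rows):
        subset = [l for row, l in zip(rows, labels) if row[attribute] == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy data: attribute 0 separates the classes perfectly, attribute 1 does not.
rows = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = ["+", "+", "-", "-"]
print(information_gain(rows, labels, 0))  # 1.0
print(information_gain(rows, labels, 1))  # 0.0
```

A decision-tree learner such as ID3 asks, at each node, the question (attribute) with the highest information gain.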

Week 38, 2013
Mon 16 Sep 17:00-19:00 Lecture 4, Artificial Neural Networks [Örjan]
Autumn 2013
Lecture
Location: D1

Artificial Neural Networks

Readings: Marsland, Chapters 2-3

  • Why are these algorithms called Neural Networks?
  • How does a Single Layer Perceptron work?
  • What can a Single Layer Perceptron learn?
  • The Perceptron Learning algorithm
  • The Delta-rule, error minimization
  • The Multi Layer Perceptron
  • Error Backpropagation

Slides on ANN
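The Perceptron Learning algorithm from the list above fits in a few lines. A minimal sketch on the AND function (the learning rate and epoch count are arbitrary illustrative choices):

```python
# Single layer perceptron trained with the perceptron learning rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - y
            # Perceptron update: adjust weights toward misclassified targets.
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in AND])  # [0, 0, 0, 1]
```

AND is linearly separable, so the rule converges; on XOR the same loop would run forever, which motivates the multi layer perceptron and backpropagation.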

Week 39, 2013
Mon 23 Sep 15:00-17:00 Lecture 5, Support Vector Machines [Örjan]
Autumn 2013
Lecture
Location: E1

Support Vector Machines

Readings: Marsland, Chapter 5

  • How does linear separation behave in high-dimensional spaces?
  • What do empirical and structural risk refer to?
  • Why are classification margins good for generalization performance?
  • When are slack variables useful?
  • How can a support vector machine be trained?
  • Why is the dual optimization problem often easier to solve?
  • What is a support vector?
  • What is the advantage of using kernel functions?

Slides on SVM
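The advantage of kernel functions can be checked numerically: a kernel evaluates an inner product in a feature space without ever constructing the feature vectors. A sketch with the polynomial kernel K(x, z) = (x · z)², whose explicit feature map for 2-D input is known:

```python
import math

def phi(x):
    """Explicit quadratic feature map for 2-D input."""
    x1, x2 = x
    return (x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2)

def kernel(x, z):
    """Polynomial kernel K(x, z) = (x . z)^2."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

x, z = (1.0, 2.0), (3.0, 4.0)
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))
print(kernel(x, z), explicit)  # equal (121.0) up to floating-point rounding
```

The kernel costs a handful of multiplications regardless of how large the implicit feature space is, which is what makes non-linear SVMs practical.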

Fri 27 Sep 15:00-17:00 Lecture 6, Ensemble Learning [Atsuto]
Autumn 2013
Lecture
Location: E1

Bagging and Boosting

Readings: Marsland, Chapter 7

  • The Wisdom of Crowds
  • Characterization of classifiers
  • Bagging
  • Boosting

Slides on Ensemble Learning
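Bagging can be sketched in a few lines: train many weak learners on bootstrap resamples of the data and combine them by majority vote. The decision stump and the 1-D data below are invented for illustration:

```python
import random
from collections import Counter

def train_stump(data):
    """Pick the threshold on a 1-D feature that minimizes training error."""
    best = None
    for t in [x for x, _ in data]:
        err = sum(1 for x, y in data if (1 if x > t else 0) != y)
        if best is None or err < best[1]:
            best = (t, err)
    return best[0]

def bagging(data, n_models=25, seed=0):
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        stumps.append(train_stump(sample))
    def predict(x):
        votes = Counter(1 if x > t else 0 for t in stumps)
        return votes.most_common(1)[0][0]          # majority vote
    return predict

data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
predict = bagging(data)
print([predict(v) for v in (0.1, 0.2, 0.8, 0.9)])  # [0, 0, 1, 1]
```

Boosting differs in that the resampling (or reweighting) is not uniform: each new learner focuses on the examples the previous ones got wrong.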

Week 40, 2013
Mon 30 Sep 16:00-18:00 Lecture 7, Probability-based Learning [Atsuto]
Autumn 2013
Lecture
Location: F2

Bayesian Learning

Readings: Marsland, Chapters 8 and 15.1

  • Bayes Theorem
  • MAP, ML hypotheses
  • MAP learners
  • Naive Bayes learner
  • Expectation Maximization (EM) algorithm

Slides on Probability-based Learning
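The Naive Bayes learner from the list above can be sketched directly from counts: P(class | x) ∝ P(class) · Π P(attribute_i | class), assuming the attributes are conditionally independent given the class. The toy weather data is invented:

```python
from collections import Counter, defaultdict

def train_nb(examples):
    class_counts = Counter(label for _, label in examples)
    attr_counts = defaultdict(Counter)  # (attr_index, label) -> value counts
    for x, label in examples:
        for i, v in enumerate(x):
            attr_counts[(i, label)][v] += 1
    def predict(x):
        best, best_p = None, -1.0
        for label, n in class_counts.items():
            p = n / len(examples)               # prior P(class)
            for i, v in enumerate(x):
                p *= attr_counts[(i, label)][v] / n  # likelihood P(x_i | class)
            if p > best_p:
                best, best_p = label, p
        return best                              # MAP class under naive Bayes
    return predict

# Toy weather data: (outlook, windy) -> play?
data = [
    (("sunny", "no"), "yes"),
    (("sunny", "yes"), "yes"),
    (("rainy", "no"), "yes"),
    (("rainy", "yes"), "no"),
    (("rainy", "yes"), "no"),
]
predict = train_nb(data)
print(predict(("sunny", "no")))  # yes
```

A real implementation would smooth the counts (e.g. Laplace smoothing) so that a single unseen attribute value does not zero out the whole product.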

Thu 3 Oct 16:00-18:00 Lecture 8, Evolutionary Algorithms [Atsuto]
Autumn 2013
Lecture
Location: E1

Genetic Algorithms

Readings: Marsland, Chapter 12

  • In what way can evolution be regarded as an algorithm?
  • What can be optimized using Genetic Algorithms?
  • How are potential solutions represented?
  • How do we represent the goal?
  • Chromosomes, Populations and Generations
  • What makes GA different from other optimization methods?
  • What do we mean by Genetic Programming?
  • What can go wrong?

Slides on Genetic Algorithm
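The basic GA loop of selection, crossover and mutation can be sketched on the classic OneMax toy problem (maximize the number of 1-bits); the population size, tournament size and mutation rate below are arbitrary illustrative choices:

```python
import random

def onemax(chromosome):
    """Toy fitness: number of 1-bits in the chromosome."""
    return sum(chromosome)

def evolve(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            a = max(rng.sample(pop, 3), key=onemax)
            b = max(rng.sample(pop, 3), key=onemax)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_bits):              # bit-flip mutation
                if rng.random() < 0.01:
                    child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=onemax)

best = evolve()
print(onemax(best))  # close to the optimum of 20
```

The chromosomes, the population and the generation loop map directly onto the terminology in the bullet list above.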

Week 41, 2013
Mon 7 Oct 17:00-19:00 Lecture 9, Reinforcement Learning [Örjan]
Autumn 2013
Lecture
Location: D1

Reinforcement Learning

Readings: Marsland, Chapter 13

  • Is it possible to learn when nobody tells you the correct answer?
  • Central terms: State, Action, Reward
  • More terms: Value function (cumulative value), Policy
  • How can we judge the consequences of our actions?
  • What do we mean by an optimal behavior?
  • Is it possible to learn what is the best thing to do in each state?
  • Is it possible to learn faster by planning ahead?

Slides on Reinforcement Learning
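Tabular Q-learning illustrates the central terms above (state, action, reward, value function, policy). A sketch on an invented five-state corridor world where only reaching the rightmost state gives reward; all parameters are illustrative:

```python
import random

def q_learning(episodes=300, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    rng = random.Random(seed)
    n_states, actions = 5, (-1, +1)        # move left / move right
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != 4:                       # state 4 is the goal
            if rng.random() < epsilon:      # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            s2 = min(max(s + actions[a], 0), 4)
            r = 1.0 if s2 == 4 else 0.0
            # Q-learning update: one-step temporal difference toward
            # the best value of the next state.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(4)]
print(policy)  # [1, 1, 1, 1]  (always move right)
```

Nobody tells the learner the correct action; the greedy policy emerges from the rewards alone, which answers the first question in the list.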

Thu 10 Oct 10:00-12:00 Lecture 10, Graphical Models [Atsuto]
Autumn 2013
Lecture
Location: FR4

Graphical Models

Readings: Marsland, Chapter 15

  • How can conditional probabilities be represented as a graph?
  • What is a Bayesian network?
  • What is a Hidden Markov Model?
  • What is a Dynamic Bayesian network?

Slides on Bayesian Networks

Slides on Dynamic Bayesian Networks
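Conditional probabilities represented as a graph can be queried by enumeration. A sketch on a tiny invented network (Rain → WetGrass ← Sprinkler) with made-up probability tables:

```python
from itertools import product

# Prior and conditional probability tables (invented for illustration).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(wet | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    """Full joint probability, factored along the network structure."""
    p = P_rain[rain] * P_sprinkler[sprinkler]
    p_w = P_wet[(rain, sprinkler)]
    return p * (p_w if wet else 1 - p_w)

# P(rain | wet) = P(rain, wet) / P(wet), summing out the sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 3))  # 0.74
```

Enumeration is exponential in the number of hidden variables; the point of the network structure is that the joint factors into small tables, which smarter inference algorithms exploit.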

Week 42, 2013
Mon 14 Oct 17:00-19:00 Lecture 11, Learning Theory [Örjan]
Autumn 2013
Lecture
Location: D1

Learning Theory

  • Is it possible to measure how hard a learning task is?
  • What can go wrong during learning?
  • What do we mean when we say that a hypothesis is "approximately correct"?
  • Is it possible to estimate the number of training examples needed?
  • Are there learning tasks that take exponentially long time?
  • PAC-learnable
  • VC-dimension
  • Can we estimate how many errors a learner must make?

Slides on Learning Theory
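One standard way to estimate the number of training examples needed: for a finite hypothesis space H, m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice for any consistent learner to be probably approximately correct, i.e. true error below ε with probability at least 1 − δ. A small calculation (the numbers are illustrative):

```python
import math

def pac_sample_bound(h_size, eps, delta):
    """Sufficient sample size m >= (1/eps) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)

# e.g. |H| = 2^10 hypotheses, 5% error tolerance, 95% confidence:
print(pac_sample_bound(2 ** 10, 0.05, 0.05))  # 199
```

The bound grows only logarithmically in |H| and 1/δ but linearly in 1/ε; for infinite hypothesis spaces the VC-dimension plays the role of ln|H|.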

Thu 17 Oct 10:00-12:00 Lecture 12, Summary and Outlook
Autumn 2013
Lecture
Location: D1