After an initial review of probabilistic models for multivariate data and of the principles of Bayesian learning, as opposed to point estimation of model parameters, these models and methods are developed further for applications in regression and classification. The following main topics are covered:
Generalized linear models for regression and classification
Neural networks
Kernel methods, especially sparse approaches, such as the Relevance Vector Machine (RVM) and Support Vector Machine (SVM)
Graphical models, including Bayesian networks and Markov random fields
Mixture models and Expectation Maximization
Approximate inference methods, e.g., variational inference with factorized approximations
Monte Carlo sampling methods
Probabilistic (Bayesian) principal component analysis (PCA)
Models for sequential data, especially hidden Markov models
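To illustrate the contrast between Bayesian learning and point estimates that runs through these topics, the sketch below fits a linear regression model both ways: a maximum-likelihood (least-squares) point estimate of the weights, and a Gaussian posterior over the weights under a zero-mean Gaussian prior. This is a minimal example, not part of the course material itself; the synthetic data, the prior precision `alpha`, and the noise precision `beta` are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data: t = 0.5 x - 0.3 + Gaussian noise
N = 30
x = rng.uniform(-1.0, 1.0, N)
t = 0.5 * x - 0.3 + rng.normal(0.0, 0.2, N)

Phi = np.column_stack([np.ones(N), x])   # design matrix with a bias column
alpha, beta = 2.0, 25.0                  # prior precision, noise precision (assumed known)

# Point estimate: maximum likelihood = ordinary least squares
w_ml, *_ = np.linalg.lstsq(Phi, t, rcond=None)

# Bayesian treatment: posterior over weights is N(w | m_N, S_N) with
#   S_N^{-1} = alpha I + beta Phi^T Phi,   m_N = beta S_N Phi^T t
S_N = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

def predictive_var(x_new):
    """Predictive variance at x_new: noise term plus weight-uncertainty term."""
    phi = np.array([1.0, x_new])
    return 1.0 / beta + phi @ S_N @ phi
```

The posterior mean `m_N` is shrunk toward the prior mean (zero) relative to the point estimate `w_ml`, and the predictive variance exceeds the pure noise level `1/beta` because it also accounts for uncertainty in the weights.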
