
ML Sessions 2019

Unsupervised / Semi-Supervised Learning

Upcoming session: 5th December, 14:00 - 15:30, Teknikringen 14, room 523 (5th floor).

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

Read papers

2019-10-24 Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods
2019-11-07 Block Neural Autoregressive Flow
2019-11-21 Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
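For reference ahead of the upcoming session: the paper above compares regularised VAE objectives, of which beta-VAE is the simplest. Below is a minimal sketch of that loss; the function name, example values and the choice of beta are illustrative assumptions, not material from the session.

```python
import numpy as np

# Sketch of the beta-VAE objective, one of the regularised autoencoder losses
# compared in "Challenging Common Assumptions in the Unsupervised Learning of
# Disentangled Representations": negative reconstruction log-likelihood plus a
# beta-weighted KL term to a standard normal prior. Values are illustrative.
def beta_vae_loss(recon_log_likelihood, mu, log_var, beta=4.0):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ) for a diagonal Gaussian posterior
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return -recon_log_likelihood + beta * kl

# Example: one datapoint, 4 latent dimensions, posterior equal to the prior (KL = 0).
print(beta_vae_loss(recon_log_likelihood=-35.2, mu=np.zeros(4), log_var=np.zeros(4)))
```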

Generalization in Machine Learning

Read papers

2019-05-02 Generalization in Deep Learning
2019-05-16 Understanding deep learning requires rethinking generalization
2019-06-13 Emergence of Invariance and Disentanglement in Deep Representations
2019-09-11 Three Factors Influencing Minima in SGD
2019-09-26 Exploring Generalization in Deep Learning
2019-10-10 On the importance of single directions for generalization
Predicting the Generalization Gap in Deep Networks with Margin Distributions

______________________________________________________________________________

MCMC #3 - 7. Mar. 2019 Markov Chain Monte Carlo and Variational Inference: Bridging the Gap

http://proceedings.mlr.press/v37/salimans15.pdf

MCMC #2 - 21. Feb. 2019 An Introduction to MCMC for Machine Learning (second part plus coding)

https://www.cs.ubc.ca/~arnaud/andrieu_defreitas_doucet_jordan_intromontecarlomachinelearning.pdf

MCMC #1 - 7. Feb. 2019 An Introduction to MCMC for Machine Learning (part 1)

https://www.cs.ubc.ca/~arnaud/andrieu_defreitas_doucet_jordan_intromontecarlomachinelearning.pdf
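Since MCMC #2 included a coding part, here is a minimal random-walk Metropolis-Hastings sampler in the spirit of the introductory paper above; the target density, step size and sample count are illustrative assumptions, not the code used in the session.

```python
import numpy as np

# Minimal random-walk Metropolis-Hastings sketch (illustrative assumptions,
# not the session's code). Target: an unnormalised 1-D mixture of two Gaussians.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def metropolis_hastings(n_samples=10_000, step=1.0, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        proposal = x + step * rng.normal()        # symmetric Gaussian proposal
        log_accept = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_accept:    # accept with probability min(1, ratio)
            x = proposal
        samples[i] = x                            # on rejection, keep the current state
    return samples

samples = metropolis_hastings()
print(samples.mean(), samples.std())              # roughly 0 and ~2.2 for this target
```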

______________________________________________________________________________

Interpretable ML #6 - Application Session 24. Jan. 2019

e-SNLI: Natural Language Inference with Natural Language Explanations (Judith, Petra)
How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation (Sanne, Sarah)
Methods for Interpreting and Understanding Deep Neural Networks (Federico, Matteo)
Network Dissection: Quantifying Interpretability of Deep Visual Representations (Louise)
Grad-CAM++ (Sofia, Marcus)

Interpretable ML #5 - 7. Jan. 2019 A causal framework for explaining the predictions of black-box sequence-to-sequence models

https://people.csail.mit.edu/tommi/papers/AlvJaa_EMNLP2017.pdf