Workshops and conferences

Workshop on Mathematics for Complex Data, June 24-26, 2019

The purpose of this workshop is to bring together researchers interested in the mathematics of complex data. There will be talks on mathematical theory and methods related to data analysis and artificial intelligence. 


Daniel Persson, Chalmers
Johan Jonasson, Chalmers
Annika Lang, Chalmers
Wojciech Chacholski, KTH
Martina Scolamiero, KTH
Anna Persson, KTH
Henrik Hult, KTH
Guo-Jhen Wu, Brown
Joel Larsson, Warwick
Tatyana Turova, Lund
Natasa Sladoje, Uppsala
Thomas Schön, Uppsala


Please register here for the workshop (registration is free of charge).


KTH Campus, Valhallavägen, Stockholm.

Past events

Workshop on Mathematics for Complex Data, May 30-31, 2018

The purpose of this workshop was to bring together researchers interested in the mathematics of complex data. There were talks on mathematical methods for data analysis as well as presentations of complex data in applications.


The Brummer & Partners MathDataLab Workshop on Mathematics for Complex Data took place in room K1, Teknikringen 56, floor 3, KTH Campus.




May 30th
13.00 - 13.45 Alexandre Proutiere, KTH, Automatic Control
13.45 - 14.30 Atsuto Maki, KTH, RPL
14.30 - 15.00 Coffee Break
15.00 - 15.45 Salla Franzen, SEB

16.00 - 17.00 Reception

May 31st
09.00 - 09.45 Anna Persson, Chalmers
09.45 - 10.30 Jens Berg, Uppsala
10.30 - 11.00 Coffee Break
11.00 - 11.45 Martina Scolamiero, EPFL
11.45 - 12.30 David Eklund, KTH Mathematics


Alexandre Proutiere, KTH, Automatic Control
Title: Inference from graphical data: fundamental limits and optimal algorithms
Abstract: We investigate the problem of cluster recovery in random graphs generated according to models extending the celebrated Stochastic Block Model. To reconstruct the clusters, we sequentially sample the edges of the graph, either randomly, adaptively, or by following a random walk on the graph. Given a sample budget, the objective is to devise a clustering algorithm that recovers the hidden clusters with the highest possible accuracy. We develop a generic method to derive tight upper bounds on the reconstruction accuracy (satisfied by any algorithm), and, inspired by this fundamental limit, devise asymptotically optimal clustering algorithms. We further study the design of clustering algorithms with limited memory and computational complexity.
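The optimal algorithms from the talk are not reproduced here; as a toy illustration of the setting, the sketch below samples a two-community Stochastic Block Model and recovers the communities by spectral clustering. All names and parameter values are illustrative, not taken from the talk.

```python
import numpy as np

def sample_sbm(n, p_in, p_out, rng):
    """Sample a two-community stochastic block model on n nodes
    (communities are the first and second halves of the node set)."""
    labels = np.array([0] * (n // 2) + [1] * (n - n // 2))
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(float), labels

def spectral_cluster(adj):
    """Cluster nodes by the sign of the second eigenvector of the
    symmetrically normalized adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(norm_adj)
    # eigh returns ascending eigenvalues; take the second-largest.
    return (eigvecs[:, -2] > 0).astype(int)

rng = np.random.default_rng(0)
adj, labels = sample_sbm(200, p_in=0.5, p_out=0.05, rng=rng)
guess = spectral_cluster(adj)
# Cluster labels are only defined up to a swap of the two communities.
accuracy = max(np.mean(guess == labels), np.mean(guess != labels))
```

With this well-separated choice of p_in and p_out the hidden partition is recovered almost perfectly; the talk concerns the much harder regime where the sample budget, not the model gap, is the binding constraint.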

Atsuto Maki, KTH, RPL
Title: Transfer learning and multi-task learning in deep convolutional networks
Abstract: Deep Convolutional Networks (ConvNets) have become prevalent in computer vision in the last several years. The talk is about Transfer Learning and Multi-Task Learning in ConvNets, which we have been studying at the Robotics, Perception, and Learning (RPL) Lab. We will first look at the utility of global image descriptors given by ConvNets for visual recognition tasks in the context of transfer learning. Then we will turn to Multi-Task Learning for the tasks of semantic segmentation and object detection, which in general involves some challenges in designing a global objective function. Time permitting, we will also visit the topic of robot learning with the new Deep Predictive Policy Training using Reinforcement Learning.

Salla Franzen, SEB
Title: Data-driven banking
Abstract: The amount of data available for harvesting and exploring increases every second. In finance, the time has come to leverage the knowledge, expertise, and experience of financial experts and combine them with the new technologies and open software programming methodologies available. This talk will focus on some examples of applications of these new technologies and methodologies to big data sets in finance and on the enormous potential of collaborations between academics and industry experts.

Anna Persson, Chalmers
Title: A multiscale method for parabolic equations
Abstract: We study numerical solutions for parabolic equations with highly varying (multiscale) coefficients. Such equations typically appear when modelling heat diffusion in heterogeneous media like composite materials. For these problems, classical polynomial-based finite element methods fail to approximate the solution well unless the mesh width resolves the variations in the data. This leads to issues with computational cost and available memory, which calls for new approaches and methods. In this talk I will present a multiscale method based on localized orthogonal decomposition, first introduced by Målqvist and Peterseim (2014). The focus will be on how to generalize this method to time-dependent problems of parabolic type.

Jens Berg, Uppsala, Mathematics
Title: Data-driven discovery of partial differential equations
Abstract: The current era is providing us with an abundance of high-quality data. A long-standing problem in the natural sciences is how to transform the observed data into a predictive mathematical model. In this talk we will use the recent advances in machine learning and deep learning to analyze complex data sets and discover their governing partial differential equations (PDEs). The method will be demonstrated for data sets which have been generated by known PDEs, and finally we will discuss some applications where traditional modeling by first physical principles is intractable.
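The talk's deep-learning method is not reproduced here, but the basic idea of recovering a governing PDE from data can be sketched with sparse regression in the spirit of SINDy-style methods. The sketch below generates data from the heat equation u_t = u_xx, then identifies that equation from a small library of candidate terms; all parameter choices are illustrative.

```python
import numpy as np

# Synthetic data from a known PDE, u_t = u_xx (the heat equation),
# using the exact solution u(x, t) = exp(-t) sin(x) + exp(-4t) sin(2x).
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 1, 100)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-T) * np.sin(X) + np.exp(-4 * T) * np.sin(2 * X)

# Finite-difference approximations of the derivatives.
u_t = np.gradient(u, t, axis=1)
u_x = np.gradient(u, x, axis=0)
u_xx = np.gradient(u_x, x, axis=0)

# Discard boundary points, where one-sided differences are less accurate.
interior = (slice(2, -2), slice(2, -2))

# Candidate library of terms; fit u_t as a combination and hard-threshold
# small coefficients (one step of sequential thresholded least squares).
library = np.stack(
    [u[interior].ravel(), u_x[interior].ravel(), u_xx[interior].ravel()],
    axis=1,
)
coeffs, *_ = np.linalg.lstsq(library, u_t[interior].ravel(), rcond=None)
coeffs[np.abs(coeffs) < 0.1] = 0.0
# The surviving coefficients should be close to (0, 0, 1), i.e. u_t = u_xx.
```

The deep-learning approaches discussed in the talk replace both the finite-difference derivatives and the fixed term library with learned components, but the identification principle is the same.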

Martina Scolamiero, EPFL
Title: Multivariate Methods in Topological Data Analysis
Abstract: In Topological Data Analysis we study the shape of data using topology. Similarly to clustering, shape characteristics highlight correlation patterns and describe the structure of the data, which can then be exploited for prediction tasks. Multivariate methods in TDA are especially interesting as they can be used to combine and study heterogeneous sources of information about a dataset. In this talk I will focus on multi-parameter persistence, a rich and challenging multivariate method. In particular, I will describe a framework that allows one to compute a new class of stable invariants for multi-parameter persistence. The key element underlying this novel approach is a metric defined by `noise systems'. A filter function is usually chosen to highlight properties we want to examine in a dataset. Similarly, our new metric allows some features of datasets to be considered as noise. Examples of topological analysis on real world data will be presented throughout the talk, with a specific focus on applications to neuroscience and psychiatry.

David Eklund, KTH, Mathematics
Title: The algebraic geometry of bottlenecks
Abstract: I will talk about bottlenecks of algebraic varieties in complex affine space. The bottlenecks are lines which are normal to the variety at two distinct points. Such pairs of points, and the distance between them, are of major importance in the data analysis of real varieties. I will explain the relation to the so-called reach of a smooth variety, which appears naturally in the context of topological data analysis. I will address two interlinked problems: the enumerative problem of counting the number of bottlenecks and the computational problem of formulating efficient numerical methods to compute bottlenecks.

Opening workshop, Nov 17, 2017

The opening of the Brummer & Partners MathDataLab took place on Nov 17, 2017.


The opening workshop took place at Open Lab, Valhallavägen 79, Stockholm.




08:45 Breakfast

09:15 Welcome, Sigbritt Karlsson, KTH president

09:20 Presentation of Brummer & Partners MathDataLab, Henrik Hult, KTH

09:30-10:15 Randomized algorithms for large scale linear algebra and data analytics, Per-Gunnar Martinsson, Oxford University

Abstract. The talk will describe how randomized projections can be used to effectively, accurately, and reliably solve important problems that arise in data analytics and large scale linear algebra. We will focus in particular on accelerated algorithms for computing full or partial matrix factorizations such as the eigenvalue decomposition, the QR factorization, etc. Randomized projections are used in this context to reduce the effective dimensionality of intermediate steps in the computation. The resulting algorithms execute faster on modern hardware than traditional algorithms, and are particularly well suited for processing very large data sets.
The algorithms described are supported by a rigorous mathematical analysis that exploits recent work in random matrix theory. The talk will briefly review some representative theoretical results.
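A minimal sketch of the randomized approach described above, assuming only NumPy: a Gaussian test matrix sketches the range of A, and an exact SVD of the much smaller projected matrix yields a truncated factorization. The function name and parameter choices are illustrative, not from the talk.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, rng=None):
    """Truncated SVD via a randomized range finder: project A onto the
    range of A @ G for a random Gaussian matrix G, then take an exact
    SVD of the small projected matrix B = Q.T @ A."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Sketch the column space of A with a Gaussian test matrix.
    G = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(A @ G)      # orthonormal basis for range(A @ G)
    B = Q.T @ A                     # small (rank + oversample) x n matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small                 # lift back to the original space
    return U[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-20 test matrix: the sketch captures its range exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 400))
U, s, Vt = randomized_svd(A, rank=20, rng=2)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For a matrix of exact rank 20 the relative reconstruction error is at machine precision; the analysis mentioned in the abstract quantifies the error for matrices with slowly decaying spectra, where the oversampling parameter matters.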

10:15-11:00 From RNA-seq time series data to models of regulatory networks, Konstantin Mischaikow, Rutgers University

Abstract. We will describe a novel approach to nonlinear dynamics based on topological and combinatorial ideas. An important consequence of this approach is that it is both computationally accessible and allows us to rigorously describe dynamics observable at a fixed scale over large sets of parameter values. To demonstrate the value of this approach we will consider RNA-seq time series data and propose potential regulatory networks based on how robustly each network is capable of reproducing the observed dynamics.

11:00-12:30 Lunch break (lunch not included)

12:30-13:15 What is persistence? Wojciech Chacholski, KTH

Abstract. What does it mean to understand shape? How can we measure it and make statistical conclusions about it? Do data sets have shapes, and if so, how can their shape be used to extract information about the data? There are many possible answers to these questions. Topological data analysis (TDA) aims at providing some of them using homology. In my presentation, aimed at a broader audience, I will describe the essence of TDA. I will illustrate how TDA can give a machine the ability to learn geometric shapes and how this ability can be used in data analysis.
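The simplest instance of the persistence idea is zero-dimensional homology: track when connected components of a point cloud are born and when they merge as a distance scale grows. The sketch below, a hypothetical illustration not taken from the talk, computes these merge scales with a union-find over edges sorted by length (single-linkage in disguise).

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """Zero-dimensional persistence of a Vietoris-Rips filtration:
    every point is born at scale 0; a component dies at the length of
    the edge that merges it into another component."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Process edges in order of increasing length (the filtration).
    edges = sorted(
        (float(np.linalg.norm(points[i] - points[j])), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(dist)  # one component dies at this scale
    return deaths  # n - 1 finite deaths; one component lives forever

# Two well-separated clusters: one death scale dominates the rest,
# revealing that the data has two components at intermediate scales.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
deaths = sorted(h0_persistence(pts))
```

Features that die quickly are treated as noise, while long-lived ones (here, the single large death scale) are taken as genuine shape; higher-dimensional homology extends the same birth/death bookkeeping to loops and voids.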

13:15-14:00 Some mathematical challenges in the analysis of complex data, Henrik Hult, KTH

Abstract. In this talk I will give an overview of some recent advances in the analysis of complex data. The talk will emphasize questions related to the training and architecture of neural networks, and I will try to highlight some mathematical challenges in this field.