KTH's seminar series on probability, statistics, data science and related topics.
The seminar takes place on Mondays, 15:15–16:15. For now all talks are held via Zoom, with meeting ID 62144698204.
Next Talk
Monday, 29 March, 15:15 Markus Fischer (Padua): Correlated equilibria and mean field games

Mean field games are limit models for symmetric N-player games, as N tends to infinity, where the prelimit models are solved in terms of Nash equilibria. A generalization of the notion of Nash equilibrium, due to Robert Aumann (1974, 1987), is that of correlated equilibrium. In a simple discrete, non-static setting, we will discuss mean field games based on correlated equilibria. We give a definition of correlated mean field game solution, prove that it arises as the limit of symmetric N-player correlated equilibria in restricted Markov open-loop strategies, and construct approximate N-player equilibria starting from a correlated mean field game solution. We also compare our definition to the one by D. Lacker of weak solutions for mean field games without common noise, and give an example of correlated mean field game solutions with a non-deterministic flow of measures.
Joint work with Luciano Campi, University of Milan "La Statale"
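As a concrete warm-up for Aumann's notion in the simplest possible (static, two-player) setting, the Python sketch below computes a correlated equilibrium by linear programming. The game ("Chicken"), its payoffs, and the use of scipy are illustrative choices only and are not taken from the talk, which concerns the dynamic mean-field setting.

import numpy as np
from scipy.optimize import linprog

# Toy 2x2 game ("Chicken"); payoffs are illustrative, not from the talk.
u1 = np.array([[0.0, 7.0], [2.0, 6.0]])   # row player
u2 = np.array([[0.0, 2.0], [7.0, 6.0]])   # column player
n = 2                                     # actions per player
idx = lambda a, b: a * n + b              # flatten p[a, b] to a length-4 vector

A_ub, b_ub = [], []
# Obedience constraints (Aumann): no player gains by deviating from the
# recommended action, conditional on having received that recommendation.
for a in range(n):              # row player recommended a, deviates to ap
    for ap in range(n):
        if ap == a:
            continue
        row = np.zeros(n * n)
        for b in range(n):
            row[idx(a, b)] = -(u1[a, b] - u1[ap, b])   # written in <= 0 form
        A_ub.append(row); b_ub.append(0.0)
for b in range(n):              # column player recommended b, deviates to bp
    for bp in range(n):
        if bp == b:
            continue
        row = np.zeros(n * n)
        for a in range(n):
            row[idx(a, b)] = -(u2[a, b] - u2[a, bp])
        A_ub.append(row); b_ub.append(0.0)

# Probabilities sum to one; maximise the expected total payoff.
A_eq, b_eq = [np.ones(n * n)], [1.0]
c = -(u1 + u2).flatten()

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print("correlated equilibrium:\n", res.x.reshape(n, n))
print("expected total payoff:", -res.fun)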
Upcoming Talks
Monday, 12 April, 15:15 Will Leeb (Minnesota)
Monday, 26 April, 15:15 Yulong Lu (UMASS Amherst)
Monday, 3 May, 15:15 Guo-Jhen Wu (KTH)
Monday, 10 May, 15:15 Oliver Tse (Eindhoven)
Monday, 24 May, 15:15 Michela Ottobre (Edinburgh)
Monday, 7 June, 15:15 Julia Gaudio (MIT)
Previous Talks
Monday, 22 March, 15:15 Harsha Honnappa (Purdue): New Insights on Queues in Random Environments

Queueing systems are often subject to clear nonstationarities that arise as a consequence of diurnal, seasonal or stochastic effects, the latter emerging when the system operates in a random environment. The modeling and performance analysis of queues in random environments, in particular, have attracted significant interest in the recent literature. In this talk I will present recent work on approximating performance metrics of infinite-server queueing systems that are fed by arrival processes with stochastic arrival intensities that fluctuate rapidly. This setting includes Cox processes, as well as self-exciting models such as the Hawkes process. At the outset, it is clear that computing performance metrics in this setting cannot be achieved in closed form, raising the question of how to compute approximations. In particular, one is confronted by the question of whether to use an annealed setting (where the performance measures are “averaged” or “annealed” over the random environment) or a quenched setting (where the performance measures are conditioned on the random environment). I will present our ongoing work studying both settings, establishing asymptotic approximations in three flavors: asymptotic “refinements”, fluid-scale limits and diffusive-scale limits of performance measures. In the case of infinite-server queues driven by Cox processes, our results show that there exists a parameter regime where the quenched limits exist (and coincide with the annealed limits) and a complementary regime where they may not, suggesting that the quenched analysis, though seemingly more robust and simpler to carry out, should be used with caution.
This is based on joint work with Peter W. Glynn, Zeyu Zheng at Stanford University and Samy Tindel, Aaron Nung-Kwan Yip and Yiran Liu at Purdue University.
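To make the annealed/quenched distinction concrete, here is a minimal Monte Carlo sketch for an infinite-server queue, started empty, fed by a two-state Markov-modulated (Cox) arrival process with exponential services: conditionally on the environment, the number in system at time T is Poisson with mean equal to the integral of lambda(s) * P(service > T - s) over [0, T], so a quenched mean can be computed per environment path and the annealed mean is their average. The model and all parameters are illustrative choices, not the asymptotic regimes analysed in the talk.

import numpy as np

rng = np.random.default_rng(0)

# Toy Markov-modulated (Cox) intensity: two environment levels switching at
# rate q; exponential services with rate mu; queue starts empty at time 0.
lam_levels = np.array([5.0, 20.0])     # illustrative values, not from the talk
q, mu, T, dt = 2.0, 1.0, 10.0, 0.01
grid = np.arange(0.0, T, dt)

def intensity_path():
    """Simulate the two-state environment on a fine time grid."""
    state = rng.integers(2)
    lam = np.empty(len(grid))
    for i in range(len(grid)):
        lam[i] = lam_levels[state]
        if rng.random() < q * dt:      # switch with probability ~ q*dt
            state = 1 - state
    return lam

def quenched_mean(lam):
    """E[N(T) | environment]: integral of lam(s) * P(service > T - s)."""
    surv = np.exp(-mu * (T - grid))
    return np.sum(lam * surv) * dt

quenched = np.array([quenched_mean(intensity_path()) for _ in range(1000)])
print("annealed mean E[N(T)]          :", quenched.mean())
print("spread of quenched means (std) :", quenched.std())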
Monday, 15 March, 15:15 Lukasz Szpruch (Edinburgh): Mean-Field Neural ODEs via Relaxed Optimal Control

We develop a framework for the analysis of deep neural networks and neural ODE models that are trained with stochastic gradient algorithms. We do so by identifying the connections between control theory, deep learning and the theory of statistical sampling. We derive Pontryagin's optimality principle and study the corresponding gradient flow in the form of Mean-Field Langevin dynamics (MFLD) for solving relaxed data-driven control problems. Subsequently, we study uniform-in-time propagation of chaos of the time-discretised MFLD. We derive explicit convergence rates in terms of the learning rate, the number of particles/model parameters and the number of iterations of the gradient algorithm. In addition, we study the error arising when using a finite training data set, and thus provide quantitative bounds on the generalisation error. Crucially, the obtained rates are dimension-independent. This is possible by exploiting the regularity of the model with respect to the measure over the parameter space.
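As a toy illustration of a time-discretised Mean-Field Langevin dynamics, the sketch below trains a one-hidden-layer "mean-field" network on a 1-d regression task by taking a gradient step on each particle (one particle per neuron) plus Gaussian noise. The architecture, data, step size and noise level are illustrative choices and not the setting analysed in the talk.

import numpy as np

rng = np.random.default_rng(1)

# Toy 1-d regression task: fit y = sin(2x) with a mean-field network
# f(x) = average_i a_i * tanh(w_i * x + b_i). Each particle theta_i =
# (a_i, w_i, b_i) is one neuron; training is a noisy (Langevin) gradient
# step per particle. All sizes and rates are illustrative choices.
N, steps, lr, sigma = 200, 3000, 0.1, 0.05
x = rng.uniform(-2, 2, size=128)
y = np.sin(2 * x)
theta = rng.normal(size=(N, 3))          # columns: a, w, b

for _ in range(steps):
    a, w, b = theta[:, :1], theta[:, 1:2], theta[:, 2:3]
    h = np.tanh(w * x + b)               # shape (N, batch)
    resid = (a * h).mean(axis=0) - y     # network output minus target
    # Per-particle gradients of the squared loss (up to a constant factor).
    grad_a = (resid * h).mean(axis=1, keepdims=True)
    gpre = resid * a * (1.0 - h ** 2)
    grad_w = (gpre * x).mean(axis=1, keepdims=True)
    grad_b = gpre.mean(axis=1, keepdims=True)
    grad = np.hstack([grad_a, grad_w, grad_b])
    # Langevin step: gradient descent plus Gaussian noise.
    theta += -lr * grad + sigma * np.sqrt(lr) * rng.normal(size=theta.shape)

a, w, b = theta[:, :1], theta[:, 1:2], theta[:, 2:3]
mse = np.mean(((a * np.tanh(w * x + b)).mean(axis=0) - y) ** 2)
print("final mean squared error:", mse)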
Monday, 1 March, 15:15 Adam Waterbury (UNC Chapel Hill): Approximating Quasi-Stationary Distributions with Interacting Reinforced Random Walks

We propose two numerical schemes for approximating quasi-stationary distributions (QSDs) of finite-state Markov chains with absorbing states. Both schemes are described in terms of certain interacting chains, in which the interaction is given in terms of the total time occupation measure of all particles in the system. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSDs, originating from the works of Fleming and Viot (1979) and Aldous, Flannery, and Palacios (1998), respectively. In this talk I will describe the two schemes, discuss their convergence properties, and present some exploratory numerical results comparing them to other QSD approximation methods.
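For intuition about occupation-measure-based QSD approximation, here is a minimal single-particle sketch in the spirit of the Aldous-Flannery-Palacios idea: upon absorption, restart from a state drawn from the occupation measure accumulated so far, and read off the QSD estimate from the normalised occupation measure. The three-state chain and the run length are arbitrary illustrative choices, and this is not the pair of interacting schemes proposed in the talk.

import numpy as np

rng = np.random.default_rng(2)

# Toy absorbing Markov chain on states {0, 1, 2}, with 2 absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.0, 0.0, 1.0]])
absorbing, transient = 2, [0, 1]

# Occupation-measure scheme: on absorption, restart from a state sampled
# from the occupation measure accumulated over the transient states.
occupation = np.ones(2)     # unit starting mass so the first restart is defined
state = 0
for _ in range(100_000):
    state = rng.choice(3, p=P[state])
    if state == absorbing:
        state = rng.choice(transient, p=occupation / occupation.sum())
    occupation[state] += 1

qsd_estimate = occupation / occupation.sum()

# Exact QSD: normalised left eigenvector of the transient block of P
# associated with its largest eigenvalue (Perron root).
Q = P[np.ix_(transient, transient)]
vals, vecs = np.linalg.eig(Q.T)
v = np.abs(vecs[:, np.argmax(vals.real)].real)
print("estimated QSD:", qsd_estimate)
print("exact QSD    :", v / v.sum())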
Monday, 22 February, 15:15 Clara Stegehuis (Twente): Optimal constrained and unconstrained subgraph structures

Subgraphs contain important information about network structures and their functions. We investigate the presence of subgraphs in a random graph model with infinite-variance degrees. We introduce an optimization problem that identifies the dominant structure of any given subgraph. The unique optimizer describes the degrees of the vertices that together span the most likely subgraph and allows us to count and characterize all subgraphs.
We then show that this optimization problem easily extends to other network structures, such as clustering, which expresses the probability that two neighbors of a vertex are connected. The optimization problem is able to find the behavior of network subgraphs in a wide class of random graph models.
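As a small numerical companion, the sketch below simulates a rank-1 (hidden-variable) random graph with power-law weights of infinite variance and counts triangles, the simplest subgraph of interest. The model, degree exponent and graph size are illustrative choices; the sketch does not implement the optimization problem described in the abstract.

import numpy as np

rng = np.random.default_rng(3)

# Rank-1 (hidden-variable) random graph with power-law weights: degree
# exponent tau in (2, 3), so degrees have infinite variance. The values
# of n and tau are illustrative choices, not taken from the talk.
n, tau = 1000, 2.5
w = rng.pareto(tau - 1, size=n) + 1.0                  # P(W > x) ~ x^{-(tau-1)}
p = np.minimum(1.0, np.outer(w, w) / (w.mean() * n))   # connection probabilities
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                            # symmetric, no self-loops

triangles = np.trace(A @ A @ A) / 6.0                  # each triangle counted 6 times
print("number of triangles:", int(triangles))
print("max degree:", int(A.sum(axis=0).max()))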