
On learning in mice and machines

Continuous population codes in natural and artificial neural networks

Time: Fri 2023-11-24 13.00

Location: E1, Lindstedtsvägen 3, Stockholm

Language: English

Subject area: Biotechnology, Applied and Computational Mathematics

Doctoral student: Emil Wärnberg, Computational Science and Technology (CST); Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden

Opponent: Professor Mark Humphries, University of Nottingham, School of Psychology, Nottingham, UK

Supervisor: Professor Konstantinos Meletis, Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; Associate professor Arvind Kumar, Computational Science and Technology (CST); Professor Gilad Silberberg, Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden


Thesis presented for a joint PhD degree between KI and KTH, with KI as the home university.


Abstract

Neural networks, whether artificial in a computer or natural in the brain, can represent information using either discrete symbols or continuous vector spaces. In this thesis, I explore how neural networks can represent continuous vector spaces, using both simulated neural networks and analysis of real neural population data recorded from mice. A special focus is on the networks of the basal ganglia circuit and on reinforcement learning, i.e., learning from rewards and punishments.
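
To make the contrast concrete: in a continuous population code, a value is carried by the joint activity of many broadly tuned neurons rather than by any single symbolic unit. The Python sketch below is an illustration of this general idea, not code from the thesis; the population size, tuning width, and decoder are all assumed for the example.

```python
import numpy as np

# A minimal continuous population code: N neurons with overlapping Gaussian
# tuning curves encode a scalar x, which can be read back out as a
# population-weighted average. (Illustrative parameter choices.)
N = 50
preferred = np.linspace(-1.0, 1.0, N)   # each neuron's preferred value
sigma = 0.15                             # tuning-curve width (assumed)

def encode(x):
    """Population response to a continuous value x."""
    return np.exp(-(x - preferred) ** 2 / (2 * sigma ** 2))

def decode(rates):
    """Estimate x from the population response (center of mass)."""
    return np.sum(rates * preferred) / np.sum(rates)

x = 0.3
rates = encode(x)
print(decode(rates))  # ~0.3: the continuous value lives in the population activity
```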

The thesis includes four scientific papers: two theoretical/computational (Papers I and IV) and two with analysis of real data (Papers II and III).

In Paper I, we explore methods for implementing continuous vector spaces in networks of spiking neurons using multidimensional attractors, and propose an explanation for why it is hard to escape the neural manifolds created by such attractors.
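
A minimal sketch of the attractor idea behind Paper I, written as a rate-based network for brevity (the paper itself concerns spiking neurons; the weights and normalization here are my assumptions). Cosine recurrent connectivity makes a bump of activity stable at every position on the ring, so the stable states form a one-dimensional continuous manifold that the dynamics do not leave:

```python
import numpy as np

# Rate-based ring attractor: neurons sit on a ring, recurrent weights depend
# only on the angular difference, and divisive normalization bounds activity.
N = 120
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = np.cos(theta[:, None] - theta[None, :]) / N  # translation-invariant weights

r = np.maximum(0, np.cos(theta - 1.0)) + 0.05 * np.random.rand(N)  # noisy bump at 1.0 rad
for _ in range(100):
    drive = np.maximum(0, W @ r)       # rectified recurrent drive
    r = 20.0 * drive / drive.sum()     # divisive normalization keeps total activity fixed

print(theta[np.argmax(r)])  # ~1.0: the bump persists where it started, anywhere on the ring
```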

In Paper II, we analyze experimental data from the dorsomedial striatum, collected using 1-photon calcium imaging of transgenic mice with cell-type-specific markers for the striatal direct, indirect and patch pathways, as the mice gathered rewards in a 2-choice task. In line with extensive previous results, our analysis revealed a number of neural signatures of reinforcement learning, but no apparent difference between the pathways.
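
For context, the sketch below shows a standard Q-learning model of a 2-choice task, the kind of trial-by-trial model whose internal variables (action values, reward prediction errors) are commonly regressed against neural activity in such analyses. The parameter values are illustrative, not fitted to the data in Paper II.

```python
import numpy as np

# Q-learning on a two-armed bandit: learn action values from stochastic rewards.
rng = np.random.default_rng(0)
p_reward = [0.8, 0.2]        # reward probability of each choice (assumed)
alpha, beta = 0.2, 3.0       # learning rate and softmax inverse temperature
Q = np.zeros(2)              # action values

for trial in range(200):
    probs = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax choice policy
    a = rng.choice(2, p=probs)
    r = float(rng.random() < p_reward[a])
    rpe = r - Q[a]            # reward prediction error, a classic dopamine-like signal
    Q[a] += alpha * rpe       # value update

print(Q)  # Q[0] approaches 0.8 while Q[1] stays low: the better option is learned
```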

In Paper III, we present a new software tool for tracking neurons across weeks of 1-photon calcium imaging, and employ it to follow patch-specific striatal projection neurons from the dorsomedial striatum across two weeks of daily recordings.
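
The core computational step in cross-session tracking is assigning cells detected in one session to cells detected in another. Below is a minimal sketch of that step, matching cell centroids by spatial proximity with a globally optimal assignment; the tool in Paper III is more elaborate, and the function name and distance threshold here are my assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(centroids_a, centroids_b, max_dist=10.0):
    """Pair session-A cells with session-B cells by centroid proximity."""
    # Pairwise Euclidean distances between all centroids
    d = np.linalg.norm(centroids_a[:, None, :] - centroids_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(d)   # assignment minimizing total distance
    keep = d[rows, cols] < max_dist         # reject implausibly distant pairs
    return list(zip(rows[keep].tolist(), cols[keep].tolist()))

# Toy example: three cells, slightly shifted between sessions, plus one new cell
a = np.array([[10.0, 10.0], [40.0, 12.0], [25.0, 30.0]])
b = np.array([[11.0, 9.0], [26.0, 31.0], [80.0, 80.0], [39.0, 13.0]])
print(match_cells(a, b))  # [(0, 0), (1, 3), (2, 1)]
```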

In Paper IV, we propose a model for how the nigrostriatal dopaminergic projection could, in a biologically plausible way, convey a vector-valued error gradient to the dorsal striatum, as required for backpropagation.
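
The distinction matters because a gradient is inherently vector-valued: even for a linear readout, the weight update is an outer product of an error vector with the input, which a single scalar reward signal cannot specify. The toy sketch below illustrates only this general point; it is not the model of Paper IV.

```python
import numpy as np

# For a linear readout y = W x and squared error E = ||y* - y||^2 / 2,
# the gradient step is dW = lr * (y* - y) x^T: it needs the full error vector.
rng = np.random.default_rng(1)
W_true = rng.normal(size=(3, 5))      # target mapping to be learned
W = np.zeros((3, 5))
lr = 0.1

for _ in range(500):
    x = rng.normal(size=5)
    y, y_target = W @ x, W_true @ x
    delta = y_target - y              # vector-valued error signal
    W += lr * np.outer(delta, x)      # gradient step uses every component of delta

print(np.abs(W - W_true).max())       # ~0: the vector error suffices to learn W
```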

Based on the results of the papers and a review of the existing literature, I argue that while the basal ganglia do make up a circuit for reinforcement learning, as previously thought, this circuit represents reinforcement learning states, actions and policies using a continuous population code rather than discrete symbols.

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-338936