25.02.2026
Daniel Durstewitz (Central Institute of Mental Health & Interdisciplinary Center for Scientific Computing, Heidelberg University):
Dynamical Systems Foundation Models for Neuroscience
Abstract:
Foundation models, such as large language models, are pre-trained on large and diverse corpora of data and can then be applied across many downstream settings. They accomplish feats like in-context learning (extrapolating just from examples provided in the prompt, without any parameter fine-tuning) and zero-shot generalization to novel domains and tasks.
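To make the notion of in-context learning concrete, here is a minimal, self-contained Python sketch (illustrative only, not from the talk): a frozen "model" whose single forward pass solves a ridge regression over the examples supplied in its context, so all adaptation to a new task comes from the prompt rather than from any weight update. The function forward, the regularizer reg, and the toy linear task are assumptions made purely for illustration.

import numpy as np

def forward(context_x, context_y, query_x, reg=1e-6):
    """One frozen forward pass: fit a ridge regression to the in-context
    examples and apply it to the query. Nothing in `forward` is updated."""
    # Closed-form ridge solution computed from the context alone.
    w = np.linalg.solve(context_x.T @ context_x + reg * np.eye(context_x.shape[1]),
                        context_x.T @ context_y)
    return query_x @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))            # 16 demonstration inputs
y = X @ np.array([2.0, -1.0, 0.5])      # a "novel task": a fixed linear rule
print(forward(X, y, rng.normal(size=(1, 3))))  # prediction from examples alone

This in-context regression pattern has been used in the literature as a simplified model of what transformer in-context learning can implement.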
Their promise in neuroscience lies in i) facilitating the discovery of common computational motifs and principles across diverse tasks and datasets, and ii) providing zero-shot predictions for novel experimental conditions. For these purposes, and more generally to serve as neuroscientific discovery tools that provide insight into computational mechanisms, they need to be interpretable and, ideally, mathematically tractable.
In my talk I will present two types of foundation model architectures based on dynamical systems principles that fulfill these criteria, and that can reconstruct unseen dynamical systems in a few-shot or zero-shot manner, purely from “in-context data”.
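As a toy illustration of this claim (again, a stand-in, not the architectures of the talk): even a parameter-free analog forecaster, handed only a context trajectory from a system it was never trained on, can predict that system's continuation. The Lorenz-63 simulator and the nearest-neighbor predictor below are assumptions chosen for the sketch.

import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz-63 system with simple Euler steps."""
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        x = x + dt * dx
        traj[t] = x
    return traj

def analog_forecast(context, horizon):
    """Forecast by repeatedly jumping to the successor of the nearest
    context state: a parameter-free 'in-context' predictor."""
    state = context[-1]
    preds = []
    for _ in range(horizon):
        i = np.argmin(np.linalg.norm(context[:-1] - state, axis=1))
        state = context[i + 1]          # successor of the nearest neighbor
        preds.append(state)
    return np.array(preds)

traj = lorenz_trajectory(5000)
context, future = traj[:4500], traj[4500:4520]
preds = analog_forecast(context, horizon=20)
print("mean abs error over 20 steps:", np.mean(np.abs(preds - future)))

The dynamical systems foundation models of the talk replace this nonparametric lookup with learned, interpretable architectures, but the interface is the same: a context trajectory goes in, a reconstruction of the unseen system comes out.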
