Debaters and Position Statements
Professor Christopher Summerfield (University of Oxford / UK AI Safety Institute):
Biological systems are “scruffy” - they learn via a mixture of generative modelling, reward learning, and social feedback. Different natural behaviours rely on distinct mixtures of these objectives. Learning how the world is structured requires a rich generative model. Acquiring sensorimotor behaviours requires a form of shortcut learning that we call “reinforcement”. Learning the rules that govern interactions with others (including in language) requires rich social feedback. Whilst these training signals are all ultimately “information”, in practice they involve distinct processing pathways and modular brain structures.
Professor Karl Friston (University College London):
All biotic self-organisation can be neatly described as self-evidencing; namely, acting in a way to maximise the evidence (a.k.a., marginal likelihood or mutual information) for generative models of the lived world. The scale-invariant applicability of this principle can explain everything from sentience to sexual reproduction.
Moderator: Professor Heidi Johansen-Berg (University of Oxford)
Format: Start time, 3:30pm. 10 mins each for opening, 20 mins moderated discussion, 10 mins audience Q&A, and close with an audience vote.
Christopher Summerfield
Professor of Cognitive Neuroscience, University of Oxford / UK AI Safety Institute
Professor Christopher Summerfield is a Professor of Cognitive Neuroscience at the University of Oxford and Research Director at the UK AI Safety Institute (AISI). His research, which bridges academia and industry through his work with Google DeepMind, sits at the interface of neuroscience and AI. He explores how the learning rules of modern artificial intelligence offer quantitative models for human cognition. This is the central theme of his recent book, Natural General Intelligence: How understanding the brain can help us build AI (Oxford University Press, 2023). In it, he argues that human intelligence relies on a "scruffy" but highly effective toolbox of different learning mechanisms—such as generative modelling, reward learning, and social feedback. At the AISI, he applies these insights to biological and machine information processing to tackle the challenges of AI safety, algorithmic alignment, and the societal integration of frontier models.
Heidi Johansen-Berg
Pro-Vice-Chancellor (Strategic Initiatives) and Associate Head (Research and Innovation), University of Oxford
Professor Heidi Johansen-Berg is Professor of Cognitive Neuroscience and Pro-Vice-Chancellor for Strategic Initiatives at the University of Oxford. She is also a Fellow of the Royal Society and Chair of the REF2029 Unit of Assessment 4 (Psychology, Psychiatry and Neuroscience). Her pioneering research explores how the human brain changes its structure and function in response to learning, experience, and injury. Using advanced magnetic resonance imaging (MRI), her work sheds light on plasticity processes in the adult brain, demonstrating how acquiring new skills rewires biological networks. Her team translates these fundamental discoveries into clinical practice, exploring how targeted interventions such as non-invasive brain stimulation can enhance motor recovery after a stroke. Her broader research investigates how lifestyle factors, such as exercise, influence brain health and resilience across the lifespan.
Karl Friston
Professor of Imaging Neuroscience/Wellcome Principal Research Fellow, University College London
Professor Karl Friston is a theoretical neuroscientist and an authority on brain imaging at University College London (UCL). Widely recognised as one of the most highly cited neuroscientists in the world, he invented Statistical Parametric Mapping (SPM), the international standard for analysing fMRI and PET imaging data. Professor Friston is well known for his groundbreaking formulation of the Free Energy Principle and Active Inference. This unifying theory of brain function proposes that all biological systems are driven by a single imperative - to minimise "surprise" and maintain homeostasis by generating top-down predictions about the world. He proposes that this provides a mathematical framework for understanding sentience, learning, and biological self-organisation. In the context of artificial intelligence, his theoretical position offers a biologically plausible alternative to standard machine learning, advocating for "embodied" systems driven by survival and uncertainty-resolution rather than arbitrary reward functions.