Stagewise Reinforcement Learning and the Geometry of the Regret Landscape

Authors

Chris Elliott
Timaeus
Einar Urdshals
Timaeus
David Quarel
Timaeus
Matthew Farrugia-Roberts
University of Oxford
Daniel Murfet
Timaeus

Publication Details

Published: January 12, 2026

Abstract

Singular learning theory characterizes Bayesian learning as an evolving tradeoff between accuracy and complexity, with transitions between qualitatively different solutions as sample size increases. We extend this theory to deep reinforcement learning, proving that the concentration of the generalized posterior over policies is governed by the local learning coefficient (LLC), an invariant of the geometry of the regret function. This theory predicts that Bayesian phase transitions in reinforcement learning should proceed from simple policies with high regret to complex policies with low regret. We verify this prediction empirically in a gridworld environment exhibiting stagewise policy development: phase transitions over SGD training manifest as "opposing staircases" where regret decreases sharply while the LLC increases. Notably, the LLC detects phase transitions even when estimated on a subset of states where the policies appear identical in terms of regret, suggesting it captures changes in the underlying algorithm rather than just performance.
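As a rough illustration of the headline claim (a sketch of the standard singular learning theory setup, not notation taken from the paper), the generalized posterior over policy parameters can be written as a Gibbs distribution in the empirical regret, with its free energy asymptotics controlled by the LLC. Here \(\hat{R}_n\), \(\beta\), \(\varphi\), and \(w^*\) are assumed notation for the empirical regret, inverse temperature, prior, and a local minimizer:

\[
\pi_n(w) \;\propto\; \varphi(w)\,\exp\!\bigl(-n\beta\,\hat{R}_n(w)\bigr),
\qquad
F_n \;=\; -\log \int \varphi(w)\,\exp\!\bigl(-n\beta\,\hat{R}_n(w)\bigr)\,dw
\;\approx\; n\beta\,R(w^*) \;+\; \lambda(w^*)\log n,
\]

where \(R\) is the population regret and \(\lambda(w^*)\) is the local learning coefficient at \(w^*\), up to lower-order terms. A smaller \(\lambda\) lets the posterior concentrate on the corresponding policy at smaller sample sizes, which is why the predicted transitions proceed from simple, high-regret policies to complex, low-regret ones as \(n\) grows.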

Cite as

@article{elliott2026stagewise,
  title = {Stagewise Reinforcement Learning and the Geometry of the Regret Landscape},
  author = {Chris Elliott and Einar Urdshals and David Quarel and Matthew Farrugia-Roberts and Daniel Murfet},
  year = {2026},
  abstract = {Singular learning theory characterizes Bayesian learning as an evolving tradeoff between accuracy and complexity, with transitions between qualitatively different solutions as sample size increases. We extend this theory to deep reinforcement learning, proving that the concentration of the generalized posterior over policies is governed by the local learning coefficient (LLC), an invariant of the geometry of the regret function. This theory predicts that Bayesian phase transitions in reinforcement learning should proceed from simple policies with high regret to complex policies with low regret. We verify this prediction empirically in a gridworld environment exhibiting stagewise policy development: phase transitions over SGD training manifest as "opposing staircases" where regret decreases sharply while the LLC increases. Notably, the LLC detects phase transitions even when estimated on a subset of states where the policies appear identical in terms of regret, suggesting it captures changes in the underlying algorithm rather than just performance.},
  eprint = {2601.07524},
  archivePrefix = {arXiv},
  url = {https://arxiv.org/abs/2601.07524}
}