From Theory to Practice

We use singular learning theory to study how training data shapes model behavior, and we apply this understanding to develop new tools for AI safety.

Bayesian Influence Functions for Hessian-Free Data Attribution

By Kreer et al.

Classical influence functions face significant challenges when applied to deep neural networks, primarily due to non-invertible Hessians and high-dimensional parameter spaces.
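
A minimal sketch of the Hessian-free idea, under our own assumptions rather than the paper's reference implementation: estimate influence as the (negative) covariance of per-example losses across weight samples drawn from a local posterior, e.g. by SGMCMC. No Hessian inverse appears.

```python
import numpy as np

def bayesian_influence(train_losses, query_losses):
    """Hedged sketch: influence of training examples on query examples,
    estimated as a loss covariance over posterior weight samples.

    train_losses: (num_samples, num_train) array of losses l(z_m, w_k)
    query_losses: (num_samples, num_query) array of losses l(z_q, w_k)
    Sign and temperature conventions here are illustrative assumptions.
    """
    t = train_losses - train_losses.mean(axis=0, keepdims=True)
    q = query_losses - query_losses.mean(axis=0, keepdims=True)
    # Training examples whose loss co-fluctuates with a query's loss
    # under posterior noise receive large (negative-signed) influence.
    return -(t.T @ q) / (t.shape[0] - 1)
```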

The Loss Kernel: A Geometric Probe for Deep Learning Interpretability

By Adam et al.

We introduce the loss kernel, an interpretability method for measuring similarity between data points according to a trained neural network.
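
Schematically, a kernel of this kind can be written as a covariance of per-example losses under weights drawn from a local posterior centered at the trained parameters w*; the display below is our gloss on that idea, not a quotation of the paper's definition:

```latex
K(z_i, z_j) \;=\; \operatorname{Cov}_{w \sim p(w \mid w^*)}\!\bigl(\ell(z_i, w),\, \ell(z_j, w)\bigr)
```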

Embryology of a Language Model

By Wang et al.

Understanding how language models develop their internal computational structure is a central problem in the science of deep learning.

From Global to Local: A Scalable Benchmark for Local Posterior Sampling

By Hitchcock and Hoogland

Degeneracy is an inherent feature of the loss landscape of neural networks, but it is not well understood how stochastic gradient MCMC (SGMCMC) algorithms interact with this degeneracy.
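
For concreteness, here is one localized SGLD update of the kind such a benchmark might evaluate; the localization at the trained weights w*, the inverse temperature beta, and the step size are illustrative assumptions:

```python
import numpy as np

def sgld_step(w, w_star, grad_loss, n, beta, gamma, eps, rng):
    """One localized SGLD step (sketch). Targets the tempered local
    posterior p(w) ~ exp(-n*beta*L_n(w) - (gamma/2)*||w - w_star||^2),
    whose behavior near degenerate minima is what the benchmark probes.

    grad_loss: callable returning a minibatch gradient estimate of L_n.
    """
    drift = n * beta * grad_loss(w) + gamma * (w - w_star)
    noise = rng.normal(scale=np.sqrt(eps), size=w.shape)
    return w - 0.5 * eps * drift + noise
```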

Modes of Sequence Models and Learning Coefficients

By Chen and Murfet

We develop a geometric account of sequence modelling that links patterns in the data to measurable properties of the loss landscape in transformer networks.

Structural Inference: Interpreting Small Language Models with Susceptibilities

By Baker et al.

We develop a linear response framework for interpretability that treats a neural network as a Bayesian statistical mechanical system.
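
One way to read "linear response" here is via the standard identity for Gibbs measures: perturbing the loss and differentiating a posterior expectation yields a covariance. The display below is this generic identity in our notation (phi an observable, g a perturbation direction), not the paper's specific construction:

```latex
\left.\frac{\partial}{\partial \varepsilon}\,\mathbb{E}_{w \sim p_\varepsilon}[\phi(w)]\right|_{\varepsilon=0}
= -\,n\beta\,\operatorname{Cov}_{w \sim p_0}\!\bigl(\phi(w),\, g(w)\bigr),
\qquad
p_\varepsilon(w) \propto \exp\!\bigl(-n\beta\,[L_n(w) + \varepsilon\, g(w)]\bigr)
```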

Programs as Singularities

By Murfet and Troiani

We develop a correspondence between the structure of Turing machines and the structure of singularities of real analytic functions, based on connecting the Ehrhard-Regnier derivative from linear logic with the role of geometry in Watanabe's singular learning theory.

You Are What You Eat – AI Alignment Requires Understanding How Data Shapes Structure and Generalisation

By Lehalleur et al.

In this position paper, we argue that understanding the relation between structure in the data distribution and structure in trained models is central to AI alignment.

Structure Development in List-Sorting Transformers

By Urdshals and Urdshals

ICML SMUNN Workshop

We study how a one-layer attention-only transformer develops relevant structures while learning to sort lists of numbers.

Dynamics of Transient Structure in In-Context Linear Regression Transformers

By Carroll et al.

Modern deep neural networks display striking examples of rich internal computational structure.

Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient

By Wang et al.

ICLR Spotlight

We introduce refined variants of the Local Learning Coefficient (LLC), a measure of model complexity grounded in singular learning theory, to study the development of internal structure in transformer language models during training.
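
As a hedged sketch of how an LLC-style estimate is formed in practice: sample weights near the trained network with a localized SGMCMC chain, then compare the average sampled loss to the loss at the trained weights. Restricting the chain to a weight subset (say, one attention head) gives a refined, per-component variant; the scaling below follows the standard estimator form, with details treated as assumptions:

```python
import numpy as np

def llc_estimate(sampled_losses, loss_at_w_star, n, beta):
    """LLC-style estimate from a localized posterior chain (sketch).

    sampled_losses: full-data losses L_n(w_k) along the chain
    loss_at_w_star: L_n(w*) at the trained weights
    Returns n * beta * (mean sampled loss - loss at w*). A refined
    (e.g. per-head) variant applies the same formula with the chain
    updating only the chosen weight subset.
    """
    return n * beta * (float(np.mean(sampled_losses)) - loss_at_w_star)
```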

Loss Landscape Degeneracy and Stagewise Development of Transformers

By Hoogland et al.

TMLR; Best Paper at the 2024 ICML HiLD Workshop

We show that in-context learning emerges in transformers in discrete developmental stages when they are trained on either language modeling or linear regression tasks.

The Local Learning Coefficient: A Singularity-Aware Complexity Measure

By Lau et al.

AISTATS 2025

Deep neural networks (DNNs) are singular statistical models that exhibit complex degeneracies.

Join Timaeus

Join our growing team of dedicated researchers applying cutting-edge mathematics to prevent AI risks that could affect billions of people.
