Latest News

Stay updated with our latest research publications, blog posts, and community events.

Event

Focus Period: Mathematical Science of AI Safety

Nov 03, 2025
Paper

Bayesian Influence Functions for Hessian-Free Data Attribution

Kreer et al.
Sep 30, 2025
Paper

The Loss Kernel: A Geometric Probe for Deep Learning Interpretability

Adam et al.
Sep 30, 2025
Event

ODYSSEY 2025

Aug 25, 2025
Blog

Research Engineer @ Timaeus

Hoogland and Wingerden
Aug 12, 2025
Paper

Embryology of a Language Model

Wang et al.
Aug 01, 2025
Paper

From Global to Local: A Scalable Benchmark for Local Posterior Sampling

Hitchcock and Hoogland
Jul 29, 2025
Video

Singular Learning Theory & AI Safety | SLT Seminar

In the SLT seminar, Jesse Hoogland from Timaeus talks to us about his research agenda applying singular learning theory to AI safety.

Jul 28, 2025
Video

Singular Learning Theory and AI Safety | MATS 8.0

MATS 8.0 seminar by Jesse Hoogland. Singular learning theory (SLT) suggests that the geometry of the loss landscape is key to developing a better scientific understanding of deep neural networks, along with new practical tools for engineering safer systems.

Jul 09, 2025
Video

Programs as Singularities | SLT Seminar

Daniel Murfet from Timaeus tells us how to think about Turing machines as critical points of an analytic function, from a recent paper with Will Troiani.

Jul 05, 2025
Video

Embryology of AI: How Training Data Shapes AI Development | Cognitive Revolution

Jesse Hoogland and Daniel Murfet, founders of Timaeus, introduce their mathematically rigorous approach to AI safety through 'developmental interpretability' based on Singular Learning Theory. They explain how neural network loss landscapes are actually complex, jagged surfaces full of 'singularities' where models can change internally without affecting external behavior—potentially masking dangerous misalignment.

Jun 19, 2025
Video

Studying Small Language Models with Susceptibilities | SLT Seminar

In this talk, Garrett Baker from Timaeus presents recent work, 'Studying Small Language Models with Susceptibilities'. This paper uses singular learning theory to study the response of language models to perturbations in the data distribution.

May 28, 2025
Blog

Director of Operations @ Timaeus

Hoogland and Wingerden
May 22, 2025
Paper

Modes of Sequence Models and Learning Coefficients

Chen and Murfet
Apr 25, 2025
Paper

Structural Inference: Interpreting Small Language Models with Susceptibilities

Baker et al.
Apr 25, 2025
Paper

Programs as Singularities

Murfet and Troiani
Apr 10, 2025
Paper

You Are What You Eat – AI Alignment Requires Understanding How Data Shapes Structure and Generalisation

Lehalleur et al.
Feb 08, 2025
Paper

Structure Development in List-Sorting Transformers

Urdshals and Urdshals
Jan 30, 2025
Paper

Dynamics of Transient Structure in In-Context Linear Regression Transformers

Carroll et al.
Jan 29, 2025
Blog

Open Roles @ Timaeus

Hoogland and Wingerden
Jan 17, 2025
Video

Jesse Hoogland on Singular Learning Theory | AXRP

In this video, Jesse Hoogland discusses Singular Learning Theory (SLT) and introduces the refined Local Learning Coefficient (LLC). The discussion explores how the refined LLC helps uncover new circuits in language models.

Nov 27, 2024
Event

The Australian AI Safety Forum 2024

Nov 07, 2024
Paper

Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient

Wang et al.
Oct 04, 2024
Blog

Singular learning theory: exercises

Furman
Aug 30, 2024
Event

ILIAD 2024

Aug 28, 2024
Blog

So you want to work on technical AI safety

Wang
Aug 24, 2024
Video

Singular Learning Theory: Overview And Recent Evidence | Plectics

Jesse Hoogland introduces Singular Learning Theory (SLT), which links data structure, neural network architecture, loss landscape geometry, and learning dynamics. The talk surveys recent evidence supporting this framework and explores its implications for AI safety and interpretability.

Jun 10, 2024
Video

Jesse Hoogland - Singular Learning Theory

Singular Learning Theory (SLT) is a novel mathematical framework that expands and improves upon traditional statistical learning theory using techniques from algebraic geometry, Bayesian statistics, and statistical physics. It holds great promise for the mathematical foundations of modern machine learning.

May 12, 2024
Video

Singular Learning Theory with Daniel Murfet | AXRP

This video explores Singular Learning Theory (SLT), a Bayesian statistics framework that helps explain deep learning models, their learning dynamics, and generalization. The discussion with Daniel Murfet covers phase transitions, learning coefficients, AI alignment, and open problems in SLT’s application to AI safety and capabilities.

May 07, 2024
Blog

Stagewise Development in Neural Networks

Hoogland et al.
Mar 20, 2024
Blog

Simple versus Short: Higher-order degeneracy and error-correction

Murfet
Mar 11, 2024
Blog

Timaeus's First Four Months

Hoogland et al.
Feb 28, 2024
Paper

Loss Landscape Degeneracy and Stagewise Development of Transformers

Hoogland et al.
Feb 04, 2024
Blog

Generalization, from thermodynamics to statistical physics

Hoogland
Nov 30, 2023
Blog

Learning coefficient estimation: the details

Furman
Nov 15, 2023
Event

The 2023 Oxford Conference

Nov 05, 2023
Blog

Announcing Timaeus

Hoogland et al.
Oct 22, 2023
Blog

You're Measuring Model Complexity Wrong

Hoogland and Wingerden
Oct 11, 2023
Event

The 2023 Melbourne Hackathon

Oct 07, 2023
Event

The 2023 Amsterdam Retreat

Sep 18, 2023
Paper

The Local Learning Coefficient: A Singularity-Aware Complexity Measure

Lau et al.
Aug 23, 2023
Video

Jesse Hoogland–AI Risk, Interpretability | UCL

Jesse Hoogland is a research assistant at David Krueger's AI Safety lab in Cambridge. He now focuses on Singular Learning Theory and Developmental Interpretability. Previously, he co-founded a health-tech startup for automating bariatric surgery care.

Jul 06, 2023
Blog

DSLT 4. Phase Transitions in Neural Networks

Carroll
Jun 24, 2023
Blog

DSLT 3. Neural Networks are Singular

Carroll
Jun 20, 2023
Event

The 2023 Berkeley Conference

Jun 19, 2023
Event

The Primer

Jun 19, 2023
Blog

DSLT 2. Why Neural Networks obey Occam's Razor

Carroll
Jun 18, 2023
Blog

DSLT 1. The RLCT Measures the Effective Dimension of Neural Networks

Carroll
Jun 16, 2023
Blog

DSLT 0. Distilling Singular Learning Theory

Carroll
Jun 15, 2023
Video

The Physics of Intelligence: from Classical to Singular Learning Theory

Learn how Singular Learning Theory (SLT) helps us understand neural networks by analyzing their loss landscapes. This talk explores model interpretation and phase transitions during training. Presented at Imperial College London, May 22, 2023.

May 22, 2023
Blog

Approximation is expensive, but the lunch is cheap

Hoogland
Apr 19, 2023
Blog

Empirical risk minimization is fundamentally confused

Hoogland
Mar 22, 2023
Blog

The shallow reality of 'deep learning theory'

Hoogland
Feb 22, 2023
Blog

Gradient surfing: the hidden role of regularization

Hoogland
Feb 06, 2023
Blog

Interview Daniel Murfet on Universal Phenomena in Learning Machines

Oldenziel
Feb 06, 2023
Blog

Spooky action at a distance in the loss landscape

Hoogland
Jan 28, 2023
Blog

Neural networks generalize because of this one weird trick

Hoogland
Jan 18, 2023

Research Videos

Watch our researchers discuss singular learning theory, AI safety, and interpretability in talks, interviews, and educational content.

Singular Learning Theory & AI Safety | SLT Seminar

July 28, 2025

In the SLT seminar, Jesse Hoogland from Timaeus talks to us about his research agenda applying singular learning theory to AI safety.

Singular Learning Theory and AI Safety | MATS 8.0

July 9, 2025

MATS 8.0 seminar by Jesse Hoogland. Singular learning theory (SLT) suggests that the geometry of the loss landscape is key to developing a better scientific understanding of deep neural networks, along with new practical tools for engineering safer systems.

Programs as Singularities | SLT Seminar

July 5, 2025

Daniel Murfet from Timaeus tells us how to think about Turing machines as critical points of an analytic function, from a recent paper with Will Troiani.

Embryology of AI: How Training Data Shapes AI Development | Cognitive Revolution

June 19, 2025

Jesse Hoogland and Daniel Murfet, founders of Timaeus, introduce their mathematically rigorous approach to AI safety through 'developmental interpretability' based on Singular Learning Theory. They explain how neural network loss landscapes are actually complex, jagged surfaces full of 'singularities' where models can change internally without affecting external behavior—potentially masking dangerous misalignment.

Studying Small Language Models with Susceptibilities | SLT Seminar

May 28, 2025

In this talk, Garrett Baker from Timaeus presents recent work, 'Studying Small Language Models with Susceptibilities'. This paper uses singular learning theory to study the response of language models to perturbations in the data distribution.

Jesse Hoogland on Singular Learning Theory | AXRP

November 27, 2024

In this video, Jesse Hoogland discusses Singular Learning Theory (SLT) and introduces the refined Local Learning Coefficient (LLC). The discussion explores how the refined LLC helps uncover new circuits in language models.

Singular Learning Theory: Overview And Recent Evidence | Plectics

June 10, 2024

Jesse Hoogland introduces Singular Learning Theory (SLT), which links data structure, neural network architecture, loss landscape geometry, and learning dynamics. The talk surveys recent evidence supporting this framework and explores its implications for AI safety and interpretability.

Jesse Hoogland - Singular Learning Theory

May 12, 2024

Singular Learning Theory (SLT) is a novel mathematical framework that expands and improves upon traditional statistical learning theory using techniques from algebraic geometry, Bayesian statistics, and statistical physics. It holds great promise for the mathematical foundations of modern machine learning.

Singular Learning Theory with Daniel Murfet | AXRP

May 7, 2024

This video explores Singular Learning Theory (SLT), a Bayesian statistics framework that helps explain deep learning models, their learning dynamics, and generalization. The discussion with Daniel Murfet covers phase transitions, learning coefficients, AI alignment, and open problems in SLT’s application to AI safety and capabilities.

Jesse Hoogland–AI Risk, Interpretability | UCL

July 6, 2023

Jesse Hoogland is a research assistant at David Krueger's AI Safety lab in Cambridge. He now focuses on Singular Learning Theory and Developmental Interpretability. Previously, he co-founded a health-tech startup for automating bariatric surgery care.

The Physics of Intelligence: from Classical to Singular Learning Theory

May 22, 2023

Learn how Singular Learning Theory (SLT) helps us understand neural networks by analyzing their loss landscapes. This talk explores model interpretation and phase transitions during training. Presented at Imperial College London, May 22, 2023.