
Singular Learning Theory & AI Safety | SLT Seminar
Watch our researchers discuss singular learning theory, AI safety, and interpretability in talks, interviews, and educational content.

September 17, 2025
Jesse Hoogland presented "Singular Learning Theory and AI Safety" at the FAR.AI Labs Seminar.

July 28, 2025
In the SLT seminar, Jesse Hoogland from Timaeus talks to us about his research agenda applying singular learning theory to AI safety.

July 9, 2025
MATS 8.0 seminar by Jesse Hoogland. Singular learning theory (SLT) suggests that the geometry of the loss landscape is key to developing a better scientific understanding of deep neural networks, along with new practical tools for engineering safer systems.

July 5, 2025
Daniel Murfet from Timaeus tells us how to think about Turing machines as critical points of an analytic function, drawing on a recent paper with Will Troiani.

June 19, 2025
Jesse Hoogland and Daniel Murfet, founders of Timaeus, introduce their mathematically rigorous approach to AI safety through 'developmental interpretability' based on Singular Learning Theory. They explain how neural network loss landscapes are actually complex, jagged surfaces full of 'singularities' where models can change internally without affecting external behavior—potentially masking dangerous misalignment.
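To make the "change internally without affecting external behavior" point concrete, here is a minimal toy sketch (my own illustration, not taken from the talk): a two-parameter model whose set of perfect-fit parameters is a pair of crossing lines, singular at the origin.

```python
import numpy as np

# Toy model f_w(x) = a * b * x with parameters w = (a, b).
# If the data come from the zero function, the zero-loss set is
# {a = 0} ∪ {b = 0}: two lines crossing at the singular point (0, 0).
# Very different parameter settings on these lines produce exactly
# the same input-output behavior.

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = np.zeros_like(x)  # "true" model: output is always zero

def loss(a, b):
    return np.mean((a * b * x - y) ** 2)

print(loss(0.0, 0.0))   # 0.0 -- the singular point
print(loss(5.0, 0.0))   # 0.0 -- same behavior, very different weights
print(loss(0.0, -3.0))  # 0.0 -- same again
print(loss(1.0, 1.0))   # > 0 -- leaving the zero-loss set changes behavior
```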

May 28, 2025
In this talk, Garrett Baker from Timaeus presented some recent work: 'Studying Small Language Models with Susceptibilities'. This paper uses singular learning theory to study the response of language models to perturbations in the data distribution.
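As a rough illustration of what a susceptibility measures (a finite-difference sketch under my own toy assumptions, not the paper's actual SLT-based estimator): the response of some observable of the trained model to a small re-weighting of part of the training distribution.

```python
import numpy as np

# Susceptibility as a derivative: up-weight a slice of the training data
# by eps, re-fit the model, and measure how an observable of the fitted
# model changes. Estimated here by central finite differences.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

probe = rng.normal(size=(50, 3))  # held-out probe inputs
subset = X[:, 0] > 0              # the slice of the data we perturb

def fit(weights):
    # Weighted least squares: argmin_w sum_i weights_i * (x_i . w - y_i)^2
    A = X.T @ (weights[:, None] * X)
    b = X.T @ (weights * y)
    return np.linalg.solve(A, b)

def observable(eps):
    weights = np.where(subset, 1.0 + eps, 1.0)  # up-weight the slice by eps
    w = fit(weights)
    return float(np.mean(probe @ w))            # any function of model behavior

eps = 1e-3
susceptibility = (observable(eps) - observable(-eps)) / (2 * eps)
print(susceptibility)
```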

November 27, 2024
In this video, Jesse Hoogland discusses Singular Learning Theory (SLT) and introduces the refined Local Learning Coefficient (LLC). He explores how this refined LLC helps uncover new circuits in language models.
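For orientation, here is a minimal toy sketch of how an LLC estimate is typically obtained via SGLD sampling around a trained parameter (my own illustration with illustrative hyperparameters; real estimates need careful tuning and are implemented in tools such as Timaeus's devinterp library):

```python
import numpy as np

# Sketch of an SGLD-based local learning coefficient (LLC) estimate,
# in the spirit of  hat_lambda(w*) = n * beta * ( E_w[L_n(w)] - L_n(w*) ),
# where the expectation is over samples from a tempered posterior
# localized around the trained parameter w*.

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)  # data from the zero function, so w* = (0, 0) is a minimum

def L(w):                       # empirical loss L_n(w) for f_w(x) = a*b*x
    a, b = w
    return np.mean((a * b * x - y) ** 2)

def grad_L(w):
    a, b = w
    r = a * b * x - y
    return np.array([np.mean(2 * r * b * x), np.mean(2 * r * a * x)])

w_star = np.array([0.0, 0.0])
beta = 1.0 / np.log(n)          # inverse temperature used by the estimator
gamma = 1.0                     # strength of the localizing pull toward w*
step = 1e-3                     # SGLD step size

w, losses = w_star.copy(), []
for _ in range(10_000):
    drift = -(step / 2) * (n * beta * grad_L(w) + gamma * (w - w_star))
    w = w + drift + np.sqrt(step) * rng.normal(size=2)
    losses.append(L(w))

llc_hat = n * beta * (np.mean(losses[2_000:]) - L(w_star))
print(llc_hat)  # lower values indicate a flatter, more degenerate solution
```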

June 10, 2024
Jesse Hoogland introduces Singular Learning Theory (SLT), which links data structure, neural network architecture, loss landscape geometry, and learning dynamics. The talk surveys recent evidence supporting this framework and explores its implications for AI safety and interpretability.

May 12, 2024
Singular Learning Theory (SLT) is a novel mathematical framework that expands and improves upon traditional statistical learning theory using techniques from algebraic geometry, Bayesian statistics, and statistical physics. It holds great promise for the mathematical foundations of modern machine learning.
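For readers new to the framework, the headline result behind these claims (standard in Watanabe's work, stated here only for context) is the asymptotic expansion of the Bayesian free energy, in which the learning coefficient $\lambda$ plays the role that the parameter count plays in regular, BIC-style asymptotics:

$$
F_n = n L_n(w_0) + \lambda \log n + O_p(\log \log n),
$$

where $L_n(w_0)$ is the empirical loss at the optimal parameter $w_0$. For regular models $\lambda = d/2$; for singular models such as neural networks $\lambda \le d/2$ and is often much smaller, so among fits with equal loss the Bayesian posterior prefers more degenerate solutions.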

May 7, 2024
This video explores Singular Learning Theory (SLT), a Bayesian statistics framework that helps explain deep learning models, their learning dynamics, and generalization. The discussion with Daniel Murfet covers phase transitions, learning coefficients, AI alignment, and open problems in SLT’s application to AI safety and capabilities.

July 6, 2023
Jesse Hoogland is a research assistant at David Krueger's AI Safety lab in Cambridge. He now focuses on Singular Learning Theory and Developmental Interpretability. Previously, he co-founded a health-tech startup for automating bariatric surgery care.

May 22, 2023
Learn how Singular Learning Theory (SLT) helps us understand neural networks by analyzing their loss landscapes. This talk explores model interpretation and phase transitions during training. Presented at Imperial College London.