Join Our Mission
We're building the theoretical foundations and empirical tools for AI safety through cutting-edge research in singular learning theory and developmental interpretability. Explore opportunities to contribute to this mission.
We're hiring research scientists, a research engineer, and potentially a research lead to work on applications of singular learning theory to alignment. Apply by April 30, 2026.
We're actively hiring! See our current opportunities above, or email us at careers@timaeus.co if you have questions about working at Timaeus.
Even if current roles aren't the right fit, we're always interested in connecting with talented individuals who share our passion for AI safety research.
Looking for research collaborations, or interested in doing a PhD on SLT? Join our Discord community to connect with researchers, find collaborators, and get advice about research opportunities. Introduce yourself there to get guidance on where to look for opportunities in SLT and developmental interpretability.