Review of Complexity Measures
A comprehensive review and comparison of different notions of effective dimensionality in machine learning models.
Learning theorists have studied many different notions of effective dimensionality. Of these, the learning coefficient is the most theoretically well-founded. However, it is not clear how the learning coefficient relates to other notions of effective dimensionality, such as the Hessian rank or the dimensionality of the tangent space.
This project aims to provide a comprehensive review of various notions of effective dimensionality in machine learning models. Key questions to address include:
- What are the main notions of effective dimensionality that have been studied in the literature?
- How do these different measures relate to one another theoretically?
- How do they compare empirically when applied to real-world models?
- What are the strengths and limitations of each measure?
- How does the learning coefficient from Singular Learning Theory compare to these other measures?
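As background for the last question, the learning coefficient can be sketched via its standard characterization in Watanabe's Singular Learning Theory (the notation below is the usual convention, not something fixed by this project description). Writing $L_n$ for the empirical loss, $\varphi$ for the prior, and $w_0$ for a most singular optimal parameter, the free energy at sample size $n$ is

```latex
% Free energy (negative log marginal likelihood) at sample size n:
F_n = -\log \int \exp\bigl(-n L_n(w)\bigr)\, \varphi(w)\, dw
% Watanabe's asymptotic expansion, which defines the learning
% coefficient \lambda and its multiplicity m:
F_n = n L_n(w_0) + \lambda \log n - (m - 1) \log\log n + O_p(1)
```

For a regular model with $d$ parameters, $\lambda = d/2$, so $\lambda$ plays the role of (half) an effective dimensionality; for singular models $\lambda$ can be strictly smaller, which is what makes comparisons with Hessian rank and related measures interesting.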
The review should cover both theoretical aspects and empirical comparisons. Potential measures to consider include:
- Learning coefficient (from SLT)
- Hessian rank
- Tangent space dimensionality
- VC dimension
- Rademacher complexity
- Intrinsic dimension
- Effective degrees of freedom
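To make the empirical side concrete, here is a minimal sketch of one of these measures, the Hessian effective rank, computed for a deliberately singular toy model. The model, function names, and tolerance are illustrative choices, not part of the project description:

```python
import numpy as np

# Toy singular model: f(x) = a * b * x fit to the zero function.
# The loss L(a, b) = mean((a*b*x)^2) is minimized on the whole set
# {a*b = 0}, so the Hessian at a minimum like (1, 0) is rank-deficient:
# a 2-parameter model with effective (Hessian) rank 1.

def loss(w, xs):
    a, b = w
    return np.mean((a * b * xs) ** 2)

def numerical_hessian(f, w, eps=1e-4):
    """Central-difference Hessian of f at w (illustrative helper)."""
    d = len(w)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            w_pp = w.copy(); w_pp[i] += eps; w_pp[j] += eps
            w_pm = w.copy(); w_pm[i] += eps; w_pm[j] -= eps
            w_mp = w.copy(); w_mp[i] -= eps; w_mp[j] += eps
            w_mm = w.copy(); w_mm[i] -= eps; w_mm[j] -= eps
            H[i, j] = (f(w_pp) - f(w_pm) - f(w_mp) + f(w_mm)) / (4 * eps**2)
    return H

xs = np.linspace(-1.0, 1.0, 100)
w_star = np.array([1.0, 0.0])  # a degenerate minimum of the loss
H = numerical_hessian(lambda w: loss(w, xs), w_star)
eigs = np.linalg.eigvalsh(H)
# Count eigenvalues above a small relative threshold (a common, if
# crude, way to turn a spectrum into an "effective rank").
rank = int(np.sum(eigs > 1e-6 * max(eigs.max(), 1.0)))
print("Hessian eigenvalues:", eigs)
print("effective rank:", rank)
```

The interesting comparison for the review is exactly this kind of gap: the parameter count is 2, the Hessian rank at the minimum is 1, and the learning coefficient of such a model is smaller still, so the three measures genuinely come apart even in toy settings.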
This review would provide valuable context for the developmental interpretability agenda and help situate the learning coefficient within the broader landscape of model complexity measures.
Where to begin:
If you have decided to start working on this, please let us know in the Discord. We'll update this listing so that other people who are interested in this project can find you.