Lottery Tickets vs. DevInterp
Is the lottery ticket hypothesis compatible with DevInterp? Or do they contradict?
The Lottery Ticket Hypothesis (LTH) suggests that within large neural networks there exist smaller sub-networks ("winning tickets") that, when trained in isolation from their original initialization, can match the performance of the full network. The strong form of the hypothesis goes further: winning tickets can be found just by pruning the larger network, with no further training.
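The procedure behind LTH experiments, magnitude pruning with weight rewinding, can be sketched in a few lines. This is a minimal NumPy sketch of one pruning round, not tied to any particular codebase; the function names are ours:

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask that zeroes out the `sparsity` fraction of
    smallest-magnitude weights and keeps the rest."""
    k = int(round(sparsity * weights.size))
    order = np.argsort(np.abs(weights), axis=None)  # ascending magnitude
    mask = np.ones(weights.size)
    mask[order[:k]] = 0.0  # drop the k smallest-magnitude weights
    return mask.reshape(weights.shape)

def rewind_to_ticket(init_weights, trained_weights, sparsity):
    """One pruning round: the mask is chosen from the *trained*
    weights, but surviving weights are rewound to their *initial*
    values. Retraining this rewound sub-network is the ticket test."""
    mask = magnitude_prune_mask(trained_weights, sparsity)
    return mask * init_weights, mask
```

In the iterative version this round is repeated (train, prune a further fraction, rewind, retrain) until the target sparsity is reached. The LTH prediction is that the rewound sub-network trains to full-network accuracy; the strong form predicts that a good mask already exists without any retraining.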
On its face, this appears to contradict DevInterp's claim that structure is constructed progressively over the course of training, and that pruning alone is not enough. Are these two claims really at odds? On the other hand, the idea of a simpler, smaller model contained within a larger model is native to SLT, and it is compatible with one interpretation of the learning coefficient as an "effective dimensionality" (at least for minimally singular models).
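For context on that interpretation (standard SLT background, not specific to this project): Watanabe's asymptotic expansion of the Bayesian free energy is

$$ F_n \approx n L_n(w^*) + \lambda \log n, $$

where $\lambda$ is the learning coefficient. For a regular model, $\lambda = d/2$ with $d$ the number of parameters, so a singular model with $\lambda < d/2$ behaves asymptotically as if it had only $2\lambda$ effective parameters. That is one precise sense in which a "smaller model" can sit inside a larger one.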
Clarifying the relationship between these two ideas, both theoretically and in terms of their empirical predictions, would be a valuable contribution to the literature. It could be the case that the Lottery Ticket Hypothesis is straightforward to refute in more realistic models or settings with clearer algorithmic structure.
What would disproving the lottery ticket hypothesis look like? Here are some examples of the kind of evidence that would suggest the lottery ticket hypothesis is false:
- Run the lottery ticket procedure on language models & show that you can’t find a winning ticket. So far, we are only aware of the LTH being tested on image classification tasks. Why haven’t we heard about it in the case of language models? Perhaps because it doesn’t replicate.
- Train multiple models from the same weight initialization (varying, e.g., data order or augmentation) and show that a given part of the model can evolve into different roles across runs. This would be evidence that the structure of the model is not fixed from the start.
- Show that models can acquire structure & then forget it later. The existence of transient intermediate structures is strong counterevidence against structure being latent at initialization.
Where to begin:
If you have decided to start working on this, please let us know in the Discord. We'll update this listing so that other people who are interested in this project can find you.