I've been told timm has a lot of hidden features. Yes, the docs need improving, that's a WIP! Curious about one of those features I've been using a lot lately for CLIP ViT fine-tuning? Every model in timm, when used with the optimizer factory, supports layer-wise LR decay.
Also known as discriminative LR decay, this applies a decaying LR to the model params as you move away from the head. It's very useful when fine-tuning from a large pretraining dataset (or going semi/self-supervised pretrain -> supervised) without blowing away properties learned during pretraining.
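The decay math itself is simple. A minimal sketch (not timm's actual code): with decay factor d and the head as the last group, each layer i out of N gets lr_i = base_lr * d**(N - i), so the stem trains slowest and the head at full LR.

```python
# Sketch of layer-wise (discriminative) LR decay scaling.
# Hypothetical helper, not from timm; illustrates the decay schedule only.
def layer_lr_scales(num_layers: int, decay: float) -> list:
    """LR scale for layer indices 0..num_layers, where the last index
    is the head. scale_i = decay ** (num_layers - i), so the head
    gets scale 1.0 and earlier layers get geometrically smaller LRs."""
    return [decay ** (num_layers - i) for i in range(num_layers + 1)]

scales = layer_lr_scales(num_layers=4, decay=0.75)
# scales[-1] is 1.0 (head at full LR); scales[0] is 0.75**4 (stem, slowest)
```

Multiply each scale by your base LR when building optimizer param groups and you have the whole idea.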
I didn't just try to map parameter children / modules into a list (that isn't consistent across models). I sat down and wrote regexes (ugh) for every single model to appropriately map stem / block / stage / heads to meaningful 'layers', either individual blocks or 'coarse' stages.
The regexes are returned by 'group_matcher', a method on every model. It's used internally by the optimizer factory when you pass 'layer_decay': new param groups are created and the timm LR scheduler applies the decay factor... it can also be called manually to grab parameter or group mappings.
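To make the regex -> layer mapping concrete, here's a toy standalone sketch in the spirit of what group_matcher enables for a ViT-like model. The patterns and helper below are hypothetical illustrations, not timm's actual regexes or API:

```python
import re

# Hypothetical matcher: map parameter names to layer ids.
# Stem -> layer 0, block k -> layer k+1, head -> last group.
MATCHER = [
    (re.compile(r'^patch_embed'), 0),        # stem
    (re.compile(r'^blocks\.(\d+)'), None),   # use captured block index
    (re.compile(r'^head'), 'last'),          # classifier head
]

def layer_id(name: str, num_blocks: int) -> int:
    """Return the layer id for a parameter name (toy example)."""
    for pat, group in MATCHER:
        m = pat.match(name)
        if not m:
            continue
        if group == 'last':
            return num_blocks + 1
        if group is None:
            return int(m.group(1)) + 1
        return group
    return num_blocks + 1  # unmatched params train at full LR

names = ['patch_embed.proj.weight', 'blocks.0.attn.qkv.weight',
         'blocks.11.mlp.fc1.weight', 'head.weight']
ids = {n: layer_id(n, num_blocks=12) for n in names}
```

In practice you'd just pass something like `layer_decay=0.75` to timm's optimizer factory (e.g. `create_optimizer_v2`) and let it build the param groups for you; the sketch only shows the mapping idea.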
@wightmanr Ha, that's pretty much the same way we implemented this in big_vision! Though we haven't really found layerwise lr decay to be very useful yet even though it seems popular recently.