I propose that we adopt the term "Large Self-Supervised Models (LSSMs)" as a replacement for "Foundation Models" and "LLMs". "LLMs" doesn't capture non-linguistic data, and "Foundation Models" is too grandiose. Thoughts? @percyliang
@tdietterich @percyliang There was a previous discussion in the comment thread below. I personally think "Large Pre-Trained Models" (LPTMs) might be the most neutral label (given the significance of pre-training & the controversy around the definition of self-supervised learning) x.com/raphaelmillier…
@raphaelmilliere @tdietterich @percyliang I like this too
@raphaelmilliere @tdietterich @percyliang I like this. But the model isn’t always large (e.g. CLIP’s vision encoder). So maybe we should say “large-scale” instead, to indicate that the training is what’s large?