I propose that we adopt the term "Large Self-Supervised Models (LSSMs)" as a replacement for "Foundation Models" and "LLMs". "LLM" doesn't capture non-linguistic data, and "Foundation Model" is too grandiose. Thoughts? @percyliang
@tdietterich @percyliang To me, 'foundation' is really about use, while LSSM is about technical attributes. Foundation models are centralized, and clients adapt them to solve specific tasks through prompting, fine-tuning, or whatever else. Such models could be LSSMs, but they could also be built in other ways.
@joshua_saxe @percyliang Yes. But my guess is that self-supervision is the only way to scale to these immense datasets, so I think it is a safe term going forward.
@tdietterich @percyliang I see your point, but I could also imagine a foundation vision model trained on pixel-annotated images from 3D simulators, a game-playing foundation model trained via self-play, or new models trained in some regime that hasn't been invented yet.