I propose that we adopt the term "Large Self-Supervised Models (LSSMs)" as a replacement for "Foundation Models" and "LLMs". "LLM" doesn't capture non-linguistic data, and "Foundation Models" is too grandiose. Thoughts? @percyliang

@tdietterich The beauty of language is that you can have multiple terms that highlight different aspects of the same object. You don't have to choose. I use "LLM" when the linguistic aspect matters, "self-supervised" for their construction, and "foundation model" for their function. No single term can replace the others.

@percyliang Yes, but as you know, "Foundation" is too close to "Foundational", and many of us find that troubling. That is why I'm proposing a more neutral term. As for their use, maybe we could just call them "Upstream models".

@tdietterich @percyliang In 2015 no one said "artificial intelligence"; it was all ML, big data, DL. Maybe "AI" seemed too grandiose then; now no one cares. It's just the current trend.

@tdietterich @percyliang Models via Unsupervised Learning that Excel (MULE) 🤷‍♂️

@tdietterich @percyliang Their function is more "platform" than "foundation". "Foundation" suggests satisfactory intellectual/theoretical underpinnings; "platform" captures the economic fact that most practitioners will build on them rather than start from scratch.

@tdietterich @percyliang Totally agree that "foundation" is too close to "foundational", and that the meaning of "foundation models" is too broad and too vague.

@seth_stafford @percyliang I like "platform"; another possibility is "substrate".

@zehavoc @percyliang This is another interesting proposal. An issue might be that in some systems, the core is actually a large database or corpus, and the network is a summary and generalization of that corpus.