@tdietterich The beauty of language is that you can have multiple terms that highlight different aspects of the same object. You don't have to choose. I use "LLM" to talk about the models themselves, "self-supervised" for their construction, and "foundation model" for their function. No one term can replace the others.
@percyliang Yes, but as you know, "foundation" is too close to "foundational", and many of us find that troubling. That is why I'm proposing a more neutral term. For the "use" aspect, maybe we could just call them "upstream models".
@giffmana @francoisfleuret @ggdupont @percyliang My impression was that self-supervised is competitive with supervised in computer vision. Is this wrong? In particular, doesn't self-supervised permit training on much more data?
@tdietterich @francoisfleuret @ggdupont @percyliang Yes, it currently still falls short: on ImageNet-1k there are now competitive methods, but scaling *the same* data 10x, to ImageNet-21k, they still fall far behind supervised. The stated goal, training on infinite web data, has been superseded by (supervised!) image-text training, which works much better.
@giffmana @tdietterich @francoisfleuret @percyliang Given the same data AND the right labels, supervised learning does get better results. But does it reach the same level of generalisation/multitasking? (Again, for text, self-supervised allows more flexibility and scales higher, but I'm curious whether the same happens for images.)
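To make the three training signals debated in this thread concrete, here is a minimal sketch, assuming PyTorch. The encoders, classifier, and data are placeholders invented for illustration, not anyone's actual training setup: label-supervised cross-entropy (the ImageNet-1k/21k setting), SimCLR-style self-supervised contrastive loss (no labels, two augmented views), and CLIP-style image-text contrastive loss (captions as weak supervision, the "(supervised!) image-text" signal @giffmana mentions).

```python
import torch
import torch.nn.functional as F

def supervised_loss(image_features, labels, classifier):
    # Standard supervised learning: cross-entropy against human-annotated labels.
    return F.cross_entropy(classifier(image_features), labels)

def self_supervised_loss(features_view1, features_view2, temperature=0.1):
    # SimCLR-style contrastive self-supervision: two augmented views of the
    # same image should match each other; other images in the batch are negatives.
    z1 = F.normalize(features_view1, dim=-1)
    z2 = F.normalize(features_view2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

def image_text_loss(image_features, text_features, temperature=0.07):
    # CLIP-style image-text contrastive training: each image should match its
    # own caption, in both the image-to-text and text-to-image directions.
    zi = F.normalize(image_features, dim=-1)
    zt = F.normalize(text_features, dim=-1)
    logits = zi @ zt.t() / temperature
    targets = torch.arange(zi.size(0), device=zi.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    # Random features just to show the three losses run; real pipelines would
    # produce these from image/text encoders.
    b, d, n_classes = 8, 128, 10
    img1, img2, txt = torch.randn(b, d), torch.randn(b, d), torch.randn(b, d)
    labels = torch.randint(0, n_classes, (b,))
    classifier = torch.nn.Linear(d, n_classes)
    print(supervised_loss(img1, labels, classifier).item())
    print(self_supervised_loss(img1, img2).item())
    print(image_text_loss(img1, txt).item())
```

The key design difference the thread turns on: the first loss needs curated labels, the second needs no annotation at all, and the third uses freely available captions, which is why it scales to web data while remaining (weakly) supervised.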