Reduce AI inference load on the endpoint device by spending more upfront on training and feeding the model more data. In this paper, Google gets better model performance at roughly half the inference compute by spending ~50% more training compute on ~4x the data. ai.googleblog.com/2021/12/more-e…
Relevant for a company like Tesla: better to invest in Dojo and further harness its abundant data than to upgrade the chips in every car on the road.
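A back-of-envelope sketch of why this trade pays off at scale: with a large deployed fleet, total compute is dominated by inference, so a one-time increase in training cost that halves per-inference cost wins quickly. All numbers below are illustrative, not from the paper; `lifetime_compute` is a hypothetical helper.

```python
def lifetime_compute(train_cost: float, per_inference: float, n_inferences: float) -> float:
    """Total compute over a model's deployed life: one-time training plus all inference."""
    return train_cost + per_inference * n_inferences

# Baseline model: training cost 1.0 unit, inference cost 1.0 unit per query.
baseline = lifetime_compute(train_cost=1.0, per_inference=1.0, n_inferences=10)

# The paper's headline trade: ~1.5x training compute buys ~0.5x inference compute.
efficient = lifetime_compute(train_cost=1.5, per_inference=0.5, n_inferences=10)

print(baseline, efficient)  # 11.0 vs 6.5 after only 10 inferences
```

With these numbers the extra 0.5 units of training compute are repaid after a single inference (0.5 saved per query), and the gap widens linearly forever after; for a fleet running millions of inferences per day, the upfront investment is negligible.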