I am pretty surprised by the lack of urgency among many (but not all) academics in addressing what seems to be one of the biggest developments in modern times: we accidentally built a machine that produces something that looks like language & thought. Why? What does it teach us?
I would add that by every measure of creativity we have (all of which are flawed, which didn't matter when we were only measuring humans), LLMs produce creative results. And frontier LLMs operate at a high level on novel managerial, medical, and legal tasks. It's all kind of weird.
@emollick US colleges and universities will be disrupted in a big way over the coming decade
@emollick @tylercowen it only seems surprising because you don't think of yourself as forming Markov chains when you communicate, but really, you are. It's easier to learn to catch a ball than to write an equation to predict how a ball should be caught
@emollick True. I think in part because producing language is not the same as communicating. No LLM is capable of a speech act, of arriving at mutual understanding w another subject, of coordinating social action.... LLMs teach us that language production is not equiv to talking.
@emollick Below, an attempt at a plausibility case for the surprising performance of #LLMs, plus an exploratory chat. 1. Interpreting LLMs Through Extended Information Theory 2. Comparing AI and Human Cognition; Holography as an AI Analogy 3. Exploring “Grokking”: Sudden Leaps in AI Learning
@emollick My impression is that the research community (and the corporate world) is working hard to understand exactly what these "machines" can actually do. Their limitations are coming into focus as well as methods for mitigating those limitations. 1/
@emollick It’s probably because daddy Chomsky dismissed LLMs early in the hype cycle.
@emollick LLMs produce text. It’s quite common for humans to express thought through text, so we easily mistake the text produced by LLMs for text containing thought. But it does not.
@emollick but AI endlessly churns out long, bloviating pieces that sound insightful and use sophisticated vocabulary, yet actually have little original to say and provide no meaningful value to the reader or society at large. How could such a technology possibly be relevant to academics?
@emollick In WW1 it took 2 yrs and 77% of all casualties being head wounds for Germany to reappraise its strategy of issuing all of its soldiers leather helmets covered in gold with a conspicuous spike on the top, and even then the initial response was to make the spike detachable.