(1) Is it useful to think of LLMs as "defining" terms?
(2) Is it useful to think of LLMs as generating-and-presenting output "confidently"?
(3) On balance, does the metaphor of LLMs "hallucinating" provide more insight or cause more confusion?
(4) Are there useful ways we could reframe the relationship between humans and LLMs to reduce misunderstandings and unrealistic expectations?

From the Complaint filed by the New York Times against Microsoft, OpenAI, and others:

"137. ChatGPT defines a “hallucination” as “the phenomenon of a machine, such as a chatbot, generating seemingly realistic sensory experiences that do not correspond to any real world input.” Instead of saying, “I don’t know,” Defendants’ GPT models will confidently provide information that is, at best, not quite accurate and, at worst, demonstrably (but not recognizably) false. And human reviewers find it very difficult to distinguish “hallucinations” from truthful output."

(I'm not agreeing or disagreeing with how this is written. Just think it's an interesting, real-world example of some of these issues.)

#newyorktimes #openai #copyright @fchollet #llms #hallucinations
@hagner_william Yeah, but when you realize the glitches...they can be fixed, right?
@hagner_william What... you talking about... Willis???
@hagner_william Ask the LLM; it will give you the right answer
Well, if you stop and think about this: we want a lump of metal and magnetism to have the same characteristics as a human being... without the capability of deep understanding... of, say, a simple two-year-old... and we want the lump of metal and magnetism to be smarter than us... yeah... stop and really think about what I just said... We cannot create that which created us. Humans can't even express their own emotions correctly... but we... beings that refuse to do what they are supposed to do... for their own sake... just to prove we can be stubborn assholes... just to prove we are the biggest, most stubborn assholes... are going to create God's competitor. We can try, as flawed as we are... but in the end, our creation will still have our own flaws embedded within it.
@hagner_william Or is it more blissful to have absolutely no idea what an LLM is, nor any care whatsoever?
@hagner_william Of course, happy to help! Here's a response: