“AI” hallucinations are not a problem that can be fixed in LLMs. They are an inherent aspect of the process and an inevitable result of the fact that LLMs are mostly probabilistic engines, with none of the supervisory or introspective capability that actual sentient beings possess and use to fact-check their output. So there. :p
It’s funny seeing the list and knowing Connecticut is only there because it comes alphabetically after Colorado (in fact, all four states listed appear in alphabetical order). They probably scraped so many lists of states that alphabetical order is the statistically most probable continuation in their corpus whenever a state name is listed.
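To make that intuition concrete, here’s a toy sketch in Python (pure next-word counting over a made-up corpus, nothing to do with any real model’s internals): if most scraped documents list the states alphabetically, the most probable word after “Colorado” comes out as “Connecticut”.

```python
from collections import Counter, defaultdict

# A handful of states, in alphabetical order.
states = ["Alabama", "Alaska", "Arizona", "Arkansas", "California",
          "Colorado", "Connecticut", "Delaware", "Florida", "Georgia"]

# Pretend we scraped 100 documents; most list the states alphabetically.
corpus = [states] * 90 + [list(reversed(states))] * 10

# Count which state follows which (a plain bigram model).
bigrams = defaultdict(Counter)
for doc in corpus:
    for prev, nxt in zip(doc, doc[1:]):
        bigrams[prev][nxt] += 1

# "What is the most probable state after Colorado?"
following = bigrams["Colorado"]
total = sum(following.values())
for state, count in following.most_common():
    print(f"P({state} | Colorado) = {count / total:.2f}")
# -> P(Connecticut | Colorado) = 0.90
#    P(California | Colorado) = 0.10
```

Very roughly, that’s the same pressure at web scale: “Connecticut” pops out after “Colorado” whether or not it belongs on the list.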
So perhaps we should put the question like this:
“What is the probability of a D suddenly appearing in Connecticut?”
A wild ‘D’ suddenly appears! (that’s about all I know about Pokemon…)