Have you spotted any hallucinations so far? I’m curious about what kind of hallucinations can be created when a LLM summarizes a text.
After checking the source code, well… it just summarizes the posts. Doesn’t help much with the human error problem.
But as mentioned by OP, it’s in an early stage of development, and they plan to add features to “find the missing perspectives on an issue” and analyze political alignment information. So maybe in the future it could become a useful tool.
Correct, but humans also exaggerate and lie a lot in the news, so maybe this AI could look through different sources and identify inaccuracies.
I haven’t looked at the source code tho…
I tried, but it doesn’t work with usernames that end with special characters like _
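A minimal sketch of how a bug like that can arise, assuming the tool validates handles with a regex (this pattern is hypothetical, not the tool’s actual code): a pattern that requires the name part to end in a letter or digit will silently reject handles ending in _.

```python
import re

# Hypothetical validation pattern: the final character of the name part
# must be alphanumeric, so a handle like "some_user_" is rejected.
PATTERN = re.compile(r"^@?([A-Za-z0-9_]*[A-Za-z0-9])@([A-Za-z0-9.-]+)$")

def parse_handle(handle: str):
    """Return (name, instance) for a fediverse handle, or None if rejected."""
    m = PATTERN.match(handle)
    return m.groups() if m else None

print(parse_handle("@Staden@pawb.fun"))      # accepted
print(parse_handle("@some_user_@pawb.fun"))  # rejected: trailing "_"
```

Allowing the full `\w` character class at the end of the name part would accept trailing underscores, which Mastodon permits in usernames.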
Ok that’s funny XD
At least it won’t be harmful in any way.