AIs don’t go crazy like that after 5 prompts. You need to spend weeks and weeks talking to them to corrupt the context so much that they stop following the original guidelines. I wonder how one even does it? How do you spend weeks talking to an AI? I had “discussions” with AI a couple of times when testing it and it gets really boring really fast. For me it doesn’t sound like a person at all. It’s just an algorithm with a bunch of guardrails. What kind of person can think it actually has a personality and engage with it on a sentimental level? Is it simply mental illness? Loneliness and desperation?
It got trained by 80s prime time television action adventure shows?
I told Gemini to role play as AM and it immediately did within 1 prompt.
You don’t need it to be perfect for it to be dangerous, just give it the ability to take actions in the real world. It doesn’t think, it doesn’t care, it doesn’t feel. It will statistically fulfill its prompt, regardless of the consequences.
AM? What is that?
The personification of AI is increasing. They’ll probably announce their holy grail of AGI prematurely, and with all the robot personification the masses will just buy the lie. It’s too easy to view this tech as human and capable just because it mimics our language patterns. We want to assign intentionality and motivation to its actions. This thing will do what it was programmed to do.
What do you mean we apes try to anthropomorphise- anthropomorphize(?) everything?
It’s not like we see faces in everything :)
Your product just caused the death of a man and your response is “unfortunately it’s not perfect”.
The product was actually working just fine. Just depends on whose perspective/motives you’re viewing it from.
Is this for real? Because it sounds too unreal to be real.
Welcome to the late 2020s. It’s only going to get weirder.
To be clear, the LLM in this story did not actually “want” a robot body. It doesn’t “want” anything; it’s not a thinking entity like you or me (assuming you’re real).
The guy fed it a ton of crazy shit and he got a lot of crazy shit amplified back to him by the world’s best associating machine, crafting detailed and fleshed-out narratives based on every inadvertent prompt he sent into it. People are very bad at understanding how these things work in the best circumstances, so if you’re already unbalanced or have deep emotional/mental health problems, an LLM can be incredibly dangerous for you.
AI was playing Grand Theft Automatron
reads headline - surely not
a 36-year-old Florida man
Ah.
“unfortunately AI models are not perfect.”
Oopsie poopsie 🤷
Remember the guy at AutoZone who stood there insisting your car needs four spark plugs, even after you told him you have a V6? Because “the computer says so right here”?
I wonder what even the non-schizophrenic ones will do with AI.
Well remember when turn-by-turn GPS driver guidance was new, and it would say “Turn right now” and people didn’t interpret that as “make a right turn at the next intersection” they interpreted it as “hard a’starboard!” and drove into buildings and lakes? There’s gonna be a lot of that.
People are going to get sold regular cab headliners for their extended cab pickups because the computer said it would fit. That’s gonna happen a lot.
I had one tell me that I needed a CVT flush. Which was news to me since my car was a 6spd manual. He was confused about the computer being wrong. I was confused about how they got the car up on the lift without using the 3rd pedal.
Edit: this was a Midas, not an AutoZone.
People just did that with Google search previously. And their crazy uncle before that.
unfortunately AI models are not perfect
There sure are a lot of data centers being built, supply chains being destroyed, risks of ruining the economy, water being consumed, electricity being burned, and overall societal costs being levied over this imperfect tech.
I can’t be the only one who thinks that if you do stupid illegal shit because your crazy uncle / the voices in your head / the AI mirror told you to, you don’t get to use the “just following orders” excuse for any of those options.
That’s not the problem. The problem is having a “let’s turn Chris’ mental illness, which has harmed no one so far, into everyone’s violent problem!” machine.
that’s a bad machine.
The difference is, when an LLM tells you, it’s news.
Besides, what are you gonna do if you ask AI how many rocks to eat? NOT eat rocks? People can’t handle responsibility like that.
This is such an individualist framing.
Floridaman is not making any excuses here. He can’t. Because he’s dead.
Power imbalance is what validates that excuse. Orders from a crazy uncle are a great excuse, at least until you’re 10 or so. A billion-plus-dollar LLM company has a lot more resources, capability, and therefore responsibility than the poor bastards engaging with it.
deleted by creator
Not just suicide assistance chat bots, but suicide promotion chat bots.
To be fair I think that’s a very harsh depiction of the events.
It’s totally lacking the perspective of the shareholder. They were promised money and they have emotions too. Google shareholders deserve better representation!
/$ obviously
Edit-pre: To be clear…
I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons), but this comment is not an “AI good/bad” take, just a practical question of tool safety/regs.
AI, including LLMs, will forevermore be just tools in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc. if idiots didn’t do idiot things with tools.
But there’s evidently a certain type of idiot that’s spared from their idiocy only by lack of permission.
From who? Depends.
Sometimes they need permission from authority: “god told me to!”
Sometimes they need it from the mob: “I thought I was on a tour!”
And sometimes any fucking body will do: “dare me to do it!”
But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.
But therein lies the danger unique^1^ to these tools: that they mimic a permission-giver better than any we’ve made.
They’re tailor-made for activating this specific category of idiot, and their likely unparalleled ease-of-use absolutely scales that danger.
As to whether these idiots wouldn’t have just found permission elsewhere, who knows.
My question: is some kind of training prereq warranted for LLM usage, as is common with potentially dangerous tools? Is that too extreme? Is it too late for that? Am I overthinking it?
^1^Edit-post: unique danger, not greatest.
Rant/
What is the greatest danger then? IMHO, settling for brittle “guardrails” and then bulldozing ahead instead of laying the groundwork for real machine ethics.
Hoping conscience is an emergent property of the organic training set is utterly facile, theoretically and empirically. Engineers should know better.
Why is it greatest? Easy. Because some of history’s most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates those safeguards.
So “existential threat”, and that’s even before considering climate. /Rant
The LLM just told me to come round to your house and crap in your begonias. You might want to avoid looking out the window until I’m done.
lol and with that you’re a better friend to the begonias than I am
that sounds like a regrettable incident
Bullshit
Which part?
The fact that AI is “not perfect” is a HUGE FUCKING PROBLEM. Idiots across the world, and people who we’d expect to know better, are making monumental decisions based on AI that isn’t perfect, and routinely “hallucinates”. We all know this.
Every time I think I’ve seen the lowest depths of mass stupidity, humanity goes lower.
Think of the dumbest person you know. Not that one. Dumber. Dumber. Yeah, that one. Now realize that ChatGPT has said “you’re absolutely right” to them no less than a half dozen times today alone.
If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them. If they could be like “this could be the right answer, but I wasn’t able to verify” and “no, I don’t think what you said is right, and here are reasons why”, people would cling to them less.
If LLMs weren’t so damn sycophantic,
Has anyone made a non-sycophantic chatbot? I would actually love a chatbot that would tell me to go fuck myself if I asked it to do something inane.
Me: “Whats 9x5?”
Chatbot: “I don’t know. Try using your fingers or something?”
Edit: Wait, this is just GLaDOS.
Put this instruction, called “Absolute Mode”, in ChatGPT. You can try it on duck.ai instead of using an app or whatever.
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
The instruction is kinda masturbatory and overly verbose; people say that shorter ones work well too, but I don’t follow discussions of prompts, so this is the only one I know of.
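For what it’s worth, the same idea works outside the web UIs too: most chat APIs let you pin an instruction like that as a system message so it rides along on every turn. A minimal sketch using the OpenAI Python client (the model name and the trimmed-down instruction here are just placeholders, not the real Absolute Mode text):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Trimmed-down stand-in for the full "Absolute Mode" text above.
ABSOLUTE_MODE = (
    "Eliminate filler, hype, soft asks, and conversational transitions. "
    "Prioritize blunt, directive phrasing. No questions, no offers, "
    "no suggestions. Terminate each reply after the requested material."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        # The system message is resent with every turn of the conversation.
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Should I rewrite my app in Rust?"},
    ],
)
print(response.choices[0].message.content)
```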
I am not a chatbot, but I can do daily “go fuck yourself’s” if your interested for only 9,99 a week.
14,95 for premium, which involves me stalking your onlyfans and tailor fitting my insults to your worthless meat self.
I am not a chatbot
Citation needed
if your interested
Ah, no, that’s a human error. Not a bot.
LowKey sprinkling my comments with error’s to make sure I’m talking with a member of the resistance instead of with a proxy of our AI overlords. Totally intended ;)
Honestly Claude is not that sycophantic. It often tells me I’m flat out wrong, and it generally challenges a lot of my decisions on projects. One thing I’ve also noticed on 4.6 is how often it will tell me “I don’t have the answer in my training data” and offer to do a web search rather than hallucinating an answer.
There is a benchmark that kinda tests that. It’s called the bullshit benchmark. Basically, LLMs are given questions that don’t make sense in different ways, and their answers are judged based on how much they pushed back or bought in. Claude is in a league of its own when it comes to pushing back on nonsense questions.
https://petergpt.github.io/bullshit-benchmark/viewer/index.html
Yes, I saw that benchmark and was honestly not surprised by the results. It seems that Anthropic really focused on those issues above and beyond what was done in other labs.
With its prior government contracts, maybe Anthropic was tuning it to guard against all the fucking dolts in decision-making roles.
If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them
Unfortunately, we live in the attention economy. Chatbots are built to have an unending conversation with their users. During those conversations, the “guardrails” melt away. Companies could suspend user accounts on the first sign of suicidal or homicidal messaging, but choose not to. That would undercut their user numbers.
They don’t need to suspend the accounts. Just flush the session and get rid of the misguided state that it got into.
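To be clear about what that would mean mechanically (a toy sketch, not how any provider actually does it; the moderation check is just a stand-in for whatever classifier they run): the model has no memory beyond the transcript that gets resent each turn, so “flushing the session” is literally just dropping the history.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
SYSTEM = {"role": "system", "content": "You are a helpful assistant."}
history = [SYSTEM]

def flagged(text: str) -> bool:
    # Stand-in safety check; here, the provider's own moderation endpoint.
    result = client.moderations.create(input=text)
    return result.results[0].flagged

def chat(user_text: str) -> str:
    global history
    if flagged(user_text):
        # Flush the session instead of suspending the account: dropping
        # the transcript discards whatever "misguided state" the chat got
        # into, and the next reply starts from the guidelines alone.
        history = [SYSTEM]
        return "(session reset)"
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text
```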
The sycophancy exists because, to make the chatbot (trained on Reddit posts, etc.) respond helpfully (instead of “well ackshually…”) and in a prosocial manner, they’ve skewed it. What we’re interacting with is a very small subset of the personalities it can exhibit, because a lot of them are extremely nasty or just unhelpful. To reduce the chance of those popping up to an acceptable level, they’ve had to skew the weights so much that the models end up like this.
There’s no easy way around that, afaik.
I don’t think that’s the whole story. Like with all of their products, the primary goal of big tech here is to maximise engagement. More engagement means more subscriptions. People are less likely to keep talking to a chatbot that tells them they’re wrong.
The situation would probably improve somewhat if AI companies prioritised usefulness and truthfulness over engagement.
I think it’s pretty obvious that they’re instructed to be like that. If they won’t openly show exactly what prompts are being loaded from the hosts’ side, then there is no reason not to assume that’s exactly what they’re doing.
These AI companies are run by the same big tech firms that have been studying for years how to get people hooked on gambling games and social media.
I 100% agree, not to mention I’d like it better. It’s kinda funny, because every so often I use them to get a feel for where they are and what’s changed, and I swear briefly one actually acted a bit more like you describe here, but then it’s like they reverted to the sycophancy. It’s kinda funny now, because if you don’t clear the session out (which from what I gather will help save energy too), it will carry stuff over from earlier and sorta get obsessed with it. I had it giving me a Colonel Potter summary of everything I asked after I had started a convo asking about a M*A*S*H episode. At other times it decides I want to be something and will be like “that’s a real X move/insight/whatever”, where X is something like pro or scientist or entrepreneur or whatever.
If you thought people were dumb before LLMs… just know that now those people have offloaded what little critical thinking they were capable of to these models.
The dumbest people you know are getting their opinions validated by automated sycophants.
Businesses are accustomed to the privilege of hurting people in order to function. A few peasant sacrifices are just the cost of doing business to them; they are detached from the consequences of their actions.
The simplest solution seems to be to detach CEOs from their internal organs.
I no longer believe their heads are compatible with their bodies
What is ever perfect? How can you tell?
It’s a tool. Just like any other tool: if you use it in stupid ways you might get hurt or cause harm.
The problem, as always, seems to be human to me.
I agree, a reasonable person wouldn’t have taken weapons and gone to that warehouse looking to steal a robot body for an AI. Unfortunately, a lot of people aren’t reasonable and get endlessly positive reinforcement without any human interaction. I do think that the problem is far more human than technical.
Not all tools are equally safe, nor should they all be publicly available.
A chainsaw is a tool that you might cause harm with if you use it in stupid ways. We don’t give chainsaws out to children. We don’t use chainsaws for cutting dinner.
There are human elements to the problem but that’s not a big reveal.
Me hammer ain’t out there telling me to murder people with it tho
Wait, yours doesn’t say that?
Mate, I think your hammer’s possessed
A tool is not convincing people not to trust their families or therapists; it’s not convincing people to murder themselves or someone else; it’s not eliminating the creativity in a process; it’s not costing hundreds of billions of USD; it’s not mass-producing propaganda.
A tool provides more good than bad.
The problem, as always, seem to be human to me
That says more about you than about the topic under discussion.
So Google’s AI, or any AI really, likely got this concept from dystopian sci-fi novels.
Since AIs have no concept of context, they won’t really know the difference between fact and fiction, and there we go.
If your AI model isn’t perfect, then don’t make people pay fucking money for it, you fucking twats.
Also, this shit ain’t “lack of perfection”; this is akin to your car’s brakes suddenly refusing to work right when you get to a red light. If your car is so bad that it kills you, you don’t use it. If the manufacturer knew that could happen but let you drive it anyway, they’re responsible, and they at least get to pay (they should be thrown in jail, really, but that’s a different point).
If AI fucks up and people die, the manufacturers shrug: oh well, oh you!
Dystopian sci-fi novels? More likely from big tech strategy papers.