Altman’s remarks in his tweet drew an overwhelmingly negative reaction.
“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”
Others called him a “f***ing psychopath” and “scum.”
“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.
They used the word future for a reason. The technology is still being developed, so basing predictions about the future on its current state is silly.
Your response is really unimpressive. My point is that LLM training, as it now stands, doesn’t seem like it can possibly adapt to an internet that isn’t full of free information ripe for the taking. If people come to rely on LLMs, how will they get the information to keep up with further advancements in, well, anything?
The amount of money dumped into AI can't be recouped. It's already a massive bubble.
How is that relevant? Even if the bubble pops LLMs aren’t going away.
Because all the tech bros saying AI is going to change the world are wrong. Just like they were wrong about blockchain currencies, just like they were wrong about owning images on computers. They'll be wrong about this too.
And no one is arguing otherwise. But work on genAI isn’t going to stop just because the bubble pops, and there’s no real reason to think that its current capabilities can’t be improved on.
It’s a predictive text model. It’s not artificial intelligence.
I don’t see how that’s relevant to the discussion.