You’re not productive if you don’t use a lot of AI, says guy who makes all of his money selling AI hardware
The beauty is when those companies run out of human training data and start training on AI slop, just to generate even more AI slop.
This is probably already happening though.
It is, intentionally. Some of the training data is synthetic
It’s moments like these that make me think about the state of the world and my part in it. I may just be a random loser on the Internet, but I do know a lot more shit than some of the biggest multi-quad-spillion-dollar CEOs, apparently.
For example, it’s an old fact that tech CEOs know jack shit about measuring productivity, even when they’re obsessed with it. Yeah. Here’s one more example.
I think that’s far too low tbh
I thought AI would save us money? If it adds 50% onto the cost of a salary and by all studies does not improve productivity output, then it’s not great.
Oh, he left out their plan to fire half of them and drive wages down by a third.
This may sound weird, but I think anecdotal evidence might be more informative than the productivity stats for now, until the industry settles on a new equilibrium.
Some engineers are more productive with AI, and some (maybe even most, still) are less productive. People are still putting in the effort to learn how to use it more effectively/productively (there is a learning curve), and some of the less productive are getting laid off.
It sucks, but that’s just how it is now.
Also, AI tooling is still evolving very rapidly. A lot of information and stats are only valid for maybe a few months.
The AI bubble went from a trillion dollars’ worth of circlejerk investments to a self-sucking trillion-dollar circlejerk ponzi scheme. Jensen Huang is right: the moment an Nvidia employee stops sucking their own dick or stops jerking off their coworkers and bosses simultaneously, the whole economy collapses.
It’s amazing they can just make these claims without literally any evidence and no major “media” organization asks for it. They are just propaganda for these companies.
Luigi knows the solution for this.
Jesus that’s a lot of tokens.
Even if I was trying to do everything in my power to burn tokens for real work, I’d be hard fucking pressed to burn more than a couple thousand a month, and that’s just being wasteful.
you just need to use more context injection and more agents working in parallel on git worktrees or whatever, and then get really depressed because you own an absolute fuckload of code now and you are less familiar with it
I remember when Nvidia’s PR team used to push this humble rags-to-riches story about Jensen back in the day. I guess even they would have a tough time doing that now that he’s gone mask off.
Let me translate this for you, “My bonus depends on you showing our massive investment wasn’t a waste so I’m holding your jobs hostage until you make up busy work to pretend it was worthwhile.”
Jensen Huang should suck my dick.
Only if he has the tokens to do so.
Wow, that doesn’t sound like a pyramid scheme at all. At all.
Jensen compared today’s AI tools to machinery that was invented during the industrial revolution
They really want this to be an apt comparison and it’s really not
Edit
It seems that the Nvidia CEO isn’t the only one investing in AI tokens for his employees to freely use.
They also really want to talk about tokens like they’re some kind of currency
It’s worse, they want to change the global economy to corporations paying corporations…
The total elimination of actual consumers, because none of us will be able to afford to consume enough.
AI companies need people to pay for AI to keep buying Nvidia chips. So Nvidia is making their employees pay for the AI so AI companies keep buying Nvidia chips.
It’s not a sustainable system, it’s just a money churn whose only purpose is to consolidate wealth.
I’m hungry. When do we eat?
That’s the neat part!
So ponzi scheme?
I didn’t read it that way. I think he’s saying “bosses: if you’re paying a $100k salary to a dev and not also paying $50k for tokens, your dev isn’t working hard enough”. Which is better, but only just.
To refine that even further, he doesn’t appear to imply that the dev isn’t WORKING hard enough, only that they’re not being OPTIMALLY PRODUCTIVE.
What he’s really trying to do is float and normalize the concept of baking tokens into HR math in terms of a “golden ratio”… which happens to be 2:1.
So, when a company goes all in on AI and cuts their workforce in half, they’ll need to add in 50% for tokens. 50% of the original staff, at a new 150% cost, puts the company at 75% of pre-AI workforce cost. These are the “guidelines” they’re trying to normalize.
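That arithmetic can be sketched in a few lines of Python. Note that the 2:1 salary-to-token ratio and the halved headcount are the scenario from this comment, not figures Nvidia has published:

```python
# Hypothetical "golden ratio" HR math from the comment above:
# cut headcount in half, then add 50% of each remaining salary for tokens.
def post_ai_payroll(headcount: int, salary: float,
                    token_ratio: float = 0.5, cut: float = 0.5) -> float:
    remaining = headcount * (1 - cut)      # half the staff is laid off
    per_head = salary * (1 + token_ratio)  # 150% cost per remaining dev
    return remaining * per_head

before = 100 * 100_000                     # 100 devs at $100k
after = post_ai_payroll(100, 100_000)
print(after / before)                      # 0.75 -> 75% of pre-AI payroll
```

Same bottom line as above: the company pockets the other 25%, and Nvidia gets paid for the token half.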
It’s funny how his calculation factors in the completely immaterial variable, the price of tokens, instead of the material one, the number of tokens, or better yet the productivity gain per token.
Huh, yeah. 50% cost in tokens could be 1 very expensive token or 1 million tokens. Who knows?
I think he’s saying slaves should owe their soul to the company store.
Don’t give this sack of shite the benefit of doubt.
he needs new jackets.
A straight white one
He didn’t explicitly say it, but the language he uses gels with that of the ones who have.
The tokens as currency thing feels exactly like gaming microtransaction bullshit to me. Obscure the true cost of each purchase by selling in-game currency. Dark patterns lead to higher spending.
It’s proof of work cryptocurrency. But at least this time the compute is aiming at producing something useful, not the lowest hash value.
With my trusty LLM, I follow the steps recommending that I try reaching inside the die press to look for any jammed parts that could have caused the machine to suddenly stop working. My coworker, who my boss sent to assist me based on instructions from her LLM, asks his LLM how to help me. My coworker’s LLM recommends that he check if the emergency stop button has been pulled…
Just a few days ago they were indeed talking about giving tokens as bonus.
Is company chit still illegal if it’s a bonus?
That’s one of the most insanely stupid things I’ve ever heard. Tokens are a tool used to do your job, a business expense, not fucking compensation.
Are the kWh that you use to light and air condition an office part of your compensation? How about toner and paper in the office printer?
If those tokens are a bonus, they’re yours, right? So you can burn half your annual salary worth of tokens translating Microsoft Encarta 95 into Klingon, and they will foot the bill?
They also really want to talk about tokens like they’re some kind of currency
I think it is better than pizza Fridays for them, because they probably can’t get the pizza for free.
Sure, I don’t get any retirement or healthcare benefits, but look at all the company scrip I get for the sloppy autocomplete that is stealing all my groundwater no matter how many times someone says “closed loop cooling”.
CEO suggests raising employee costs by fifty percent and is immediately fired.
Sorry, we don’t live in a sane world anymore.
Wouldn’t an AI researcher naturally find generative AI disadvantageous because they are attempting to develop novel tools which could not exist in the training set in the first place?
Even novel solutions are usually built out of smaller common building blocks. E.g. many novel solutions surely use a database. You can make the LLM help you set up and use the database, that your novel solution uses.
“No we don’t need databases anymore, only blockchains.” —Nvidia CEO a few years ago
And AI can help you migrate your database solutions to blockchain, utilizing 3000W worth of Nvidia co-processing power to validate your blockchain database that used to work on a 0.3W ARM processor.
The database was an arbitrary example. A more relevant example would be TensorFlow layers in a neural network. As I understand it, you can in some cases get a novel solution to a problem just by choosing a smart enough combination, with the right data.
ChatGPT absolutely knows how to help with the grunt work of setting up the TensorFlow configuration, following your directions.
you can in some cases get a novel solution to a problem just by choosing a smart enough combination, with the right data.
Smart, lucky, who can tell the difference?
If used by an expert developer, the combinations are not just random “lucky” choices.
Or, if you take the machine learning approach, you just try all the combinations and use the one(s) that perform the best.
The world is not that simple. There are too many combinations to try. And you risk hitting local maxima, even if doing the gradient thing.
If you are capable of giving good directions…
I’m probably not arguing with you, and I’m not trying to regardless. You seem like you have tried this, watched it happen, gone “huh, neat!”, and then gotten it to take the next step in whatever you were doing in the first place, only to find out you didn’t provide adequate requirements for your config.
only to find out you didn’t provide adequate requirements for your config.
Every software development project, ever.
Review your requirements before starting development. Review them again after each phase of development. Address inadequacies, conflicts, ambiguities whenever you find them.
AI is actually helpful in this process - not so much in knowing what to do, but in pointing out the gaps and contradictions.
Well, yes, that is a central point.
I am a senior programmer. LLMs are amazing - I know exactly what I want, and I can ask for it and review it. My productivity has gone up at least 3-fold, with no decrease in quality, by using LLMs responsibly.
But it seems to me that some people on social media just can’t imagine using LLMs in this way. They just imagine that all LLM usage is vibe coding, using the output without understanding or review. Obviously you are very unlikely to create any fundamentally new solutions if you only use LLMs that way.
only to find out you didn’t provide adequate requirements for your config.
Senior programmer. I know exactly what I want. My requirements communicated to the LLM are precise and adequate.
What I find LLMs doing for my software development is filling in the gaps. Thorough, documented requirements coverage, unit test coverage, traceability - oh, you want a step-by-step test procedure covering every requirement? No problem. Installer scripts and instructions. Especially the stuff we NEVER did back in the late 1980s/early 1990s - LLMs are really good at all of that.
Nothing they produce seems 100% good to go on the first pass. It always benefits from / usually requires multiple refinements which are a combination of filling in missing specifications, clarifying specifications which have been misunderstood, and occasionally instructing it in precisely how something is expected to be done.
A year ago, I was frustrated by having to repeat these specific refinement instructions on every new phase of a project - the LLM coding systems have significantly improved since then, much better “MEMORY.md” and similar capturing the important things so they don’t need to be repeated ALL THE TIME.
On the other hand, they still have their limits and in a larger recent project I have had to constantly redirect the agents to stop hardcoding every solution and make the solution data driven from a database.
I was simply unable to convince Codex to split a patch into separate git commits in a meaningful way. There are things that just don’t work.
Still useful for lots of stuff. Just don’t use it blind.
Yes, this is why I point it out. I agree with you, but no part of this is actually common sense. It just feels like it.
That’s fair. I guess it could be no different than a scientist with some grand scheme handing his plans off to others to implement.
I think I was assuming that cutting edge AI research involves more math/theory than just… bootstrapping existing tech stacks and tweaking configs.