
  • 1 Post
  • 321 Comments
Joined 2 years ago
Cake day: January 22nd, 2024


  • Let’s hope the winners are slightly less ghoulish than our Oil barons.

    What a foolish hope!

    $200,000,000,000 debt.
    Who will pay it?
    You talk like gravity doesn’t exist!

    You’re wrong if you think it won’t be heavily reliant on AI customers like software companies that spend five years removing code-writing skills from their workforce and building up technical debt in a codebase that no one has to understand during those five years. And there will be a lot of subtle, hard-to-spot bugs that got through code review, because humans simply don’t make those kinds of errors and no one ever had to spot one in their life before Claude came along.

    Did you think that enshittification wouldn’t affect the product? Yesterday’s computers and cars were easy to disassemble to replace parts. Now it’s much, much harder, and doing it commonly voids your warranty. Today’s AI-generated code is easy to tinker with, and you can do what you like with your end product. Why would it stay that way? Why wouldn’t they engineer it to make that harder? It’s not difficult to make code confusing just by changing variable names. I could fuck up your codebase for humans simply by swapping names like productSKU and customerID, let alone by writing deliberately obfuscated code for any purpose whatsoever and with whatever variable names I like.
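    The renaming trick described above can be sketched in a few lines. A toy illustration with invented names, nothing from a real codebase: both functions behave identically, but in the second the parameter named like a customer ID actually holds a product SKU, so only the human reader is misled.

```python
# Toy demonstration of obfuscation-by-renaming. Both functions compute
# the same thing; the second one lies to the reader, not the machine.

def price_for(product_sku, price_table):
    # Honest name: intent is obvious at a glance.
    return price_table[product_sku]

def price_for_obfuscated(customer_id, price_table):
    # 'customer_id' really holds a product SKU -- the name lies, so a
    # reviewer now has to trace every call site to be sure.
    return price_table[customer_id]

price_table = {"SKU-001": 9.99}
print(price_for("SKU-001", price_table))             # 9.99
print(price_for_obfuscated("SKU-001", price_table))  # 9.99 -- same result, worse codebase
```

    The program is unchanged; only the humans suffer, which is exactly why no compiler or test suite will catch it.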

    Some software companies are outsourcing their talent to AI behemoths with mountains of debt to recoup. Guess who’s going to pay the debt! And what’s the point of such a company in the long run? Why are you speedrunning paying to replace yourself?

    There will be an AI crash and a “consolidation”, meaning a switch to monopolies or near-monopolies. Some companies are shedding institutional knowledge and programming skill like it was waste water. Once dependence sets in, value extraction will follow it like disease follows infection in the unvaccinated.

    There is already $200bn in debt, and it’s growing rapidly. The shareholders aren’t going to be paying it. The AI customers are.


  • There was an article a while ago explaining that most AI companies are running at a 95% loss. You know, spending 100 and receiving 5. All that debt means the price of AI is about 20 times lower than it needs to be just to break even. The software teams that came to rely on AI to save costs will soon enough find themselves on the hook for this mountain of debt. Enshittification is real. Enshittification is coming. AI will not stay cheap, convenient and free of advertising.
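    The arithmetic above is worth spelling out. A back-of-envelope sketch, where the 100-in, 5-out figures are the illustration from that article, not audited accounts:

```python
# Back-of-envelope break-even arithmetic for a business that spends
# 100 to earn 5. Numbers are illustrative, not audited figures.
cost = 100     # what it costs to serve
revenue = 5    # what customers currently pay

loss_fraction = (cost - revenue) / cost   # the "95% loss"
break_even_multiplier = cost / revenue    # how much prices must rise just to break even

print(loss_fraction)           # 0.95
print(break_even_multiplier)   # 20.0
```

    In other words, merely breaking even (never mind repaying the debt already accrued) implies prices roughly twenty times today’s.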


  • How else are they going to begin to recoup their billions and billions of debt? Someone has to pay for all those data centres, all that hardware, all that power, etc etc etc. It will be the companies that have come to rely on AI.

    Sure, for now, AI is a lot cheaper than an intern, but it doesn’t become an expert like a human does. And Amazon used to be cheaper than other retailers right up until they had achieved vast market share.

    This cannot be the last 10x price multiplier they pull. Not even close. Firstly they’re way, way, way off from recouping their costs, and secondly, they’re still way, way off market value for an incompetent human intern who isn’t learning much.

    Uber didn’t enter the market to open up taxi work to new drivers and bring down prices; that was marketing. They entered the market to take a cut of every taxi fare in the world, and to drive prices at peak times up to many times the agreed fares, especially in regulated areas.

    Similarly, AI didn’t enter the coding market to drive down prices and enable greater access for folk to generate code. They entered the coding market to receive the wages of programmers and drive up prices in in-demand fields. They are not unaware of how much companies pay devs. Why else would they have spent all those billions in advance? Where is the payback coming from?




  • AI has the same relationship to truth and trust that former British Prime Minister Boris Johnson has: truth is absolutely not part of the equation whatsoever, except in as much as it may be necessary to say some true things in order to establish trust.

    Giving such an entity executive power is to ignore a vast and ever-growing body of evidence that you ought not to trust it, yet here you are, handing over the keys to the plausible-sounding nonsense monger.

    What about Nigel Farage? Well of course, he’s an absolute liar, a man who chose the dark side decades ago, who knows he’s in the wrong but strangely thinks it’s somehow bad to try to do good.

    Boris isn’t a liar or evil in the conventional sense, it’s just that he absolutely wouldn’t dream of letting whether something is true or good be part of decision-making any more than he would lock himself in a cage in public for the week, consult with ants about his route to work or hop on one leg all the time.

    So it is with AI.







  • You’ll be the 4753rd guy with the “oops, my LLM trashed my setup and disobeyed my explicit rules for keeping it in check” story.

    You know programmers who use LLMs believe they’re much more productive because they keep getting that dopamine hit, but when you actually measure it, they’re slower by about 20%.

    You appointed yourself boss over a fast and plausible intern who pastes and edits a LOT of Stack Overflow code, but never really understands it and is absolutely incapable of learning. You either spend almost all of your time in code review now for your stupid sycophantic LLM interns, who always tell you you’re right but never learn from you, or you’re checking vast quantities of shit into your projects.

    You know really subtle, hard-to-find bugs in rare cases that pass your CI every single time? Or ones that no one in their right mind would have made, yet they compile and look right at first glance? They’re now your main type of bug. You are rotting your projects with your random number generator.
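    The compiles-and-looks-right species of bug described above is easy to sketch. A contrived example, not taken from any real incident: the boundary check below is subtly wrong, reads fine at a glance, and passes any CI suite that never feeds it the rare case.

```python
# A "looks right at first glance" bug: the guard is subtly inverted,
# so it never protects the division it was written to protect.

def mean(values):
    if len(values) >= 0:   # bug: should be > 0; len() is never negative,
                           # so this branch is ALWAYS taken
        return sum(values) / len(values)   # ZeroDivisionError on []
    return 0.0

# The cases a typical CI suite actually runs all pass:
print(mean([2.0, 4.0]))   # 3.0
print(mean([10.0]))       # 10.0
# The rare case nobody wrote a test for crashes in production:
# mean([])  -> ZeroDivisionError
```

    A human who typed that guard by hand would usually notice the impossible condition; a reviewer skimming generated code sees a plausible-looking safety check and moves on.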

    And you think that all the money you’re paying for your blagging LLMs protects you from them fucking everything up for you. But it doesn’t. And you’ll also find that your contract with your LLM supplier expressly excludes them from any liability whatsoever arising from your use of it, and instead pre-blames you for trusting it.


  • You: “no its just the free models…”

    You: “You just have to be aware… when using a cheap model”

    Me: So, just the cheap ones.

    You: “I never said that.”

    Ohhhhhhhhh OK, yes, of course you never said or implied that. Not your repeated message at all. And yet you can’t keep away from addressing your criticism towards free or cheap LLMs! It’s like your subtext, your underlying belief, is that if you just pay big tech enough money and they build a big enough set of server farms, it’ll be OK. No, it will not be OK, and the enshittification has begun from an already shitty base point.

    All LLMs are shit; the cheap and free ones are just easier to spot generating shit, if you ask them about things you know about. But you have to accept that they’re ALL shit and STOP making get-out clauses for the expensive ones by firing your criticisms exclusively at the cheap or free ones.

    Giving ANY LLM executive power over your data is A BIG MISTAKE, because you’re putting your data under the control of something that operates, at its heart, as a random number generator. They’re trained to sound right. People trust them because they sound right. This is a fundamental error.
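    The “random number generator trained to sound right” point above can be reduced to a toy model: next-token generation is weighted random choice over plausible-sounding continuations, and truth is simply not a variable anywhere in the process. The candidate tokens and weights below are invented for illustration.

```python
import random

# Toy model of next-token sampling: a weighted random draw over
# continuations ranked by plausibility, not by truth. "Atlantis" is
# in the pool purely because it *sounds* like an answer.
plausible_next = {"Paris": 0.6, "Lyon": 0.25, "Atlantis": 0.15}

random.seed(42)  # deterministic only for this demo
token = random.choices(
    population=list(plausible_next),
    weights=list(plausible_next.values()),
)[0]
print(token)  # one of the three, chosen by weight -- truth never consulted
```

    Real models are vastly more sophisticated at estimating plausibility, but the output is still a sample from a distribution, which is why a confident-sounding wrong answer is a normal result rather than a malfunction.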