• simple@lemm.ee · 4 months ago

    AI was a promise more than anything. When ChatGPT came out, all the AI companies and startups promised exponential improvements that would chaaangeee the woooooorrlllddd.

    Two years later it’s becoming insanely clear they’ve hit a wall, and there isn’t going to be much change unless someone makes a miraculous discovery. All of that money was dumped in just to make bigger models that are 0.1% better than the last one. I’m honestly surprised the bubble hasn’t popped yet; it’s obvious we’re going nowhere with this.

    • bluGill@kbin.run · 4 months ago

      AI has been pulling that trick since the 1950s. A lot of useful things have come out of AI research, but once something succeeds it stops being called AI, and it has never lived up to the early hype. Some people in the know about all those previous waves were surprised by the hype but not surprised about where it has gone, while others pushed the hype.

      The details have changed, but nothing else has.

      • bionicjoey@lemmy.ca · 4 months ago

        Yeah, the only innovation here is that OpenAI had the balls to use the entire internet as a training set. The underlying algorithms aren’t really new, and their limitations have been understood by data scientists, computer scientists, and mathematicians for a long time.

        • Frozengyro@lemmy.world · 4 months ago

          So now it just has to use every conversation that happens as a data set. They could use microphones from all over the world to listen and learn and understand better…

      • rottingleaf@lemmy.zip · 4 months ago

        Lisp machines were cool. They could bring back that kind of “AI” right now; I want to have one.

        I mean, how cool would it be to have hardware acceleration of typical Lisp operations and a whole operating system built in Lisp?

        Maybe resurrecting Genera is too much, but we could make do with porting Emacs.

      • rottingleaf@lemmy.world · 4 months ago

        (Repeating myself due to being banned from my previous instance for offering to solve a problem with nukes)

        Bring back Lisp machines. I like what was called AI when they were being made.

    • henrikx@lemmy.dbzer0.com · 4 months ago

      You should all read the story of the invention of blue LEDs. No one believed that it could work except some Japanese guy (Shuji Nakamura), who kept working on it despite his company telling him to stop. No one believed it could ever be solved, despite being so close. He solved it, and the rewards were astronomical.

      This could very well be another case of being that close to a breakthrough. Two years since ChatGPT came out is nothing. If you were paying any sort of attention, you would see promising papers coming out almost every week. It’s clear there is a lot we don’t know about training neural nets effectively. Our own brains are proof of that.

      • zbyte64@awful.systems · 4 months ago

        I mean, if you ignore all the papers that point out how dubious the gen-AI benchmarks are, then it is very impressive.

      • cley_faye@lemmy.world · 4 months ago

        No one believed that it could work except some Japanese guy

        There is a difference between not knowing how to do a thing and then someone coming along and doing it, versus knowing how something works, knowing its by-design limitations, and still hoping it may work out.

      • raspberriesareyummy@lemmy.world · 4 months ago

        Mwahahah. The people working on LLMs right now are the dumbasses and MBAs of the industry. If we ever get anything like artificial general intelligence, it will come from a team of serious researchers/engineers who don’t give a shit about marketing.

      • bamboo@lemm.ee · 4 months ago

        There are millions of people devoting huge amounts of time and energy to improving AI capabilities, publishing paper after paper finding new ways to improve models, training, etc. Perhaps some companies are using AI hype to get free money, but that doesn’t discredit the hard work of others.

        • raspberriesareyummy@lemmy.world · 4 months ago

          There are millions of people devoting huge amounts of time and energy to improving AI capabilities,

          millions of students who bought into the marketing bullshit, you mean.

        • henrikx@lemmy.dbzer0.com · 4 months ago

          Can’t believe you’re getting downvoted for saying that. No worries though, the haters will all be proven wrong eventually.