This is the technology worth trillions of dollars huh

  • 1rre@discuss.tchncs.de · +7/−19 · 6 hours ago

    A six-year-old can read and write Arabic, Chinese, Ge’ez, etc., and yet most people with PhD-level experience probably can’t, and it’s probably useless to them. LLMs can do this too. You can count the number of letters in a word, but so can a program written in a few hundred bytes of assembly. It’s completely pointless to make LLMs do that; it’d just make them far less efficient while adding nothing useful.
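    For scale, the letter-counting task really is a one-liner in ordinary code (a minimal sketch in Python rather than assembly, for brevity):

    ```python
    # Counting occurrences of a letter in a word needs no model at all.
    word = "strawberry"
    print(word.count("r"))  # prints 3
    ```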

    • skisnow@lemmy.ca · +13/−1 · 4 hours ago

      LOL, it seems like every time I get into a discussion with an AI evangelical, they invariably end up asking me to accept some really poor analogy that, much like an LLM’s output, looks superficially clever at first glance but doesn’t stand up to the slightest bit of scrutiny.

      • 1rre@discuss.tchncs.de · +1/−6 · 3 hours ago

        It’s more that the only way to convince an anti-AI crusader that there are some uses for it is to put it in an analogy they actually have to process, rather than have them spit out an “AI bad” kneejerk.

        I’m probably far more anti-AI than average: for 95% of what it’s pushed for, it’s completely useless. But that still leaves 5% it’s genuinely useful for, which some people refuse to accept.

        • abir_v@lemmy.world · +3 · 2 hours ago

          I feel this. In my line of work (programming, of course, like 80% of Lemmy users) I really don’t like using them for much of anything, because they get details wrong too often to be useful and I don’t like babysitting.

          But when I need a logging message, or to return an error, it’s genuinely a time saver. It’s good at pretty well 5%, as you say.

          But using it for art, math, problem solving, any of that kind of stuff that gets touted around by the business people? Useless, just fully fuckin useless.

        • TempermentalAnomaly@lemmy.world · +2 · 2 hours ago

          It’s amazing that if you acknowledge that:

          1. AI has some utility, and
          2. the (now tiresome and sloppy) tests they’re using don’t negate #1,

          you are now an AI evangelist. Just as importantly, #1 doesn’t justify the level of investment in AI. And when that realization hits business America, a correction will happen, and the people who will be affected aren’t the well-off but the average worker. The gains are for the few, the loss for the many.

        • Jomega@lemmy.world · +1 · 2 hours ago

          the only way to convince an anti-AI crusader that there are some uses for it

          Name three.

          • 1rre@discuss.tchncs.de · +1/−2 · 2 hours ago

            I’m going to limit this to LLMs, as that’s the generally accepted term, and there are so many uses for AI in other fields that it’d be unfair.

            1. Translation. LLMs are pretty much perfect for this.

            2. Triaging issues for support. They’re useless for coming to solutions, but they’re as good as humans at sending people to the correct department, without the wait.

            3. Finding and fixing grammar issues. Spelling can be caught by spell-checkers, but grammar is context-aware, exactly the kind of thing LLMs are designed for, and useful for people writing in a second language.

            4. Finding starting points for deeper research. LLMs have a lot of data about a lot of things, so they can be very useful for surface-level information, e.g. about areas in a city you’re visiting, explaining concepts in simple terms, etc.

            5. Recipes. LLMs are great at saying what sounds right, so for cooking (not so much baking, but it may work) they’re great at spitting out recipes, including substitutions if needed, that go together without needing to read through how someone’s grandmother used to do xyz unrelated nonsense.

            There’s a bunch more, but these were the first five that sprung to mind.

            • voronaam@lemmy.world · +1 · 31 minutes ago

              1. Translation. Only works for uniform technical texts. The older non-LLM translation is still better for any general text, and human translation is a must for any fiction. Case in point: try to translate the Severance TV show transcript into another language. The show makes heavy use of “Innie/Outie” language that does not exist in modern English. LLMs fail to translate that; a human translator would be able to find a proper pair of words in the target language.

              2. Triaging issues for support. This one is a double-edged sword. Sure, you can triage issues faster with an LLM, but other people can also write issues faster with their LLMs, and they’re winning that race. Overall, LLMs are a net negative on your triage cost as a business: while you can process each issue faster than before, you’re also getting a much higher volume of them.

              3. Grammar. It fails at that. I asked an LLM about “fascia treatment”, but of course I misspelled “fascia”. The “PhD-level” LLM failed to recognize the typo and gave me a long answer about different kinds of “facial treatment”, even though the mistake would’ve been obvious to any human. Meaning it only corrects grammar properly when the words it’s working on are simple and trivial.

              4. Starting points for deeper research. So was web search. No improvement there; exactly on par with the tech from two decades ago.

              5. Recipes. Oh, you stumbled upon one of my pet peeves! Recipes are generally in the gutter on the textual Internet now. Somehow a wrong recipe got into LLM training for a few things, and now those mistakes are multiplied all over the Internet! You would not know the mistakes unless you had cooked or baked the thing before. The recipe database was one of the early use cases for personal computers back in the 1990s, and it is one of the first to fall prey to “innovation”. The recipes online are so bad that you need an LLM to distill them back into manageable instructions. So in your example, LLMs are great at solving the problem they created in the first place! You would not need an LLM to get cooking instructions out of a 1990s database, but early text-generation AIs polluted this section of the Internet so much that you need the next generation of AI to unfuck it. Tech being great at solving the problem it created is not so great if you think about it.

            • Jomega@lemmy.world · +2/−1 · 53 minutes ago

              Right, except they suck at all of those things. Especially the last one. Unless you think glue is an acceptable pizza topping.

              • 1rre@discuss.tchncs.de · +1 · 45 minutes ago

                Nice, here’s a gold star for finding one case of it doing something wrong. I’ll call the CEO of AI and tell them to call it off, it’s a good thing humans have never said anything like that!

    • Echo Dot@feddit.uk · +3 · 3 hours ago

      So if the AI can’t do it, that’s just proof that the AI is too smart to do it? That’s your argument, is it? Nah, it’s just crap.

      You think that just because you attached it to an analogy, that makes it make sense. That’s not how it works. Look, I can do it too:

      My car is way too technologically sophisticated to be able to fly, therefore AI doesn’t need to be able to work out how many Rs are in “strawberry”.

      See how that made literally no sense whatsoever?

      • 1rre@discuss.tchncs.de · +1/−2 · 3 hours ago

        Except you’re expecting it to do everything. Your car is too “technically advanced” to walk on the sidewalk, but wait, you can do that anyway, and you don’t need to reinvent your legs.