• Epzillon@lemmy.world · 22 hours ago

    Are you deadass saying we should let ChatGPT itself and the companies that ship it form their own safety guidelines? Because that went really well with the Church Rock incident…

    • Electricd@lemmybefree.net · 21 hours ago

      If they don’t, then lawsuits will come their way, so they will put some in place

      But having some laws isn’t necessarily bad, I just don’t trust countries to do a good job at it, knowing how tech-illiterate they are

      • Epzillon@lemmy.world · 20 hours ago

        What do you even mean? You are contradicting yourself. “We shouldn’t blame AI or the companies because they can’t be controlled,” but the companies and AI itself are supposed to handle the safety regulations? What type of regulations do you seriously expect them to restrict themselves with if they know there is no way they can guarantee safety? The legislation must come from outside the business and restrict the industry from releasing half-baked ass-garbage that is potentially harmful to the public.

        • Electricd@lemmybefree.net · 19 hours ago

          What I meant is:

          You can’t expect LLMs not to do that, because that isn’t technically possible at the moment

          Companies should display warnings and add some safeguards to reduce how often this happens