I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Primarily0617@kbin.social · 2 years ago

    it’s crazy that “it’s too hard :(” has become an acceptable justification in tech circles for just ignoring the law

    • BrianTheeBiscuiteer@lemmy.world · 2 years ago

      I’m not an AI expert, and I wouldn’t say it’s too hard, but I believe removing a specific piece of data from a model is like trying to remove excess salt from a stew. You can add things to make the stew less salty, but you can’t really remove the salt.

      The alternative, which is a lot of effort but boo-hoo for big tech, is to throw out the model and start over without the data in question. These companies would do well to start with models built on public or royalty-free data and then add riskier data on top of that (so you only have to rebake starting from the “public” version).
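
      As a rough sketch of that “rebake from the public version” idea, assuming a PyTorch-style setup (the toy model, the synthetic datasets, and the checkpoint file name are all hypothetical stand-ins, not anything these companies actually run):

      ```python
      # Toy sketch of staged training: a base model trained only on public /
      # royalty-free data is checkpointed, then riskier data is added in a
      # second stage. A removal request against the risky data means
      # re-running only stage 2 from the saved base, not a full rebuild.
      import torch
      from torch import nn
      from torch.utils.data import DataLoader, TensorDataset

      def train(model: nn.Module, loader: DataLoader, epochs: int) -> None:
          """Generic training loop shared by both stages."""
          opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
          loss_fn = nn.CrossEntropyLoss()
          model.train()
          for _ in range(epochs):
              for x, y in loader:
                  opt.zero_grad()
                  loss_fn(model(x), y).backward()
                  opt.step()

      model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

      # Stage 1: public data only. This checkpoint never has to change
      # when a removal request comes in.
      public_ds = TensorDataset(torch.randn(256, 16), torch.randint(0, 4, (256,)))
      train(model, DataLoader(public_ds, batch_size=32), epochs=3)
      torch.save(model.state_dict(), "base_public.pt")

      # Stage 2: fine-tune on riskier data. On a removal request, drop the
      # revoked rows and repeat only this stage from the saved base.
      risky_x = torch.randn(128, 16)
      risky_y = torch.randint(0, 4, (128,))
      keep = torch.ones(128, dtype=torch.bool)
      keep[:10] = False  # pretend the first 10 rows were revoked

      model.load_state_dict(torch.load("base_public.pt"))
      risky_ds = TensorDataset(risky_x[keep], risky_y[keep])
      train(model, DataLoader(risky_ds, batch_size=32), epochs=1)
      ```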

      • Grandwolf319@sh.itjust.works · 2 years ago

        Replace salt with poison or an allergenic substance and it fully holds. If a batch has been contaminated, then yes, you should try again.

        But now that the cat is out of the bag, other companies are less willing to let their data be scraped, given how valuable it can be.

        I think big tech knew this: they could only build these models on unfiltered data before the AI craze took hold.

      • GoosLife@lemmy.world · 2 years ago

        If there’s something illegal in your dish, you throw it out. It’s not a question. I don’t care that you spent a lot of time and money on it. “I spent a lot of time preparing the circumstances leading to this crime” is not an excuse, and neither is “if I have to face consequences for committing this crime, I might lose money”.

  • DigitalWebSlinger@lemmy.world · 2 years ago

    “AI model unlearning” is the equivalent of saying “removing a specific feature from a compiled binary executable”. So, yeah, basically not feasible.

    But the solution is painfully easy: you remove the data from your training set (i.e., the source code) and re-train your model (recompile the executable).
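
    A minimal sketch of what that could look like on the data side (the file names, record schema, and revoked IDs here are hypothetical):

    ```python
    # Sketch of the "fix the source, recompile" step: drop revoked records
    # from the training corpus, then retrain from scratch on the clean copy.
    import json

    REVOKED_IDS = {"user-123", "user-456"}  # e.g. erasure requests

    with open("corpus.jsonl") as src, open("corpus_clean.jsonl", "w") as dst:
        for line in src:
            record = json.loads(line)
            if record["subject_id"] not in REVOKED_IDS:
                dst.write(line)

    # The expensive part comes after: a full retrain on corpus_clean.jsonl,
    # exactly like recompiling after editing the source.
    ```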

    Yes, it may cost you a lot of time and money to accomplish this, but such are the consequences of breaking the law. Maybe be extra careful about obeying laws going forward, eh?