When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe also was relying in part on AI-generated content to train Firefly, including from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition due to its training data, Adobe never made clear that its model actually used images from some of these same competitors.

  • CosmoNova@lemmy.world · 9 months ago

    I said it around 2 years ago when the term “ethical” was first coined by the media when talking about AI. Ethical in this context just means those who own data centers and made a huge effort to extract and process user data (Facebook, Google, Amazon, etc.) have all the cards. Never mind the technology being so new that users couldn’t possibly have consented to it years ago. They just update their TOS and get that consent retroactively while lawmakers are absent, happily watching their stocks go up.

    • Grimy@lemmy.world · 9 months ago
      It’s really frustrating to see people get riled up and manipulated into thinking that legislating to outlaw anything “unethical” is in their interest.

      It’s a fantasy to think individual creators will get a slice of the pie and not just the data brokers. It’s also a convenient way to destroy the competition.

      People are getting emotional, and they are going to use that to build one of the grossest monopolies ever seen.