AI has become invasively popular, and I’ve seen more evidence of its ineffectiveness than otherwise. But what I dislike most about it is that many services run on datasets of stolen data for the sake of profitability, à la OpenAI and DeepSeek:

https://mashable.com/article/openai-chatgpt-class-action-lawsuit
https://petapixel.com/2025/01/30/openai-claims-deepseek-took-all-of-its-data-without-consent/

Are there any AI services that run on ethically obtained datasets, like stuff people explicitly consented to submitting (not as some side clause of a T&C), data bought by properly compensating the data’s original owners, or datasets contributed by the service providers themselves?

    • Treczoks@lemmy.world · 1 day ago

      There are no legal sources big enough to train an AI on the level required to even perform basic interaction.

      • AmbitiousProcess (they/them)@piefed.social · 23 hours ago

        This is very true.

        I was part of the OpenAssistant project, voluntarily submitting my personal writing to train open-source LLMs without having to steal data, in the hopes it would stop these companies from stealing people’s work and make “AI” less of a black box.

        After thousands of people had submitted millions of prompt-response pairs, and even though some researchers called it one of the highest-quality natural language datasets they’d seen in a while, a base model trained only on it was almost always incoherent. You only got a functioning model if you used the data to fine-tune an existing larger model (Llama, at the time).

  • partial_accumen@lemmy.world · 1 day ago

    Are there any AI services that don’t work on stolen data?

    Yes, absolutely, but I don’t think that’s the question you want answered. AI is used in plenty of companies and hobby projects where the problem to be solved is so specific that other people’s stolen data wouldn’t help you anyway.

    Let’s say you’re a company that sells items at retail online, like a Walmart or an Amazon. You want an AI model to help your workers select the best box size to pack various items in for shipment to customers. You would input your past shipment data, including the dimensions of the products you sell (so that data isn’t stolen), and the sizes of the boxes you have (they’re your boxes, so also not stolen). You could then train a simple classification model on that history. The next time you have a set of items to ship, you’d input those items and the model would tell you the best box size to use. No stolen data in any of this.
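    A minimal sketch of that idea, with entirely made-up shipment history and box names (everything here is hypothetical; a real setup would use a proper ML library, but even a tiny nearest-neighbour classifier shows the point):

    ```python
    # Hypothetical sketch: suggest a box size using only the retailer's
    # own past shipments -- a tiny 1-nearest-neighbour classifier.
    import math

    # Invented history: (length, width, height in cm) -> box that was used
    history = [
        ((10, 8, 4), "small"), ((12, 9, 5), "small"),
        ((30, 20, 10), "medium"), ((28, 22, 12), "medium"),
        ((60, 40, 30), "large"), ((55, 45, 28), "large"),
    ]

    def suggest_box(dims):
        """Return the box used for the most similar past shipment."""
        return min(history, key=lambda h: math.dist(h[0], dims))[1]

    print(suggest_box((11, 8, 5)))    # -> small
    print(suggest_box((58, 42, 29)))  # -> large
    ```

    Every training example comes from the company’s own records, which is exactly why no scraped data is needed.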

    Now, the question I think you’re asking is actually:

    “Are there any LLM AI chatbot services that don’t work on stolen data?”

    That answer, I don’t know. Most of the models available for setting up a chatbot are pretrained by the vendor, and you simply add your own data to make them knowledgeable on specific niche subjects.

  • Pamasich@kbin.earth · 22 hours ago

    Switzerland announced a new LLM project which might be of interest here.

    Here’s a German article on it. If you’re okay with a Reddit link, here’s a translation.

    Some points on it:

    • fully open source in its entirety: source code, model weights, and training data will all be publicly released
    • licensed under Apache 2.0
    • compliant with Swiss data protection law, copyright law, and the EU AI Act
    • respects crawler opt-outs on websites

    While nothing there explicitly says the data is ethically sourced, we’ll be able to tell from the open-source training data, and I assume copyright law takes care of things like books being used (though I don’t know whether the AI has a way to determine the license of web content, or if it relies entirely on opt-outs there).

  • TriflingToad@sh.itjust.works · 19 hours ago

    iirc the AI in Adobe Photoshop is trained only on the stock images they have the rights to

    could be wrong tho, I don’t use Adobe

  • razorcandy@discuss.tchncs.de · 1 day ago

    Some machine learning models are trained on what’s called synthetic data, which is generated specifically for that purpose and mimics real-world data. What I don’t know is how much of the data used is synthetic vs. stolen.
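    A toy illustration of what “synthetic data” means in practice: records generated from a made-up schema, mimicking the shape of real-world data without containing any real person’s information (the field names and value ranges here are invented for the example):

    ```python
    # Toy synthetic-data generator: every record is invented, so nothing
    # here originates from a real user.
    import random

    random.seed(0)  # reproducible output for the example

    def synthetic_purchase():
        """One fabricated purchase record; schema and ranges are made up."""
        return {
            "age": random.randint(18, 80),
            "basket_total": round(random.uniform(5.0, 300.0), 2),
            "items": random.randint(1, 20),
        }

    dataset = [synthetic_purchase() for _ in range(1000)]
    print(len(dataset))  # 1000 records, none of them real
    ```

    Real synthetic-data pipelines are more sophisticated (they try to match the statistical distribution of genuine data), but the principle is the same: the training set is manufactured, not collected.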

  • Platypus@sh.itjust.works · 24 hours ago

    Getty Images has an image generator trained exclusively on licensed images. I’m not aware of any text generators that do the same.

  • valek879@sh.itjust.works · 1 day ago

    I heard about NotebookLM recently. I couldn’t tell you what it’s trained on, but in order to use the LLM you need to provide it source material.

    So say you’re writing something for school. You can gather 50+ papers on the subject, upload them, then ask the LLM about what you uploaded. It turns research from a search for information into an interview with an “expert.”

    Again I can’t speak to how it was trained in the background but this seems genuinely useful.