AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. You have corporations shoving AI into just about everything, treating it like it’s a cure for cancer, and that really rubs people the wrong way. Then, on a more societal level, you’ve got everything from people who make art with AI and still credit themselves as artists, to people who treat AI like a therapist even though that’s not advised.

However, I’ve found some benefits with AI. For example, I’ve been chatting with ChatGPT about credit cards, because it’s something I’m leaning towards getting into. It’s helped me understand them better than most people who’ve tried to explain them to me, simply because it gives me a streamlined answer instead of beating around the bush.

  • thatsTheCatch@lemmy.nz · ↑4 · 5 days ago

    Most of my qualms with AI aren’t in the usage of AI, but in its creation (water usage, mass layoffs, etc.—you’ve heard it all before).

    To me it’s like asking “What are some good uses for slaves?” (An extreme example to show the point, I’m not trying to say AI is the same as slavery).

    Like yeah I could find good uses for it, but should it exist in the first place?

  • sicktriple@lemmy.ml · ↑25 · 7 days ago

    The technology itself is novel and cool. It’s the complete and utter meltdown of all tech companies into brainless hype machines that is harmful, which of course is a function of capitalist incentives and the need for the tech industry to come out with some new paradigm-shifting innovation every decade. A normal, healthy society would have been able to leverage machine learning and LLM technology where it’s most useful, like parsing large amounts of data, or running a local instance on your computer to ask a few questions, etc. We wouldn’t see LLMs in every text editor, pencil case and pair of sneakers, but these snake oil salesmen who run the US economy are absolutely desperate for a new paradigm shift so they can keep making exponentially more money.

    The thing is, we don’t need to build these datacenters siphoning comically evil amounts of energy from the grid and making personal compute a thing of the past. The average everyday person doesn’t need cloud compute; they can run a local 4B-parameter (very, very small) model on their laptop or phone if they need to ask ChatGPT to make them a workout routine or tell them who won the 1918 World Series. But these fucking cretins don’t care, that’s not the point; they are in this because it’s a golden ticket to growth city, and once they cash their check they don’t give one hot fuck about the human-spirit-stealing machine they built.
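
    For scale, here’s roughly what that looks like: a minimal sketch assuming a local Ollama install with a small model already pulled (the model name and the question are just examples):

    ```python
    # Minimal sketch: ask a small local model a question, no cloud involved.
    # Assumes an Ollama server is running locally and the `ollama` Python
    # client is installed; the model name is a placeholder example.
    import ollama

    response = ollama.chat(
        model="llama3.2:3b",  # a few billion parameters, fine on a recent laptop
        messages=[{"role": "user", "content": "Who won the 1918 World Series?"}],
    )
    print(response["message"]["content"])
    ```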

    TLDR: our society is broken and that’s why we keep getting the shittiest, lowest-common-denominator version of everything. Everything has to suck by definition because that’s the only version that the system we built will allow.

  • TrackinDaKraken@lemmy.world · ↑14 ↓1 · 6 days ago

    For every small benefit, there are disastrous mistakes. We shouldn’t discuss one without the other:

    https://tech.co/news/list-ai-failures-mistakes-errors

    March 2026

    • Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she’s never visited

    February 2026

    • Health advice given by AI chatbots is frequently wrong, says new study

    January 2026

    • Study reveals that fixing AI mistakes takes up to 40% of the time that it saves

    • An AI tool used by ICE to identify applicants with previous law enforcement experience falsely flagged applicants with no such experience, leading to the placement of unqualified recruits in field offices.

    December 2025

    • AI mistakes clarinet for gun at Florida school

    November 2025

    • Google Antigravity deletes the entire contents of user’s computer drive

    • Report finds AI hallucinations in 490 court filings from the past six months

    October 2025

    • Teenager handcuffed after AI mistakes Doritos packet for gun

    • Lawyer submits AI-assisted court filing with fake citations

    • Man follows ChatGPT advice on cutting salt out of his diet, develops rare condition. The man was hospitalized, sectioned, and eventually treated for psychosis; he tried to escape the hospital within 24 hours of being admitted.

    • ChatGPT-5 jailbroken within 24 hours of release

    July 2025

    • AI Coding app deletes entire company database

    • McDonald’s AI chatbot error exposes data of 64 million job applicants

    • AI program is tasked with running a small shop, goes insane, claims to be human

    • Apple Intelligence falsely presents BBC headline

    … and it just keeps going.

  • seahag@lemmy.world · ↑27 ↓3 · 7 days ago

    AI has uses in the medical, scientific, and disabled communities. I’ve seen it helping blind people with shopping, with Google glasses or whatever reporting what they’ve picked up and describing it to them. It can also identify/predict cancer tissue early.

    Generative AI is peak laziness and the death of human creativity. Using AI for companionship has a nasty effect on mental health.

    AI should have only ever been an assistant in medical/scientific research in my opinion, simply because it’s so damaging to the environment, economy, and society.

    • iByteABit@lemmy.ml · ↑3 · 6 days ago

      > It can also identify/predict cancer tissue early.

      Do you mean an LLM or a machine learning model specifically trained for this?

      • Paragone@lemmy.world · ↑2 ↓2 · 6 days ago

        Different case, obviously, but I remember reading about an AI which can identify a pending heart attack from X-rays… and nobody could figure out what the hell it was judging from…

        THAT is brilliant.

        Specialized to the degree that it is trustworthy.

        I’d be surprised if humans could compete against a properly built set of AIs that worked through all possible diagnostic reasoning in the correct order.

        Democratizing accurate diagnosis would be THE medical revolution the world needs right now.

        _ /\ _

  • ☂️-@lemmy.ml · ↑9 · 6 days ago

    Translation is pretty good.

    They want to make AI NPCs in games, which could be awesome if we can ever reduce the system requirements for running them.
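
    To give a sense of the plumbing involved, here is a toy sketch of an NPC backed by a small local model (assuming an Ollama-style local server; the model name and persona are made-up examples):

    ```python
    # Toy NPC dialogue loop backed by a small local model.
    # Assumes a local Ollama server and the `ollama` Python client are
    # installed; the model name and persona are placeholder examples.
    import ollama

    PERSONA = (
        "You are Mira, a grumpy blacksmith NPC in a fantasy village. "
        "Stay in character, answer in one or two short sentences, and "
        "never mention that you are an AI."
    )

    history = [{"role": "system", "content": PERSONA}]

    while True:
        player = input("You: ")
        if not player:
            break
        history.append({"role": "user", "content": player})
        reply = ollama.chat(model="llama3.2:3b", messages=history)
        text = reply["message"]["content"]
        history.append({"role": "assistant", "content": text})
        print(f"Mira: {text}")
    ```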

    • Random Dent@lemmy.ml · ↑3 · 5 days ago

      I tried out a game/demo thing that was a tester for AI NPC dialogue. I asked an NPC to tell me about himself and he replied that he could not connect to server lol

    • sangeteria@lemmy.ml · ↑3 · 6 days ago

      There’s that one silly vampire game which uses AI NPCs; it looks kind of fun from what I’ve seen of people playing it.

  • logos@sh.itjust.works · ↑20 ↓1 · 7 days ago

    I have a friend at work who does a lot of video work. He films weddings, music videos, etc., and is making a pilot for Netflix. He uses AI to go through all his footage and tag it according to content, e.g. if he needs a clip of birds, he can just search ‘birds’ and it will pull up all the relevant footage. Incredibly useful.

  • racoon@lemmy.ml · ↑4 · 5 days ago

    Converting PDFs into HTML or RTF/TXT documents without OCR typos. Until recently, it was almost impossible to turn a scanned book from PDF into DOC or TXT, because the output of copying and pasting or converting with PDF tools was illegible. AI can now do a “deep AI seek” (look it up) into the texts.
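
    For anyone wanting to try it, one rough approach is to render each scanned page to an image and have a vision-capable model transcribe it instead of relying on classical OCR. A sketch assuming the `pdf2image` library (which needs poppler installed) and the OpenAI client; the model name is just an example:

    ```python
    # Rough sketch: transcribe a scanned PDF page with a vision-capable LLM
    # instead of classical OCR. Assumes `pdf2image` (plus poppler) and the
    # `openai` client are installed; the model name is an example.
    import base64
    import io

    from pdf2image import convert_from_path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    pages = convert_from_path("scanned_book.pdf", dpi=300)

    for i, page in enumerate(pages, start=1):
        buf = io.BytesIO()
        page.save(buf, format="PNG")
        b64 = base64.b64encode(buf.getvalue()).decode()

        result = client.chat.completions.create(
            model="gpt-4o-mini",  # any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Transcribe this page exactly, as plain text."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        print(f"--- page {i} ---")
        print(result.choices[0].message.content)
    ```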

    I am converting a textbook into an audiobook in HTML (paragraph highlighting with manual sync), with an integrated popup glossary for every word (with grammar and meaning) and a dictionary lookup on click.

    In addition, as an appendix to each chapter, I add all the explanations from the book.

    I took the ~4,500 words of the book and asked for a grammar analysis and meaning lookup to create a glossary. The AI joyfully skipped many terms, but that is something I will fix when each chapter is finished. Now I am being punished with waiting despite having paid $20.

  • WolfLink@sh.itjust.works · ↑3 · 5 days ago

    LLMs tend to be a “jack of all trades, master of none”. You are likely to find them useful for help with something you are inexperienced at, but not with something you are an expert in. Because they lie a lot, it’s best to double-check the information they give you, but an LLM can still be helpful with the “you don’t know what you don’t know” problem.

  • Techlos@lemmy.dbzer0.com · ↑5 · 6 days ago

    Curating massive music libraries. I’ve been using a small embedding model to organise my music for DJing, and being able to generate a t-SNE plot clustered on perceptual similarity has been wonderfully useful.
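
    If anyone wants to replicate the plot, the plotting half is only a few lines; a minimal sketch assuming you’ve already computed one embedding vector per track with whatever audio model you like (file names and the perplexity value are examples):

    ```python
    # Minimal sketch: project per-track audio embeddings to 2D with t-SNE
    # and plot them, so perceptually similar tracks cluster together.
    # Assumes embeddings were already computed by some audio embedding model.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    embeddings = np.load("track_embeddings.npy")          # shape: (n_tracks, dim)
    titles = open("track_titles.txt").read().splitlines()

    coords = TSNE(n_components=2, perplexity=30, init="pca",
                  random_state=0).fit_transform(embeddings)

    plt.figure(figsize=(10, 8))
    plt.scatter(coords[:, 0], coords[:, 1], s=12)
    for (x, y), title in zip(coords, titles):
        plt.annotate(title, (x, y), fontsize=6, alpha=0.7)
    plt.title("Music library, t-SNE on audio embeddings")
    plt.tight_layout()
    plt.show()
    ```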

    I’ve also found CLIP models useful for searching videos: just embed a screenshot every couple of minutes of footage and query with a description of the scene.
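
    Roughly what the video search looks like in code, a sketch assuming the Hugging Face `transformers` CLIP model plus OpenCV for grabbing frames (the sampling interval and the query are examples):

    ```python
    # Sketch: index a video by embedding one frame every few minutes with CLIP,
    # then rank frames against a text query. Assumes `transformers`, `torch`,
    # `opencv-python` and `Pillow`; the sampling interval is an example.
    import cv2
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    cap = cv2.VideoCapture("footage.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = max(1, int(fps * 120))  # one frame every ~2 minutes

    frames, timestamps = [], []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
            timestamps.append(idx / fps)
        idx += 1
    cap.release()

    inputs = processor(text=["a flock of birds in the sky"], images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    scores = out.logits_per_text[0]   # similarity of the query to each frame
    best = scores.argmax().item()
    print(f"Best match around {timestamps[best]:.0f}s")
    ```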

    And as bad as generated subtitles can be, when the only other option is nothing at all they are pretty nice to have.
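
    Generating them locally has also become pretty simple; a sketch assuming the open-source `openai-whisper` package and ffmpeg are installed (model size and file names are examples):

    ```python
    # Sketch: generate subtitles locally with the open-source Whisper model.
    # Assumes `openai-whisper` is installed (plus ffmpeg on the system);
    # model size and file names are examples.
    import whisper

    def ts(t: float) -> str:
        """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        ms = int((t - int(t)) * 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    model = whisper.load_model("small")
    result = model.transcribe("episode.mkv")

    # Write a very basic .srt from the timed segments.
    with open("episode.srt", "w", encoding="utf-8") as srt:
        for i, seg in enumerate(result["segments"], start=1):
            srt.write(f"{i}\n{ts(seg['start'])} --> {ts(seg['end'])}\n")
            srt.write(seg["text"].strip() + "\n\n")
    ```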

  • Lumidaub@feddit.org · ↑13 ↓1 · 7 days ago

    If we’re strictly talking about LLMs: Certain accessibility services - MAYBE. Writing closed captions / transcription for the most part requires little “human” touch. If we ASSUME that AI will be able to do it reliably one day - because it really can’t yet - that’s one thing that would benefit society.

    Image descriptions are another thing I might see done by AI one day, but that still requires an understanding of what’s actually important about the image.

    • lepinkainen@lemmy.world · ↑1 ↓1 · 6 days ago

      I built a system that translates subtitles from English to my native language and it beats cheap-ass “official” translations 9/10

      It even gets colloquial terms and phrases right, adapting to the correct song for example - something a human translator working for minimum pay usually won’t bother to do.
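
      A stripped-down sketch of that kind of pipeline, assuming the `pysrt` library and an Ollama-style local model (the model name, target language and file names are examples; a real version would batch lines so the model sees surrounding context):

      ```python
      # Stripped-down sketch: translate an .srt file line by line with an LLM.
      # Assumes `pysrt` and the `ollama` client; the model name, languages and
      # file names are examples. A real version would batch lines for context.
      import ollama
      import pysrt

      subs = pysrt.open("movie.en.srt")
      for item in subs:
          reply = ollama.chat(
              model="llama3.2:3b",
              messages=[
                  {"role": "system",
                   "content": "Translate subtitle lines from English to Finnish. "
                              "Keep the tone colloquial and return only the translation."},
                  {"role": "user", "content": item.text},
              ],
          )
          item.text = reply["message"]["content"].strip()

      subs.save("movie.fi.srt", encoding="utf-8")
      ```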

    • Paragone@lemmy.world · ↑0 ↓2 · 6 days ago

      Please go watch the YouTube video of Bernie Sanders discussing politics/society/civilization with Claude.ai.

      That may blow your mind…

      It’s … not quite as limited as you, or I, had been believing…

      _ /\ _

  • MerrySkeptic@sh.itjust.works · ↑11 · 7 days ago

    I’m a therapist. I use HIPAA compliant AI to generate my (editable) case notes for my sessions now. Not only is it a huge time saver to simply edit a generated note as opposed to making one from scratch, but in many cases it takes more detailed notes, including quotes from clients.

    I have heard of other therapists and medical doctors also using AI to help with diagnosing.

    The danger is when therapists don’t review the content to check for accuracy, because occasionally it will generate something not really reflective of what the therapist was actually doing, or it might lack detail the therapist might have otherwise included. But more often the stuff it comes up with is surprisingly accurate. And editing is even easier when you can just tell the AI something like, “include more details about how the client noticed their pattern of putting their own feelings last,” and it just does what you asked. You don’t necessarily have to edit manually, though you can.

    • fizzle@quokk.au · ↑1 · 5 days ago

      I dislike this immensely and actively seek health care providers that don’t use these tools.

      My core problem is that I want a professional who engages with me as a human and knows me.

      I’m a professional (not in health care) but I “know” all of my clients, and I don’t think that’s an unreasonable expectation for a client or patient. When I pay $100 to talk to a GP for 10 minutes, I don’t think it’s too much to ask for them to have a conversation with me, really truly listen to me, and spend a few minutes writing some notes.

      In the case of a mental health professional the time spent after an appointment with a patient is much greater. I don’t really want what I’ve said to be automatically converted to notes for a human to review. I want a human to consider the human to human conversation we have had, in the context of other conversations we have had and the relationship I have with them, and use those insights to produce appropriate documentation.

      Finally, I have a strongly held belief that relying on the assistance of gen AI reduces one’s skills and abilities. For example, consider two therapists who have just completed their education and accreditation and start seeing patients. One uses gen AI to produce notes for every patient, the other eschews this practice. Ten years later, which therapist would you really trust to listen to patients and be able to distill the key elements of the conversation both spoken and unspoken?

      That said, I’m aware that these services are becoming an industry standard. I suppose they may help therapists see more patients, and in the context of public health that might be a good thing. Whether or not I would use a service like this if I were a therapist is a difficult question to answer. If I were just starting out I think I probably would. That is to say my beef isn’t with you personally using a service like this, more that it’s becoming an industry standard.

      • MerrySkeptic@sh.itjust.works · ↑1 · 4 days ago

        I understand those concerns and I think there’s validity to them. But there’s also enormous potential for benefit.

        I know of several therapists who are very good at being present with a client but terrible at documentation. And if one of these has a busy day or two it is easy to get behind. By the time they get around to writing the note the details are very fuzzy. Human memory is notoriously unreliable. A therapist I respect has said that if you’re writing a note 24 hours or more after the session, you’re probably writing fiction. A tool like this has the potential to greatly help the documentation process. But I agree that it should never become a replacement. I thoroughly read all my notes and often make edits to make them more relevant to me.

        An attorney I know who specializes in representing therapists and regularly conducts legal and ethics trainings has also said that from a legal standpoint, when comparing human to AI generated notes, the AI notes are usually superior. They contain details like quotes and they automatically include all the stuff that matters for legal or insurance requirements. This attorney is VERY risk averse and honestly I thought she would have been against this, expecting horror stories like artifacts. Her opinion was a factor in me trying it out.

        Again, I stress that this is a tool and not a replacement. When I read through a note, I am considering the things my clients said and my interventions to see if it matches up. It’s not perfect but it is very good and I’ve regularly been surprised with how helpful it can be.

        • fizzle@quokk.au · ↑1 · 4 days ago

          Thanks for a considered response. As in all things, there’s nuance and I acknowledge there are benefits.

          I’m genuinely curious as to whether you think reliance on this service will diminish someone’s opportunity to build the related skills?

          • MerrySkeptic@sh.itjust.works · ↑1 · 4 days ago

            I think that given human nature, there will certainly be some providers who overly rely on it. There are already therapists and other professionals who cut corners where they shouldn’t in a variety of ways. Probably the most common example of this is when therapists write bare-bones notes with practically no useful information to bridge one session to the next. That’s been happening since documentation was a legal requirement.

            However, as always, any serious professional is going to take the time to do it right. They will understand how to use a tool effectively while keeping their skills sharp. In my field, with this tool, that would mean every note is read and edited so that it is truly useful. For example, editing the content of the note so that it can be interpreted through the therapist’s theoretical orientation.

            I would hope that training programs and continuing education providers emphasize that any note they sign, including one generated by AI, is one that they are still legally responsible for. So it behooves them to always read it thoroughly and check it for accuracy.

            With any new tool, certain skills will diminish but new skills will be developed. So writing skills may suffer, but good therapists will be good at editing and using effective prompts to get a good note.

            Also, for what it’s worth, documentation skills and intervention skills are very different. I have known a few excellent therapists who were absolute shit at documenting. These therapists tend to be so naturally gifted and intuitive that they don’t need to document very well to be effective. And many therapists write very good notes but are mediocre at the actual therapy. So, at least for now, I tend to see the potential pros as outweighing the potential cons. That could change though!

      • MerrySkeptic@sh.itjust.works · ↑8 · 7 days ago

        Yes, basically, but since it is HIPAA-compliant, the recording is automatically destroyed when the note is saved. Also, no protected recordings are used to train the AI. The therapist can also choose from a number of different case note formats that focus on different things.

          • SuperUserDO@piefed.ca · ↑8 · 7 days ago

            People conflate security with risk mitigation. It’s not secure in the sense that you can confirm the data has been deleted. The risk, however, is mitigated by vendor attestations reinforced by contracts.

            • Helix 🧬@feddit.org · ↑3 · 6 days ago

              Yep, so you can’t actually know if the recording is destroyed, it’s just contractually required to be destroyed. Big difference in my book.

              I wish this sensitive audio were processed locally and never left the therapist’s network instead.

          • MerrySkeptic@sh.itjust.works · ↑2 · 6 days ago

            I can’t know for certain, as I’m not on the product side of things. But I do know that HIPAA standards are very rigorous and if it were discovered that they were intentionally misleading therapists and clients then it would invite a class action lawsuit that would be insanely large.

            I do ask for and document my clients’ consent, though, so if anyone is not comfortable with it that’s fine. I just write the note the old fashioned way. Most are fine but a few have said they don’t want to and it’s not a big deal.

          • lepinkainen@lemmy.world · ↑1 · 6 days ago

            A HIPAA violation is a death sentence to a company, along with massive fines.

            There’s no incentive for them to fuck around

  • CanadaPlus@lemmy.sdf.org · ↑6 · 6 days ago

    Anything that’s fuzzy and impossible to automate with traditional algorithms, but that also has a reasonably high tolerance for error. It just makes up stuff a good portion of the time, you see.

    > However, I’ve found some benefits with AI. For example, I’ve been chatting with ChatGPT about credit cards, because it’s something I’m leaning towards getting into. It’s helped me understand them better than most people who’ve tried to explain them to me, simply because it gives me a streamlined answer instead of beating around the bush.

    Watch out, personal finances are not one of those things.