I know it’s a bit of a hot topic, but I’ve always seen people (online, anyway) land on either a hard yes or an absolute no on using AI. Many types of “AI” were already part of technology before this hype; I’m talking specifically about LLMs (ChatGPT, Claude, Gemini, etc.). When this bubble bursts, the technology is absolutely not going anywhere. I’m wondering if there is a case where you’ve personally used it and found it beneficial (not something you’ve read or seen somewhere). The ethics of essentially stealing vast amounts of data for training without compensation, or the enshittification of products with “AI”, is a whole other topic, but there is absolutely no way that the use of the technology itself is not beneficial somehow. Like everything else divisive, the truth is definitely somewhere in the middle. I’ve been using Lumo from Proton for the last three weeks and it’s not bad. I’ve personally found it useful for troubleshooting issues, for search, and for help with applying for jobs:

  • It’s very good at looking past the SEO slop plaguing the internet and just gets me the information I need. I’ve tried alternative search engines (Mojeek, Startpage, SearXNG, DDG, Qwant, etc.); most of them unfortunately aren’t very good or are just another way to use Google or Bing.
  • I was having a Wi-Fi problem on a PC I was setting up and couldn’t figure out why. I told it exactly what was happening with my computer, along with the exact specs. It gave me some possible causes and some steps to try to diagnose the machine; it was very, very useful.
  • I’ve been applying for so many jobs, and it’s exhausting to read hundreds of descriptions only to see one tiny thing in the middle that disqualifies me. So I pass it my resume with links and tell it to compare what my resume says against what the job is looking for, to see if I’m a fit. When I find a good one, I ask for rewriting tips to better focus on what will stand out to a recruiter (or, to be real, an application filtering system).

I guess what I’m trying to say is it can’t all be bad.

  • I Cast Fist@programming.dev · 1 point · 44 minutes ago

    Regarding the job application, most companies and sites are using shitty AI to rummage through the piles of resumes they receive.

    The whole job application process is frankly one of the worst real-world uses of most technologies, not only AI.

  • ThunderComplex@lemmy.today · 1 point · 53 minutes ago

    For image gen I don’t have a good use, but it’s complex enough that sometimes I just lose an hour or two fumbling around in a network of nodes.
    What I found fascinating was how strangely good the results were when I created an image, then fed the result back as input, and repeated that process.
    The only useful thing I used image gen for was creating references for an artist to create a PFP for me that looks rad as hell.

    As for LLMs, also not really. I think about 90% of the time LLMs give either a useless or a just plain wrong answer. I can’t seem to find the thing that LLMs are supposed to be good for. One thing every LLM I’ve tried has consistently failed at is finding a movie from a vague description I gave.

  • Helix 🧬@feddit.org · 6 points · 17 hours ago

    Inspiration for writing emails, letters, text messages. I always check what the thing wrote though.

  • altphoto@lemmy.today · 5 points · 18 hours ago

    For engineering… “Get me a script that calculates the length of a window based on a similar size.” Or “Calculate the tip velocity of a turbine blade given the speed of the gas going into it and the diameter of the turbine.” Basically, things that would take us a month to design just so we can answer other questions, because nobody pays you to build quick calculation tools.

      • altphoto@lemmy.today · 2 points · 36 minutes ago

        No, you don’t just get a script and run it blindly! You use your own knowledge to figure out whether it works first, by reading the code and running it against known data as a test.

        You can’t even rely on AI to get the formula for the area of a circle right. You have to rely on your own knowledge and on books to confirm that the code is doing what you need it to do.

        What AI does is shorten code-creation time to a few seconds versus days of coding… because engineers are the best back-seat coders I know. Once there’s good code they can move mountains, but confronted with a blank page they freeze.
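        As an illustration of the kind of quick calculation tool described in this sub-thread, here is a minimal Python sketch. The tip-speed formula used (π·D·N/60, from rotational speed and diameter rather than gas speed), the input values, and the function name are illustrative assumptions, not anything taken from the thread; the sanity check mirrors the “test it against known data” advice above.

```python
# Hypothetical quick-calc script; the scenario and the numbers are made up
# for illustration, not taken from the thread.
import math


def blade_tip_speed(diameter_m: float, rpm: float) -> float:
    """Tip speed (m/s) of a rotor of the given diameter spinning at `rpm`."""
    return math.pi * diameter_m * rpm / 60.0


if __name__ == "__main__":
    # Sanity-check against a hand calculation before trusting the tool:
    # a 1 m rotor at 60 rpm should give exactly pi m/s at the tip.
    assert abs(blade_tip_speed(1.0, 60.0) - math.pi) < 1e-9

    # Then use it for the actual question.
    print(f"Tip speed: {blade_tip_speed(0.8, 12_000):.1f} m/s")
```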

  • m532@lemmygrad.ml · 1 point · 15 hours ago

    I’ve tried learning digital drawing before, but my programmer brain finds prompt engineering much more intuitive, so I’ve been doing that a lot lately.

    Also, it’s surprisingly good at upscaling in “image-to-image, 0.1 strength” mode. I thought I would need a dedicated upscaling mode for that. The result looks noticeably better than with normal bicubic upscaling.
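    A rough sketch of what that low-strength image-to-image upscale could look like with the Hugging Face diffusers library. The comment doesn’t say which tool or model was used, so the pipeline, model ID, prompt, and 2× target size here are assumptions.

```python
# Hypothetical img2img "upscale": plain bicubic resize first, then a gentle
# diffusion pass at strength 0.1 to regenerate fine detail without changing
# the composition. Model choice, prompt, and sizes are assumptions, not the
# commenter's actual setup.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("input.png").convert("RGB")
# Plain bicubic resize to the target resolution...
upscaled = low_res.resize((low_res.width * 2, low_res.height * 2), Image.BICUBIC)

# ...then a low-strength img2img pass to sharpen it.
result = pipe(
    prompt="high quality, detailed",
    image=upscaled,
    strength=0.1,
).images[0]
result.save("output.png")
```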

  • melsaskca@lemmy.ca · 10 points · 1 day ago

    I see it as a toy. No different from the Slinky or Silly Putty I had as a kid. Just something to play with.

  • cRazi_man@europe.pub · 16 points, 2 down · edited · 1 day ago

    I’ve used it to help me set up a home server. I can paste text from log files or ask about something not working and it tells me what the problem is. It gets things wrong a lot, but this is the perfect low risk use for AI…for sending me in the right direction when I have no idea why things aren’t working. When it’s completely wrong, it doesn’t really matter.

    The real test for AI is: “Does it matter when it’s completely wrong?” If the answer is yes, then that’s not a suitable use for AI.

    • Eril@feddit.org · 1 point · 7 hours ago

      This. I’m a software engineer and I also sometimes use it by giving it a problem and asking for ideas on how to solve it (usually with the addition of “don’t write any code”; I do that myself, thanks). It gives me a few pointers that I can then follow. Sometimes it’s garbage, sometimes it’s quite good.

      • UltraBlack@lemmy.world · 3 points · 5 hours ago

        99% garbage.

        If you have ever touched C++ you will know that it has godawful error messages, and not even ChatGPT knows what the fuck is happening.

        • Eril@feddit.org · 1 point · 2 hours ago

          That’s why I’m not asking it to give me actual code to use, but keeping it high level. If it says there are patterns X, Y, and Z that could be usable, I can look them up myself and also write the code myself. Using it to actually write the code is mostly garbage, yes. And in any case you still need to have an idea of what you’re doing yourself.

          • UltraBlack@lemmy.world · 1 point · edited · 1 hour ago

            No, I’m not asking it to write code, I’m asking it to interpret the error and point to the actual problem in the code. It just can’t…

  • comfy@lemmy.ml · 14 points · 1 day ago

    Creating low-effort images for ideas that don’t warrant effort, like silly jokes.

  • HiddenLayer555@lemmy.ml · 18 points, 1 down · edited · 1 day ago

    I self-host DeepSeek R1 and it’s been pretty helpful with simple Linux troubleshooting, generating bash commands, and even programming troubleshooting. The thinking feature is pretty cool and I do find myself learning stuff from it.

    What took it from a gimmick to an actual nice-to-have for me was when my jerry-rigged home network broke and wouldn’t connect to the internet. Having what is essentially an interactive StackOverflow/ServerFault running on a local machine was really helpful.

    Running the model locally makes it easier to not overly rely on AI because of the limited token rate.
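    The comment doesn’t say how the model is actually served; as one possible setup, here is a sketch of querying a locally hosted DeepSeek R1 through Ollama’s HTTP API. The model tag and the example prompt are assumptions.

```python
# Hypothetical local query -- assumes an Ollama server on its default port
# with a DeepSeek R1 model pulled (e.g. `ollama pull deepseek-r1`); the
# commenter's real setup isn't specified in the thread.
import json
import urllib.request

payload = {
    "model": "deepseek-r1",
    "prompt": "My machine can't reach the internet and `ip route` shows no "
              "default route. What should I check next?",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```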

  • Chaser@lemmy.zip · 4 points · 1 day ago

    I have some Home Assistant automations that create todos in Habitica for me. The todos are AI-generated, so they sound like quests in an RPG 😎 This really motivates me. Also, it’s funny.
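    The commenter wires this up through Home Assistant; as a rough standalone illustration of the same idea, here is a hypothetical Python sketch that asks a locally hosted model to reword a chore as a quest and then files it as a todo via Habitica’s v3 API. The Ollama endpoint, model tag, and placeholder credentials are all assumptions, not the commenter’s setup.

```python
# Hypothetical "AI-quest-ified" todo, outside Home Assistant.
# Placeholder credentials and the local-model endpoint are assumptions.
import json
import urllib.request


def post_json(url, payload, headers):
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", **headers},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


chore = "Empty the dishwasher"

# Ask a locally hosted model to reword the chore as an RPG quest title.
quest = post_json(
    "http://localhost:11434/api/generate",
    {"model": "deepseek-r1", "stream": False,
     "prompt": f"Rewrite this chore as a one-line RPG quest title: {chore}"},
    {},
)["response"].strip()

# File it as a todo in Habitica (user ID, API token, and client ID are placeholders).
post_json(
    "https://habitica.com/api/v3/tasks/user",
    {"text": quest, "type": "todo"},
    {"x-api-user": "YOUR-USER-ID", "x-api-key": "YOUR-API-TOKEN",
     "x-client": "YOUR-USER-ID-quest-todos"},
)
```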

  • aberrate_junior_beatnik (he/him)@midwest.social · 48 points, 8 down · 2 days ago

    It’s got lots of uses:

    • driving up fossil fuel revenues
    • providing a solid excuse for laying off a bunch of employees
    • disciplining labor
    • offloading blame for unpopular decisions
    • increasing surveillance and nonconsensual data collection

    • pulsewidth@lemmy.world · 1 point, 1 down · 29 minutes ago

      • corporate theft from artists, claiming ‘it’s just learning data bro’, only to have the output often be 99% identical to the original ‘learning data’
      • making fake videos much easier for swift political disinformation campaigns
      • LLM voice agents that make scams much easier to perpetrate against the elderly

  • darthelmet@lemmy.world · 3 points, 1 down · 23 hours ago

    I took AI courses in college, and it was fun to learn about back when it was a bunch of toy examples showing the potential of these systems. But it was clear enough to anyone in those classes or doing that research how unready they were for real applications, because of all the known flaws in how model training worked. And then some CEOs just ignored all that and started blowing up the bubble.

    So my answer is the research models that could play video games kinda good. Everything after that was getting ahead of ourselves.