• 🌞 Alexander Daychilde 🌞@lemmy.world · 13 hours ago

    The alternative is Facebook with lies that go unchecked completely. This is actually an area where AI is not bad.

    edit: sigh. Refusing to acknowledge where things can be useful. NO, ALL BAD. BAD BAD BAD! AI BAD! ALWAYS BAD! NO USE! NO GOOD! ONLY BAD! BAD BAD BAD! Such fucking blindness.

    • stabby_cicada@lemmy.blahaj.zone · 3 hours ago

      “Correcting” incorrect information with more incorrect information doesn’t improve the situation.

      AI tools are inherently unreliable because of the randomness in their text generation algorithms.
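      The randomness claim can be illustrated with a toy sketch (plain Python, not any real LLM's code; the tokens and probabilities below are invented for illustration): sampling a "next token" from a probability distribution means the same prompt can yield different, sometimes wrong, continuations.

```python
# Toy illustration (NOT a real LLM): text generation samples each next
# token from a probability distribution, so a wrong token can always
# come up by chance alone.
import random

# Hypothetical next-token probabilities after a prompt like
# "The capital of France is" (values invented for illustration).
next_token_probs = {"Paris": 0.6, "Lyon": 0.25, "Berlin": 0.15}

def sample_token(probs, rng):
    """Draw one token according to its probability (temperature > 0)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

rng = random.Random(0)  # seeded here only so the run is reproducible
samples = [sample_token(next_token_probs, rng) for _ in range(1000)]
# Most draws are "Paris", but the wrong answers still appear a
# substantial fraction of the time.
print(samples.count("Paris"), samples.count("Lyon"), samples.count("Berlin"))
```

      A real LLM repeats a step like this over a huge vocabulary for every token it emits, so even small per-token error rates compound across a long "fact check".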

      And worse, Europe doesn’t build its own AIs. LLM fact checking would have to be done by Grok or Claude or some other product from big American tech. And there’s an obvious problem with a social media network trying to avoid American censorship and political bias and corporate domination but “fact checking” with a tool that has American censorship and political bias and corporate domination built into it.

      And on a personal level, I don’t want to use a social media site that has a bot scanning my posts and flagging them for wrongspeak - or that interjects automated bot opinions into conversations between humans. I use social media to talk to other human beings, not bots, thanks. If I wanted to know what chatGPT thinks of a post I’d fucking ask chatGPT.

    • LwL@lemmy.world · 6 hours ago

      I doubt it, honestly. It’d likely catch a lot of misinfo, yes, but it would likely also classify any new findings that run counter to previous assumptions as misinfo. LLMs can’t keep up to date. And they still have the same issue that whoever trains them gets to decide what is and isn’t misinfo, which becomes a problem when it’s a ubiquitous social media site.

    • FreddyNO@lemmy.world · 10 hours ago

      A system notorious for lying, being used for fact checking. Yeah, maybe you should write “bad” in caps lock one more time; that will make you right.

      • mimavox@piefed.social · 18 hours ago

        If it’s implemented the right way, it could be. AI can be used for good things, even if the knee-jerk reaction of so many people online is to equate it with crap.

        • LuceVendemiaire@lemmy.dbzer0.com · 18 hours ago

          Recoiling upon smelling shit is also a knee-jerk reaction.

          It’s always this same bullshit: “if we just implemented this correctly.” Where can an AI participate in fact-checking? It can’t be trusted because of hallucinations, so the solution would be to, uh… manually review everything it does? Just rely on third parties to do it? What ACTUAL USE does this shit have?

        • Nalivai@lemmy.world · 14 hours ago

          It couldn’t be. A lying, biased machine that gives people psychosis can’t magically stop being what it is. It will always be terrible and unnecessary at best, and harmful as a rule.