The alternative is Facebook, where lies go completely unchecked. This is actually an area where AI is not bad.
edit: sigh. Refusing to acknowledge where things can be useful. NO, ALL BAD. BAD BAD BAD! AI BAD! ALWAYS BAD! NO USE! NO GOOD! ONLY BAD! BAD BAD BAD! Such fucking blindness.
“Correcting” incorrect information with more incorrect information doesn’t improve the situation.
AI tools are inherently unreliable because of the randomness in their text generation algorithms.
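To make the randomness point concrete, here's a toy sketch in plain Python (no real model; the vocabulary and logits are made up for illustration) of the temperature-sampling step most LLMs use when generating text. At any temperature above zero, the same input can produce a different "next token" on every run, which is the nondeterminism being referred to:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical "next token" candidates for a fact-check verdict.
vocab = ["true", "false", "unverified"]
logits = [2.0, 1.5, 1.0]

# At temperature 1.0, repeated runs on identical input hit all three tokens.
drawn = {vocab[sample_token(logits, temperature=1.0)] for _ in range(1000)}
print(drawn)

# As temperature approaches 0, sampling collapses to argmax (deterministic).
greedy = {vocab[sample_token(logits, temperature=1e-9)] for _ in range(1000)}
print(greedy)  # only the highest-logit token
```

Production systems layer top-k/top-p filtering and other tricks on top, but the core point stands: unless sampling is fully greedy, identical inputs can yield different verdicts.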
And worse, Europe doesn’t build its own AIs. LLM fact checking would have to be done by Grok or Claude or some other product from big American tech. And there’s an obvious problem with a social media network trying to avoid American censorship and political bias and corporate domination but “fact checking” with a tool that has American censorship and political bias and corporate domination built into it.
And on a personal level, I don’t want to use a social media site that has a bot scanning my posts and flagging them for wrongspeak, or that interjects automated bot opinions into conversations between humans. I use social media to talk to other human beings, not bots, thanks. If I wanted to know what ChatGPT thinks of a post, I’d fucking ask ChatGPT.
I doubt it, honestly. It’d catch a lot of misinfo, yes, but it would likely also classify any new findings that run counter to previous assumptions as misinfo. LLMs can’t keep up to date. And they still have the same issue that whoever trains them gets to decide what is and isn’t misinfo, which becomes a real problem when it’s a ubiquitous social media site.
A system notorious for lying, being used for fact checking. Yeah, maybe you should write “bad” in caps lock one more time; that will make you right.
Just because it is a solution doesn’t mean it’s a good solution, or even better than no solution.
If it’s implemented the right way, it could be. AI can be used for good things, even if the knee-jerk reaction of so many people online is to equate it with crap.
Recoiling upon smelling shit is also a kneejerk reaction
It’s always this same bullshit: “if we just implemented this correctly.” Where can an AI participate in fact-checking? It can’t be trusted because of hallucinations, so the solution would be to, uh… manually review everything it does? Just rely on third parties to do it? What ACTUAL USE does this shit have?
It couldn’t be. A lying bias machine that gives people psychosis can’t magically stop being what it is. So it will always be terrible and unnecessary at best, and harmful as a rule.
I mean, there’s always Lemmy.