When a generated photo or video becomes indistinguishable from reality, does reality just collapse? How do we know what’s real anymore, and if society deems an image/video as false, how do we know it isn’t just a government cover-up?
Type just a few words into a Gen-AI program and there will be a video on the news of you committing a terrorist bombing of a pre-school, even though you were never anywhere near there. They can send the secret police to murder people, then post a video of the people they’ve killed as “resisting arrest” or “trying to shoot the officers on scene”, even though they were unarmed and cooperative.
Like… do governments just get to shape the world as they see fit?
It does say “check the results manually”. Not that this changes anything. For the record, always double check anything any AI tells you unless you can verify the response off the top of your head. Also for the record, double check anything anybody else tells you. If you haven’t seen it from more than one source, you don’t know if it’s true.
Hell, if the thing people learn from AI summaries is to never believe anything they see on the Internet without double checking it, we’ll be better off than we were before.
Also, every negative impact you assign to AI applies equally to traditional search. I was hearing communication scholars warn people about the issues with algorithmic selection and personalized search back in the 90s. They were correct.
I am endlessly fascinated by the billions of boiling frogs who hadn’t realized their perception of the world was owned by Google until Google made a noticeable change to its advertising engine. Did you think them getting to select which answers you got at the top of the page and which ones to bury past the fold was any less misleading? I am increasingly glad that AI is as unreliable as it is at this point. We definitely need a change in how people acquire information.