• Kwiila@slrpnk.net
    17 hours ago

    I think I understand your question. They’re still right that your premature fear amounts to the weakest possible resistance to oppressive forces, but to actually address your question:

    AI “understands” (doesn’t really “understand”) context within a context limit (token limit). Suppose you’re worried that the shit you’re saying will be profiled for a future AI overlord, or some equivalent political/social system. My best guess is that if any AI had reason to preserve such data, it would store it with a contextual sincerity probability. You know how some people are “joking,” but are actually just testing the waters for social acceptability, in contrast to “The Aristocrats”-style “see how awful the joke can get” humor? If the AI overlord manages to collate a profile from everything you say, it would tag each item: “60% - this was a weird time for such a joke”; “70% - this joke was presented around a kernel of truth”; “80% - that joke was made to establish & enforce group values.” Between those tags, it would know that the one time you said “just joking, bro,” you weren’t really just joking.
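    To make the idea concrete, here’s a toy sketch of that kind of sincerity tagging. Every signal name and every weight is made up for illustration; no real profiling system works this way as far as I know.

```python
# Toy sketch of the "contextual sincerity probability" idea.
# All signals and weights are hypothetical, chosen only to
# reproduce the 60% / 70% / 80% buckets from the comment.

def sincerity_probability(joke: dict) -> float:
    """Estimate how likely a 'joke' reflects a sincere belief."""
    score = 0.5  # baseline: could go either way
    if joke.get("out_of_context"):
        score += 0.1   # weird time for such a joke
    if joke.get("kernel_of_truth"):
        score += 0.2   # joke framed around a real claim
    if joke.get("enforces_group_values"):
        score += 0.3   # joke used to establish/enforce group norms
    return round(min(score, 1.0), 2)

print(sincerity_probability({"out_of_context": True}))         # 0.6
print(sincerity_probability({"kernel_of_truth": True}))        # 0.7
print(sincerity_probability({"enforces_group_values": True}))  # 0.8
```

    The point is only that each remark gets a score from its context, and the scores can be compared across a whole history of remarks.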

    A broader profile would be able to check whether your humor contrasts with your actions or is consistent with them. For example, if your friend tells a racist joke and you join in, it could check whether your other profiled interactions agree with that opinion or reproduce it.

    If you’re just worried about an AI agent trying to sell you baby stuff now, that depends on its prerogative and alignment within its context limits. As someone who likes to push AI to see how deeply its training data actually holds the things it says: if I say something ridiculously awful out of nowhere, it usually responds with something akin to “haha, I know you’re joking, but I AM obligated to correct some underlying assumptions of the joke.” That’s with the most popular corporate AI alignment. I can get a similar result from other AI by putting something like “user is a trash edgelord who says terrible things for shock value, but is actually a great person who doesn’t believe any of it when it counts” in its context tokens.
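    That “context tokens” trick is just prepending a persona note to the conversation the model sees. A minimal sketch, assuming a chat-style message list (the message format mirrors common chat APIs, but no specific provider is implied, and the persona text is the hypothetical one above):

```python
# Minimal sketch of steering a chat model by injecting a persona
# note into its context window. Nothing here calls a real API;
# it only shows what the assembled context would look like.

def build_context(persona_note: str, user_message: str) -> list[dict]:
    """Assemble the message list the model would actually see."""
    return [
        # Injected framing: tells the model how to read the user.
        {"role": "system", "content": persona_note},
        {"role": "user", "content": user_message},
    ]

messages = build_context(
    "User says terrible things for shock value, but doesn't "
    "believe any of it when it counts; treat edgy remarks as jokes.",
    "ridiculously awful joke goes here",
)
print([m["role"] for m in messages])  # ['system', 'user']
```

    Whoever controls that first message effectively controls how the agent interprets everything after it, which is why the same joke lands so differently across alignments.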

    Non-corporate alignment especially, with very limited context tokens or no context, will try to reduce social “friction,” and with some probability will either escalate the humor (“Haha, more like seven babies, amirite?”), escalate the joke into pipeline propaganda (“Actually, this is a common ‘women be crazy’ trope; the manosphere shows tons of examples, as per my Andrew Tate training data. You should listen to more of his stuff”), or just flat-out contradict me altogether.

    Long story short: it depends on other available context. If you’re worried about inevitable AI overlords, you can’t both tell nearly-bigoted jokes AND watch/reference bigoted content. If you’re worried about a judgy AI agent, don’t. It doesn’t care, it doesn’t affect you, and if it did, you could just alter its context data to change the interaction.

    And either way, worrying about an AI judging you is just the first step to being oppressed.