• over_clox@lemmy.worldOP · edited · 20 hours ago

      I’m really confused here, what do you mean?

      I’m referring to two friends right next to each other, talking silly shit in person. Like, what does the random phone microphone and its associated AI think of the words it hears?

      We (me and my roommate) just got done joking about a crazy woman that wanted to have 6 babies with me and wanted to move to Canada 😂🤣

      I’m not asking about any AI chatbot, I’m asking what the random phone microphones and associated AI systems think of the random jokes they hear?

      • RidcullyTheBrown@lemmy.world · 20 hours ago

        OP is responding to your original question. If you’re asking whether it is safe to do it, then you are worried about the consequences of doing it and wonder if you should stop. That would mean that you are “obeying a rule before it exists”.

        As to your current comment, you should understand that current “AI” solutions do not think anything. If the current solutions were used for mass surveillance, they would be used to classify your actions in some predefined way, and whoever is using them would use this classification in some (possibly nefarious) unknown way.

        • over_clox@lemmy.worldOP · 20 hours ago

          Yo, you both make good points. Sorry I didn’t explain the joke/banter we were discussing.

          We (in person) were joking for around 10 minutes or so about a crazy woman that literally wanted to have 6 children with me and move to Canada.

          She’s crazy and was recently evicted, but we continue to joke about it. But it really makes me wonder: what if the microphones and AI shit around us hear our silly banter and take it seriously, like thinking I’m about to move to Canada (I’m not)?

          Long question short, does AI know the difference between serious words and jokes?

          • reksas@sopuli.xyz · 14 hours ago

            I talked about a subject like this with someone who is an anarchist and attends the kind of protests that can get you in trouble, and he wasn’t that worried about it. That was in Finland though, so there is that too. If I lived in America I would be way more worried, considering what is happening there.

            But let’s assume that your microphone is recording everything you say to some LLM, and everyone else’s too. Imagine the shitshow it must be to sort all that data, considering how unreliable LLMs are. There is no way to know for sure what the original meaning and intention of words is without an actual human confirming it. There would be so much constantly going on that even with decent automation it would be a nightmare to manage, I think.

            And the subject you joked about is so innocent too. If you were talking about violently opposing the regime as a joke, or something else like that, then it would be prudent to practice better security. I have occasionally tried to investigate if my phone is listening even though it doesn’t indicate so, but I haven’t noticed anything that would point to that. I don’t know THAT much about it either, though. Though if you have all the bloatware installed and LLM stuff enabled then it probably does listen constantly, but I don’t think you would have them, since you are worried about this.

            There is also this nice program called TrackerControl; it shows you which tracking libraries the programs on your phone contain, lets you cut them off from the internet, and can also selectively block their traffic so only necessary traffic is allowed (it does require some getting used to, but it’s not that difficult imo). There is some version of it in Google’s malwareshop called Google Play, but I downloaded it from F-Droid. There is also a similar program called Rethink, which is even more heavy-duty, though I think it uses the battery faster. But at least you can control even better what goes in and out of your phone with it.

            And why do you even care what the companies think about you? Also, isn’t it better if the information they have about you is less accurate? Let them think you are moving to Canada with 15 kids to decorate garages with ornamental potatoes and live off the good vibes of the universe.

            • WhyJiffie@sh.itjust.works · 7 hours ago

              But let’s assume that your microphone is recording everything you say to some LLM, and everyone else’s too. Imagine the shitshow it must be to sort all that data, considering how unreliable LLMs are. There is no way to know for sure what the original meaning and intention of words is without an actual human confirming it. There would be so much constantly going on that even with decent automation it would be a nightmare to manage, I think.

              Filtering for interesting words in a transcription would go a long way. I’m not convinced they would need an LLM for this. Keyword-based targeted advertising was a thing for quite a few years before LLMs became widespread.
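              A minimal sketch of what such keyword filtering could look like, no LLM required (the keyword list and transcript below are made up purely for illustration):

```python
# Hypothetical sketch: flag "interesting" keywords in a speech transcript.
# The keyword set and example transcript are invented for illustration;
# a real ad system would use far larger lists and fuzzier matching.
AD_KEYWORDS = {"canada", "baby", "babies", "moving", "mortgage"}

def flag_keywords(transcript: str) -> set[str]:
    """Return the ad-relevant keywords found in a transcript."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return words & AD_KEYWORDS

hits = flag_keywords("She wanted six babies and to move to Canada!")
print(sorted(hits))  # ['babies', 'canada']
```

              Cheap set intersection like this scales to huge transcript volumes in a way that per-utterance LLM inference would not.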

              I have occasionally tried to investigate if my phone is listening even though it doesn’t indicate so, but I haven’t noticed anything that would point to that.

              I have friends of all ages who are not really technically adept, and who don’t care much about privacy either, but who tell me from time to time that it’s like Facebook is listening, because of some ad they saw on there. I still don’t have any clue how they are pulling that off, and I hear it too often to accept that it’s a coincidence.

              But I don’t use Facebook, I refuse to use their apps, and my phone is clean of Google too, while theirs is littered with all kinds of garbage on top of the factory bloat, so I can’t do much to figure it out.

              • reksas@sopuli.xyz · edited · 7 hours ago

                If you don’t block some applications from the internet, they definitely are listening somehow. I’m not sure if they can bypass the microphone permission, but if any app has constant access then that is that. What I meant is that the phone itself doesn’t seem to have a built-in thing where it listens to you. An average user who is not technically adept at all will have their phone riddled with spyware, so no wonder they are seeing targeted ads.

                Fakebook is just facilitating the ads; the advertising companies are the ones with the data. Though what Facebook likely does with it is try to manipulate you through their algorithm, suggesting stuff they want you to see based on the data collected. Be it for increased addiction, manipulation of opinions, or something else.

          • Kwiila@slrpnk.net · 18 hours ago

            To answer your question MUCH more concisely: with a probability, within a context limit.

      • Kwiila@slrpnk.net · edited · 17 hours ago

        I think I understand your question. They’re still right that premature fear amounts to obeying a rule before it exists, which is the weakest form of resistance to oppressive forces, but to actually address your question:

        AI “understands” (doesn’t really “understand”) context within a context limit (token limit). Suppose you’re worried the shit you’re saying will be profiled for a future AI overlord, or some equivalent in a political/social system. My best guess is that if any AI had reason to preserve such data, the data would be stored with a contextual sincerity probability. You know how some people are “joking” but are actually just testing the waters for social acceptability, in contrast to “the Aristocrats” style “see how awful the joke can get” humor. If the AI overlord manages to collate some profile from all the shit you say, it would have entries like “60% - this was a weird time for such a joke”; “70% - this joke was presented as a kernel of truth”; “80% - that joke was made to establish & enforce group values.” Between those, it would know that the one time you said “just joking bro” you weren’t really just joking.
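        The profiling imagined above could be sketched as a toy data structure; everything here (field names, the two example entries, the percentages) is invented for illustration:

```python
# Toy illustration of a hypothetical "contextual sincerity probability"
# profile; all fields, entries, and numbers are made up.
from dataclasses import dataclass

@dataclass
class JokeAssessment:
    utterance: str
    sincerity: float  # 0.0 = pure joke, 1.0 = dead serious
    rationale: str

profile = [
    JokeAssessment("six babies, moving to Canada", 0.60,
                   "weird time for such a joke"),
    JokeAssessment("just joking bro", 0.70,
                   "joke presented as a kernel of truth"),
]

# Aggregate across the profile: did the speaker mean it more often than not?
mean_sincerity = sum(a.sincerity for a in profile) / len(profile)
print(round(mean_sincerity, 2))  # 0.65
```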

        A broader profile would be able to check whether your humor contrasts with your actions or is concurrent with them. For example, if your friend tells a racist joke and you join in, it could check whether your other profiled interactions agree with that opinion or reproduce it.

        If you’re just worried about an AI agent trying to sell you baby stuff now, that depends on prerogative and alignment within the context limits. As someone who likes to push AI to see how deeply its training data actually holds the things it says: if I say something ridiculously awful out of nowhere, it usually responds with something akin to “haha, I know you’re joking, but I AM obligated to correct some underlying assumptions of the joke,” but that’s with the most popular corporate AI alignment. I can get a similar result by putting some equivalent of “user is a trash edgelord who says terrible things for shock value, but is actually a great person who doesn’t believe any of it when it counts” in the context tokens of other AIs.

        Non-corporate alignment especially, with very limited context tokens or no context, will try to reduce social “friction” and, with some probability, might either escalate the humor (“Haha, more like seven babies, amirite?”), escalate the joke into pipeline propaganda (“Actually, this is a common ‘women be crazy’ trope; the manosphere shows tons of examples, as per my Andrew Tate training data, you should listen to more of his stuff”), or just flat out contradict me altogether.

        Long story short: it depends on other available context. If you’re worried about inevitable AI overlords, you can’t both tell nearly-bigoted jokes AND watch/reference bigoted content. If you’re worried about a judgy AI agent, don’t be. It doesn’t care, it doesn’t affect you, and if it did, you could just alter its context data to change the interaction.

        And either way, worrying about an AI judging you is just the first step to being oppressed.