  • pyrinix@kbin.melroy.org · 1 hour ago

    People are already putting words in the mouths of historical figures, making them align with whatever political ideals and beliefs one holds today.

    Once AI generation is advanced enough to actually manipulate video from historical periods, people are going to do that too. They’ll probably try making Hitler seem friendlier.

    Whatever baffling bullshit scheme people have.

  • wewbull@feddit.uk · 13 hours ago

    The political bias of AI will be set by those tuning the models. Now mix in a bunch of voters asking LLMs who they should vote for, because people will outsource their thinking any chance they get. The result is model owners being able to sway elections with very little effort.

  • davel@lemmy.ml · 1 day ago

    Reddit doesn’t matter nearly as much as you think. It’s not going to move the needle appreciably.

    Almost all English-language text is liberal (meaning capitalist); very little is socialist, virtually none is communist, and quite a lot is anti-communist. So there's your baked-in political bias for English-language models.

    • Carl [he/him]@hexbear.net · 1 day ago

      I wonder if using a Chinese-language model and then translating it would get better or worse political content bean-think

  • mesa@piefed.social · 1 day ago

    Not 100% on the same topic, but we see this quite often in the software development world. Because the original training pull for AI was biased toward React and Python, there's a deluge of new projects on those two stacks: people ask AI for software…and those two pop up.

    In the same way, AI's talking points will seem weirdly stuck in a certain year/decade, because that's where most of them were pulled from.

  • PumpkinDrama@reddthat.com (OP) · 1 day ago

    I believe that because Reddit is generally left-leaning and the majority of its users are opposed to AI, we may see a disproportionate rise in AI-generated right-wing content, which could influence public opinion. The Pentagon has also shown interest in using LLMs to gaslight people.

    • wickedrando@lemmy.ml · 1 day ago

      We already see this: actual right-wing political advertisements using AI, and Twitter is full of the shit. It's legitimately easier to trick conservatives with the slop.