Besides the obvious traps of differing quality between LLMs and resources, with cheap or free tiers available to ordinary people and paid ones for rich corporate clients, there's still an unexplored question about how these different AIs are biased.
After reading a lengthy thread with many takes on whether an LLM could have pushed a teen into committing suicide, I thought to myself: if there are obviously different models available, might they be trained differently for each userbase?
Could, for example, a genAI for the rich and one for the poor differ, helping the first to procreate and the others to die off?
What if some data engineers trained a popular model to push one specific agenda, serving their favorite bosses and institutions?
What if, for the sake of argument, their GenAIs serve this role as an enabler of suicide because that is exactly what they were programmed for?
The number of people who would be significantly influenced by GenAI toward suicide is large enough to matter, but still far too small to talk about a "die off". Most people really want to keep living, even with a lot of shit in their lives and despite AI.
If a person is perceived as a threat, a deliberately adjusted GenAI could suggest they touch live wires or mix up a homemade explosive while attempting a casual home repair for the first time.
My initial thought, though still pretty blurry, is that if we outsource our research and decision making to AI, the owners of that AI can poison it to either make us fall in line or sabotage what we're doing.
The normal GenAI can be gated and available only to rich people, while the cheap or free one used by low- to middle-income households can just as well be used to nudge us toward the corporate agenda, or even toward self-termination if we don't fit their picture of the future.
Not a showerthought.
Acid rainshower (Liquid LSD)
Whatever, whatever. If closeness to certain platforms, and the languages their AIs pick, count as a statement:
We’re cooked…
Not class warfare, but political/ideological warfare is already out there.