Why do you want your stuff in the lie machines? 🤔
I work on a project with a lot of older, less technical, and international users who could use some extra help. We're also not always found by the people who would benefit from our project. https://keeperfx.net/
So that they do not become lie machines. Propaganda, lies, and fake news from all sorts of sources get spammed across the internet. If an AI picks that up, it will just spread the misinformation, especially if all the trustworthy or useful sources block its crawlers.
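(For context: the "blocking" mentioned here is usually done via robots.txt. A minimal sketch, using the publicly documented user-agent tokens for OpenAI's and Common Crawl's crawlers; compliance is voluntary on the crawler's side:)

    # Ask OpenAI's training crawler not to fetch anything
    User-agent: GPTBot
    Disallow: /

    # Ask Common Crawl's crawler (a common LLM training source) the same
    User-agent: CCBot
    Disallow: /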
This will just make them sound more believable when they hallucinate. LLMs cannot, even in principle, be made not to lie, even if all the information they are trained on is 100% accurate.
That’s a very reasonable point I had not considered.
And very valid. Most of the data they are trained on comes from Reddit and Twitter. Garbage in, garbage out.