The fediverse used to feel pretty anti-AI, but over the past month or two I’ve noticed a LOT of generated memes and images, and they tend to have positive votes.
Has there been a sudden culture shift here? Or is there a substantial percentage of people just unable to tell the difference anymore?
There’s really nothing good about AI if you actually look into it. MAYBE medical advances, and that’s it.
Translation tools (like DeepL and Google Translate), proof assistance for mathematicians, camera settings optimisation, data analysis assistance in pretty much any field of research, anomaly detection, compression algorithms, ADAS features like lane following or self-parking, noise removal, image recognition for various purposes… I can’t remember the specifics, but I know Nokia uses ML/AI methods for signal transmission/receiving optimisation, and I recall a system for automatic tree pruning, etc. etc.
And before I get the usual “only GenAI is AI”: the underlying methods for creating a generative model and something like a model that detects street signs or abnormalities in medical scans are based on the same principles; they are the same field of computer science.
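To make that “same principles, same field” point concrete, here is a minimal PyTorch sketch (not from anyone in this thread; the layer sizes, class count, and data are all made up): a classifier of the street-sign/medical-scan kind and a toy generator, built from the same layer types and differing only in what they would be trained to do.

```python
# Hedged illustration only: a "detector" and a "generator" assembled from the
# same neural-network primitives. Only the training objective differs.
import torch
import torch.nn as nn

# Discriminative model: image in -> class scores out (street signs, scan findings, ...)
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 128), nn.ReLU(),
    nn.Linear(128, 10),                      # 10 hypothetical classes
)

# Generative model: latent code in -> image out, using the same kinds of layers
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 32 * 32 * 3), nn.Tanh(),
)

images = torch.randn(4, 3, 32, 32)           # fake batch of images
codes = torch.randn(4, 64)                   # fake batch of latent codes

class_scores = classifier(images)            # -> (4, 10), would train with cross-entropy
fake_images = generator(codes).view(4, 3, 32, 32)  # would train with a generative loss
print(class_scores.shape, fake_images.shape)
```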
They have been doing machine learning for novel proteins for over 15 years now. “AI” is just a buzzword grant writers have to add these days to have any chance at funding, is all.
I have to correct you here: machine learning (AI) is extremely important in research. There is just no doubt about it.
Are AI image generators beneficial for society? Probably; I have artist friends who use AI images to help them paint, for example. But does that outweigh the cost? Dunno.
Is AI slop beneficial? Probably not :-)
There are a massive number of positive uses in scientific research and other pattern-matching tasks, and they all involve using AI to help narrow down what to focus on. All of those use AI as a way to filter and group information, not as the end result, unlike the current trend of AI being shoved into everything (rough sketch of the idea at the end of this comment).
Heck, there are some positive uses that could work with the right guardrails, like a supplemental tool when learning a language (with an educator for oversight!) or as natural-language output for something that is created through an algorithm that returns accurate results.
Mainly, the exact opposite of what is being forced on everyone right now, which is inaccurate slop that is full of errors but presented as reliable and helpful.
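To illustrate that “filter and group, then let a human decide” pattern, here is a small hedged sketch using scikit-learn with invented report texts (nothing from an actual deployment): the model only clusters similar items so a person can review a group at a time; it doesn’t produce any final answer itself.

```python
# Hypothetical example: group a pile of short reports so a human reviews
# one cluster at a time instead of reading everything. The texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "server latency spiked overnight",
    "login page throws 500 errors",
    "latency alarms firing in eu-west",
    "users report failed logins since the deploy",
    "disk usage climbing on the database host",
    "database volume almost full",
]

vectors = TfidfVectorizer().fit_transform(reports)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# The model only groups; a human still reads each group and decides what it means.
for cluster in range(3):
    print(f"group {cluster}:")
    for text, label in zip(reports, labels):
        if label == cluster:
            print("  -", text)
```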
Is there any resource I can read or start from that’s useful for the average person? Just pointers so I don’t go down slop rabbit holes, which I’m sure there are a lot of.
Not sure how detailed you want to get, but the two that I know of off the top of my head are looking for exoplanets and finding signs of where humans used to live. Here are a couple of easy reads on those applications:
https://blog.tensorflow.org/2019/11/identifying-exoplanets-with-neural.html?m=1
https://www.themirror.com/news/world-news/groundbreaking-ai-uncovers-lost-ancient-945182
Useful medical applications are similar, where the pattern matching can be used to narrow down what to look for, but there is a human verification step afterwards.
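As a rough, hypothetical sketch of the pipeline shape the exoplanet write-up describes (the real Kepler model is far larger, and this one is untrained and runs on random data): a small 1D network scores each light curve, and only the top-scoring candidates get passed to a human for verification.

```python
# Hedged illustration of "narrow down candidates, human verifies":
# score fake light curves and keep only the best few for an astronomer to inspect.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),            # score: "looks like a transit?"
)

light_curves = torch.randn(100, 1, 200)        # 100 fake brightness time series
with torch.no_grad():
    scores = model(light_curves).squeeze(1)

# Keep only the top candidates; a human looks at these few, not all 100 curves.
top_scores, top_idx = torch.topk(scores, k=5)
print("candidates for human review:", top_idx.tolist())
```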