

In 2008 you could do a web search and get relevant, real results.
https://en.wikipedia.org/wiki/Google_bombing
Google had (mostly) solved this problem by 2007. I couldn't have named another search engine that could claim the same.
But spamdexing has been an ongoing arms race between search engines and websites since the nineties. Google never fully solved it; they just did a better job than most, up until the big executive shift in 2018.
The spam-site takeover of your search results today is as much a consequence of modernized spamdexing as it is of any search engine's own failures. None of those AI content-mill sites existed to be indexed 20 years ago.





I think it depends heavily on what you're using the Internet for. In the business world, we've improved system redundancy, backup/recovery, and transfer speeds by leaps and bounds.
Back in 2008, I was in my car driving to Dallas to escape Hurricane Ike, with a trunk full of server hardware needed to keep our business running. Datacenter proliferation has fully eliminated the need to do anything like that again.
We have significantly more high-speed broadband. We have superior wireless connectivity. HTML5 is much better than its predecessors. We've modernized APIs and broadly adopted JSON for transmission. The hardware is so much better, from phones to routers to Raspberry Pis for self-hosting.
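
To make the API/JSON point concrete, here's a minimal TypeScript sketch of what a modern JSON API call looks like: one request, one parse, typed data, versus the XML/SOAP plumbing that was common in 2008. The /status endpoint and ServerStatus shape are hypothetical stand-ins, not any real service:

    // Hypothetical response shape for a server health endpoint.
    interface ServerStatus {
      host: string;
      healthy: boolean;
      uptimeSeconds: number;
    }

    // One fetch, one json() call, and the data is usable.
    // fetch is built into browsers and Node 18+.
    async function checkStatus(baseUrl: string): Promise<ServerStatus> {
      const res = await fetch(`${baseUrl}/status`); // hypothetical endpoint
      if (!res.ok) throw new Error(`status check failed: HTTP ${res.status}`);
      return (await res.json()) as ServerStatus;
    }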
I get that you don't like the current content of big Web 2.0 publishers. But you're really missing the forest for a few big ugly trees.