

My word filters reliably block a third of my front page here. They include every keyword seen here and much more.
Freedom is the right to tell people what they do not want to hear.
Well let’s hear some suggestions then.
I mean - it’s certainly possible, but you’d still be risking that 500k prize if you got caught.
And most people seem to tap out because of loneliness or starvation, so if you were going to cheat, you’d pretty much have to smuggle in either food or a better way of getting it - like a decent fishing rod and proper lures.
I’ve put things in my ass for no points. 1000 points sure sounds worth it.
They do regular health check-ins with the contestants, and if you’re not losing weight but there’s no footage of you catching food, they’re going to figure out pretty quickly that something’s up.
On top of that, the locations are chosen so that just hiking out to you with food would be a survival challenge in itself - and coming in by boat would almost certainly be noticed.
Interestingly, I’ve just been binge-watching the show for the first time. I’m currently on season 5.
Two
spaces
before
you
press
enter.
What exactly are you suggesting? Do you have an actual solution to offer, or do you just want to be a smart ass?
When people have sex, they usually do it in private, without any witnesses. Whatever happens during that time is often difficult to prove afterward, since it typically comes down to one person’s word against the other’s. Unless there’s clear physical evidence of assault, it can be extremely hard to establish that something was done against someone’s will. Most reasonable people would agree that “she said so” alone doesn’t amount to proof - and isn’t, by itself, a valid basis for sending someone to prison.
“If we just trusted women”
We don’t trust people based on their gender. We trust them based on credibility and evidence. If there’s even the tiniest amount of doubt, then it’s better to let the guilty walk free than to put an innocent person in jail. And I’m speaking broadly here - not about Trump specifically.
Now is prime berry/mushroom season, so a few months with relative ease, then a few more months if I’m able to fish and stay warm - but the winter would be the end of me.
They’re generally just referred to as “deep learning” or “machine learning”. The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article named ‘Nanotechnology and international security’:
By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
You’re moving the goalposts. First you claimed understanding requires awareness, now you’re asking whether an AI knows what a molecule is - as if that’s even the standard for functional intelligence.
No, AI doesn’t “know” things the way a human does. But it can still reliably identify ungrammatical sentences or predict molecular interactions based on training data. If your definition of “understanding” requires some kind of inner experience or conscious grasp of meaning, then fine. But that’s a philosophical stance, not a technical one.
The point is: you don’t need subjective awareness to model relationships in data and produce useful results. That’s what modern AI does, and that’s enough to call it intelligent in the functional sense - whether or not it “knows” anything in the way you’d like it to.
Most definitions are imperfect - that’s why I said the term AI, at its simplest, refers to a system capable of performing any cognitive task typically done by humans. Doing things faster, or even doing things humans can’t do at all, doesn’t conflict with that definition.
Humans are unarguably generally intelligent, so it’s only natural that we use “human-level intelligence” as the benchmark when talking about general intelligence. But personally, I think that benchmark is a red herring. Even if an AI system isn’t any smarter than we are, its memory and processing capabilities would still be vastly superior. That alone would allow it to immediately surpass the “human-level” threshold and enter the realm of Artificial Superintelligence (ASI).
As for something like making a sandwich - that’s a task for robotics, not AI. We’re talking about cognitive capabilities here.
“Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.
You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.
The issue here is that machine learning also falls under the umbrella of AI.
So… not intelligent.
But they are intelligent - just not in the way people tend to think.
There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.
Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.
It’s certainly not any task, that’d be AGI.
Any individual task, I mean - not every task.
I have next to zero urge to “keep up with the news.” I’m under no obligation to know what’s going on in the world at all times. If something is important, I’ll hear about it from somewhere anyway - and if I don’t hear about it, it probably wasn’t that important to begin with.
I’d argue the “optimal” amount of news is whatever’s left after you actively take steps to avoid most of it. Unfiltered news consumption in today’s environment is almost certainly way, way too much.