

I think you could rationally explore ghosts in the “radically redefining them” sense. Ghosts could rationally exist as an artifact of your mind, and saying that is not the same thing as saying they don’t exist. Hallucinations exist; they aren’t real, but they exist. Ghosts could rationally exist in exactly the same way, as processes in our own heads. It’s only when you start saying they interact with the world outside people’s heads that the idea stops being something you can reconcile.
It’s fundamentally not the same thing as autocomplete. Give autocomplete all the data an LLM has, every gig, every terabyte of it, and it still won’t be an LLM. Autocomplete lacks the semantic-meaning layer, along with some other pieces. People say it’s nothing but autocomplete because they misunderstand what the training objective (the thing backpropagation is actually optimizing) does; saying “the objective is to predict the next word” is not even close to the equivalent of “it’s doing the same thing as autocomplete.”
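A toy sketch of what I mean (pure illustration, assuming numpy; the corpus, sizes, and names are made up, nothing like a real system): both pieces below are built on the same “predict the next word” objective, but the lookup table can only replay bigrams it has literally seen, while the tiny gradient-trained model learns dense embeddings it could in principle generalize from.

```python
import numpy as np
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# 1. "Autocomplete": a bigram lookup table. No representation of meaning,
#    just counts of which word literally followed which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(word):
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

# 2. A tiny neural next-word model: one embedding per word plus a softmax
#    output layer, trained by gradient descent on the SAME next-word objective.
#    What differs is the mechanism: it learns dense vectors, not a lookup table.
rng = np.random.default_rng(0)
d = 8                              # toy embedding dimension
E = rng.normal(0, 0.1, (V, d))     # input embeddings
W = rng.normal(0, 0.1, (d, V))     # output projection
lr = 0.1

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]
for _ in range(300):
    for x, y in pairs:
        h = E[x]                           # look up embedding
        logits = h @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()                       # softmax over the vocabulary
        dlogits = p.copy()
        dlogits[y] -= 1.0                  # cross-entropy gradient: p - one_hot(y)
        dh = W @ dlogits
        W -= lr * np.outer(h, dlogits)
        E[x] -= lr * dh

def neural_next(word):
    logits = E[idx[word]] @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return vocab[int(np.argmax(p))]

print("bigram lookup after 'the':    ", autocomplete("the"))
print("tiny neural model after 'the':", neural_next("the"))
```

Scale the second approach up by many orders of magnitude and stack attention layers on top, and the objective is still “predict the next word,” but calling the result autocomplete ignores everything the learned representations are doing.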
I’m writing this short reply in the hope that, when I have more time in the next two days or so, I’ll come back with a more complete explanation (including why context windows have to be limited).