

Yeah, there’s a mysticism that’s sprung up around LLMs, as if they’re some magic black box rather than a well-understood construct — well understood to the point where you can buy books on Amazon about how to write one from scratch.
It’s not like ChatGPT or Claude appeared out of nowhere; the people who built them give talks about how they work all the time.
If these AI researchers really have no idea how these things work, then how can they possibly improve the models or techniques?
Take the recent claim that, after upgrades, these LLMs can now “reason” about problems: how did they actually go and add that capability if the whole thing is a black box?