I have some data science background, and I kinda understand how LLM parameter tuning works and how a model generates text.
To simplify my understanding: given a prompt like "Write a program to check if the input is an odd number", the LLM converts the prompt into token embeddings and then plays a dice game/probability game, repeatedly sampling the next token given the prompt and everything generated so far.
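Something like this toy sketch is my mental model (made-up vocabulary and logits, not a real model):

```python
import numpy as np

# Toy "dice game": the model assigns a score (logit) to every token in its
# vocabulary, softmax turns the scores into probabilities, and one token is
# sampled. Repeat, appending each sampled token to the context.
vocab = ["def", "is_odd", "(", "n", "):", "return", "%", "2", "==", "1"]
logits = np.array([3.1, 0.2, 0.5, 1.7, 0.4, 2.2, 1.0, 0.3, 0.6, 0.1])

probs = np.exp(logits - logits.max())   # numerically stable softmax
probs /= probs.sum()

next_token = np.random.choice(vocab, p=probs)  # roll the dice
print(next_token)
```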
Now my question is: how are current LLMs able to parse through a bunch of search results and still play the above dice game? At times one reads through, say, 10 URLs and generates results. How do they achieve this? What's the engineering behind generating and consuming such huge volumes of text? I always argue about the theoretical limitations of LLMs, but now that these "agents" are able to manage huge volumes of text, I don't seem to have a good argument. So what exactly is happening? And what is the practical, non-theoretical limit of AI?

The underlying issue with LLMs, in my opinion, is their nondeterministic nature. Even with the temperature (the randomness of outputs) zeroed out, you can get significantly different results from two almost identical prompts.
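For context, temperature just rescales the logits before the softmax, and zero collapses to a greedy argmax; a toy sketch with made-up logits:

```python
import numpy as np

def sample(logits: np.ndarray, temperature: float) -> int:
    """Toy decoding step: temperature rescales logits before softmax."""
    if temperature == 0:
        return int(np.argmax(logits))      # greedy: always the top token
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.3])
print(sample(logits, 0.0))  # deterministic in this toy; real serving stacks
print(sample(logits, 0.8))  # can still vary due to batching/float effects
```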
However, building out an ecosystem to support a new technology's weaknesses is a fairly common progression. Compare it to the internet: browser caches, CDNs (content delivery networks), code minifiers, etc. are all ways to combat latency, a fundamental problem for the internet.
As for the effectiveness of these solutions, RAG (retrieval-augmented generation) helps a lot when generating text against a select corpus. It's what allows the linked sources in things like ChatGPT and Google's AI results. It's also what a lot of companies are using to search their support pages and the like. It's maybe not quite as good as speaking to a person, but it's faster.
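If it helps, the core of RAG fits in a few lines. Here's a minimal sketch where embed() is a toy stand-in for a real embedding model and the corpus is made up:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    v = np.zeros(64)
    for word in text.lower().split():
        v[hash(word) % 64] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank pre-chunked documents by cosine similarity to the question."""
    q = embed(question)
    scores = [float(q @ embed(chunk)) for chunk in corpus]
    return [corpus[i] for i in np.argsort(scores)[-k:][::-1]]

corpus = [
    "Refunds are processed within 5 business days.",
    "To reset your password, use the account settings page.",
    "Support hours are 9am to 5pm on weekdays.",
]
chunks = retrieve("How do I reset my password?", corpus)
# The retrieved chunks get prepended to the prompt, which is why the
# model can cite linked sources.
prompt = "Answer using only these sources:\n" + "\n".join(chunks)
print(prompt)
```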
Similarly, reasoning models and techniques for managing a model's context have both shown demonstrable improvements in benchmarks.
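On context management, one common flavor is just budgeting tokens and summarizing older turns; a sketch where count_tokens() and summarize() are hypothetical stand-ins:

```python
def count_tokens(text: str) -> int:
    return len(text.split())  # crude proxy for a real tokenizer

def summarize(turns: list[str]) -> str:
    return "Summary of earlier turns: ..."  # would itself call the LLM

def build_context(system: str, history: list[str], budget: int = 4000) -> list[str]:
    """Keep the newest turns verbatim; compress the rest into a summary."""
    kept: list[str] = []
    used = count_tokens(system)
    for turn in reversed(history):  # walk from newest to oldest
        used += count_tokens(turn)
        if used > budget:
            older = history[: len(history) - len(kept)]
            return [system, summarize(older)] + kept
        kept.insert(0, turn)
    return [system] + kept  # everything fit, no summarization needed
```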
I’m not sure I personally believe this makes LLMs a replacement for humans in most situations, but it at least demonstrates forward progress for GenAI.
Interesting. The thing is, I can pick up something new quite easily, but at the same time I'm very resistant to change until there's good reasoning and some sort of scientific confirmation.
I need to discover good use cases for LLMs/AI and make peace with it, I guess!
Yeah, that’s fair. I haven’t jumped into the whole agentic side of things, as I find LLMs consistently fail at lower-level stuff.
Everyone says it’s great at prototyping or writing documents, etc., but I think that’s just because people have low standards. When coding, I find that it quickly messes things up or lacks good quality control (which you only notice if you’re familiar with the domain). For writing it’s fine, but the tone and language always feel off and certainly don’t sound like me.
Either way, I would suggest playing around with them to see how they fit into how you do things. I think the pace of new implementations is finally starting to slow down, and they aren’t going away, so it may be a good time to see if all the fuss is worth it to you.