A fun thing I recently learned about Large Language Models (LLMs) is that they understand base64, a simple encoding of text. Here’s a demonstration: the base64 encoding of “What is 2 + 3?” is V2hhdC...
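For anyone who wants to reproduce the encoding, Python’s standard library does it in one call (a quick sketch; the prompt string is taken from the post):

```python
import base64

# Encode the question from the post, then decode to verify the round trip.
prompt = "What is 2 + 3?"
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)  # → V2hhdCBpcyAyICsgMz8=

decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # → What is 2 + 3?
```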
Attached a pretty cool article covering it. This is something I never would have thought of before.
It’s not the LLM that understands your encoded string; it’s simply a preprocessing filter that recognizes the signature of a base64-encoded string, decodes it, and passes it back to the LLM.
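To make the claim concrete, here’s a hypothetical sketch of the kind of filter being described — not any real product’s pipeline, just one plausible implementation: a regex spots base64-looking tokens, and valid ones are decoded in place before the prompt reaches the model.

```python
import base64
import binascii
import re

# Heuristic: runs of at least 16 base64 characters, with optional "=" padding.
B64_RE = re.compile(
    r"\b(?:[A-Za-z0-9+/]{4}){4,}(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?"
)

def preprocess(prompt: str) -> str:
    """Replace base64-looking substrings with their decoded text."""
    def try_decode(m: re.Match) -> str:
        try:
            text = base64.b64decode(m.group(0), validate=True).decode("utf-8")
        except (binascii.Error, UnicodeDecodeError):
            return m.group(0)  # not valid base64/UTF-8: leave it untouched
        return text
    return B64_RE.sub(try_decode, prompt)

print(preprocess("Decode this: V2hhdCBpcyAyICsgMz8="))
# → Decode this: What is 2 + 3?
```

The model behind such a filter would only ever see the decoded plaintext, which is exactly why the local-model test suggested below would distinguish the two explanations.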
Agreed, this is a relatively simple “tool” in LLM parlance. It’s what the Model Context Protocol (MCP) is designed to facilitate.
To verify, the author should try the same prompts on a local LLM with no tools enabled; most likely the LLM will respond with nonsense.
I was thinking the same thing. Does anyone have a local LLM they could test against? A local setup shouldn’t have the same preprocessing up front.