I spent the better part of 3 days pulling my hair out over a script that just wouldn’t cooperate. Logs, testing, asking deepseek—nothing.
I made a post here yesterday asking about agentic llm models, and someone mentioned opencode.
I ran it from the code directory, asked it to find the bug, and within a minute it pointed out a mistake I don't think I would ever have found on my own. A silly little error. Fixed in a minute.
If a free model caught that instantly, it really puts things into perspective. Anthropic recently found 22 vulnerabilities in Firefox using their largest models. That’s not just fixing syntax; that’s hardening a massive browser against exploits.
I’m excited because the barrier to shipping stable code just dropped through the floor. But I’m also scared. Not of the tech itself, but of what happens when capitalists decide to fully automate labor. The game is changing fast.
The open‑source community is great at building tools. We need to get equally good at talking about who those tools really serve—and how we make sure they empower workers, not just replace them.


While it's a good use case, it still requires human verification. I used to run my C++ code through clang analyzers, and sometimes through open LLM models as well. Clang is pretty good; the LLMs hallucinate there too.
Code review is a much better use case for LLMs than writing code, but it still needs a grain of salt.