I built a Python script that uses a local Ollama LLM to automatically find and add movies to Radarr.
It picks random films from your library, asks Ollama for similar suggestions based on theme and atmosphere, validates against OMDb, scores with plot embeddings, then adds the top results to Radarr automatically.
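To make the pipeline concrete, here is a rough sketch of two of its stages: asking the local Ollama instance for suggestions, and the cosine similarity used to score plot embeddings. The prompt wording, model name, and helper names are my assumptions, not the script's actual code; only Ollama's default port and `/api/generate` endpoint are standard.

```python
# Sketch of the suggestion + scoring stages (assumed names; see the repo
# for the real implementation). Standard library only.
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port


def ask_ollama(seed_title: str, model: str = "llama3") -> list[str]:
    """Ask the local LLM for films similar in theme/atmosphere to seed_title."""
    prompt = (
        f"List 5 movies similar in theme and atmosphere to '{seed_title}'. "
        "Reply with one title per line, nothing else."
    )
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        text = json.load(resp)["response"]
    # Strip list markers the model may add, drop empty lines.
    return [line.strip("-* ").strip() for line in text.splitlines() if line.strip()]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two plot-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

Each suggested title would then be validated against OMDb before its plot embedding is scored against the seed film's.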
Examples:
- Whiplash → La La Land, Birdman, All That Jazz
- The Thing → In the Mouth of Madness, It Follows, The Descent
- In Bruges → Seven Psychopaths, Dead Man’s Shoes
Features:
- 100% local, no external AI API
- --auto mode for daily cron/Task Scheduler
- --genre "Horror" for themed movie nights
- Persistent blacklist, configurable quality profile
- Works on Windows, Linux, and macOS
GitHub: https://github.com/nikodindon/radarr-movie-recommender
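For reference, invocation might look like this; the --auto and --genre flags are from the feature list above, but the script filename is an assumption, so check the repo's README:

```shell
# Themed suggestions for a horror movie night
# (script filename is assumed -- see the repo's README for the real one):
python3 radarr_recommender.py --genre "Horror"

# Unattended daily run: add a line like this via `crontab -e`
# to run every day at 06:00:
# 0 6 * * * /usr/bin/python3 /path/to/radarr_recommender.py --auto
```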


Anti-AI evangelism is at its peak rn.
@pfr @nikodindon That assumes it won’t get worse, which I hope it does. AI companies have forced me to take down web stuff that I had running for almost 2 decades, because their scrapers are so aggressive.
Found the time traveler!
Like what and what have you tried to block it?
@meldrik They’re impossible to block based on IP ranges alone. It’s why all the FOSS git forges and bug trackers have started using stuff like Anubis. But yes, I initially tried to block them (this was before Anubis existed).
It was a few things I had to take down: a gitweb instance with some of my own repos, for example, and a personal photo gallery. The scrapers would do pathological things like running repeated search queries for random email addresses or strings.
I’m hosting several things, including Lemmy and PeerTube. I haven’t really been aware of any scrapers, but do you know of any software that can help block them?