I built a Python script that uses a local Ollama LLM to automatically find and add movies to Radarr.

It picks random films from your library, asks Ollama for similar suggestions based on theme and atmosphere, validates against OMDb, scores with plot embeddings, then adds the top results to Radarr automatically.
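Roughly, the loop looks like this (a minimal sketch of the flow described above; the function bodies are illustrative stand-ins, not the actual script's Ollama/OMDb/Radarr calls):

```python
import random

# Illustrative stand-ins for the real integrations; the actual script
# talks to Ollama, OMDb, and Radarr over HTTP.
def ask_ollama_for_similar(title):
    # e.g. prompt a local model: "Suggest films similar in theme to <title>"
    return {"Whiplash": ["La La Land", "Birdman", "All That Jazz"]}.get(title, [])

def validate_with_omdb(title):
    return True  # the real script checks the title actually exists in OMDb

def embedding_score(seed, candidate):
    return random.random()  # the real script compares plot embeddings

def add_to_radarr(title):
    print(f"adding {title} to Radarr")

def recommend(library, top_n=2):
    seed = random.choice(library)                       # 1. pick a random library film
    suggestions = ask_ollama_for_similar(seed)          # 2. ask the local LLM for similar titles
    valid = [t for t in suggestions if validate_with_omdb(t)]       # 3. validate via OMDb
    ranked = sorted(valid, key=lambda t: embedding_score(seed, t),  # 4. score with embeddings
                    reverse=True)
    for title in ranked[:top_n]:                        # 5. add the top results
        add_to_radarr(title)
    return ranked[:top_n]

recommend(["Whiplash"])
```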

Examples:

  • Whiplash → La La Land, Birdman, All That Jazz
  • The Thing → In the Mouth of Madness, It Follows, The Descent
  • In Bruges → Seven Psychopaths, Dead Man’s Shoes

Features:

  • 100% local, no external AI API
  • --auto mode for daily cron/Task Scheduler
  • --genre "Horror" for themed movie nights
  • Persistent blacklist, configurable quality profile
  • Works on Windows, Linux, Mac

GitHub: https://github.com/nikodindon/radarr-movie-recommender

  • illusionist@lemmy.zip · 1 day ago

    Huh? There are other ways to find similarities between movies without using an LLM. You may use AI to find similar movies, but it’s nonsense that everyone has to ask an LLM to link movies.

      • illusionist@lemmy.zip · 1 day ago

        OP wrote a Python script that calls an LLM to ask for a recommendation.

        But you’re right, OP doesn’t say that everyone has to do it.

        • Eager Eagle@lemmy.world · 1 day ago (edited)

          No, it doesn’t do that either. It gets embeddings from an LLM and uses those to rank candidates.

          • bandwidthcrisis@lemmy.world · 14 hours ago

            I had to look up embeddings: so this is comparing the encoding of movies as a similarity test?

            Which can work because the encoding methods can indicate closeness of meaning.

            And that’s why this isn’t running an LLM in any way.
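In code, that similarity test is typically cosine similarity between embedding vectors; a toy, stdlib-only sketch (the 3-dimensional vectors stand in for real plot embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for plot embeddings
seed = [0.9, 0.1, 0.3]          # e.g. the library movie
candidates = {
    "close match": [0.8, 0.2, 0.3],
    "unrelated":   [0.1, 0.9, 0.1],
}

# Rank candidates by how closely their embedding points the same way as the seed's
ranked = sorted(candidates,
                key=lambda name: cosine_similarity(seed, candidates[name]),
                reverse=True)
print(ranked[0])  # → close match
```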

          • illusionist@lemmy.zip · 1 day ago (edited)

            Are you a trollm?

            If not, I’m just too stupid to understand op.

            I built a Python script that uses a local Ollama LLM to automatically find and add movies to Radarr.

            OP wrote a Python script that calls an LLM to ask for a recommendation.

            If that’s not the same, I don’t know what is. Gotta go back to school, I guess.

            • Eager Eagle@lemmy.world · 1 day ago (edited)

              It’s not; I read the code. It’s not merely asking the LLM for recommendations, it’s using embeddings to compute scores based on similarities.

              It’s a lot closer to traditional natural language processing than to how my dad would use GPT to discuss philosophy.