I built a Python script that uses a local Ollama LLM to automatically find and add movies to Radarr.

It picks random films from your library, asks Ollama for similar suggestions based on theme and atmosphere, validates against OMDb, scores with plot embeddings, then adds the top results to Radarr automatically.
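The suggestion step of that pipeline might look something like this (a minimal sketch, not the repo's actual code: the endpoint and model name are Ollama defaults, and the function names, prompt wording, and reply parsing are my own assumptions):

```python
import json
import re
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_prompt(title: str, year: int) -> str:
    # Ask for bare titles, one per line, so the reply is easy to parse.
    return (
        f"Suggest 5 films similar in theme and atmosphere to "
        f"{title} ({year}). Reply with one title per line, nothing else."
    )

def parse_suggestions(reply: str) -> list[str]:
    # Strip list markers the model sometimes adds ("1. ", "- "), while
    # leaving leading digits that belong to a title ("28 Days Later").
    titles = []
    for line in reply.splitlines():
        line = re.sub(r"^\s*(?:\d+[.)]\s*|[-*]\s*)", "", line).strip()
        if line:
            titles.append(line)
    return titles

def ask_ollama(title: str, year: int, model: str = "llama3") -> list[str]:
    # One non-streaming generate call against the local Ollama server.
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(title, year),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return parse_suggestions(json.load(resp)["response"])
```

The OMDb validation and embedding-scoring stages would then filter this raw list before anything is sent to Radarr.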

Examples:

  • Whiplash → La La Land, Birdman, All That Jazz
  • The Thing → In the Mouth of Madness, It Follows, The Descent
  • In Bruges → Seven Psychopaths, Dead Man’s Shoes

Features:

  • 100% local, no external AI API
  • --auto mode for daily cron/Task Scheduler
  • --genre "Horror" for themed movie nights
  • Persistent blacklist, configurable quality profile
  • Works on Windows, Linux, Mac
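For the daily cron case, a crontab entry could look like this (the --auto flag is from the feature list above; the script path and name are placeholders, so check the repo for the real ones):

```shell
# Run the recommender unattended every day at 03:30.
# Path and script name are illustrative, not from the repo.
30 3 * * * /usr/bin/python3 /opt/radarr-recommender/recommend.py --auto
```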

GitHub: https://github.com/nikodindon/radarr-movie-recommender

  • Scrath@lemmy.dbzer0.com · 5 hours ago

    I remember building something vaguely related in a university course on AI before ChatGPT was released and the whole LLM thing hadn’t taken off.

    The user had the option to enter a couple of movies (so long as they were present in the weird semantic database thing our professor told us to use), and we calculated a similarity matrix between them and all other movies in the database, based on their tags and on putting the descriptions through a natural language processing pipeline.

    The result was the user getting a couple surprisingly accurate recommendations.

    Considering we had to calculate this similarity score for every movie in the database it was obviously not very efficient but I wonder how it would scale up against current LLM models, both in terms of accuracy and energy efficiency.
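That tag-vector approach can be sketched in a few lines (illustrative only: the tags and movie entries are made up, not from the course database, and this uses plain cosine similarity rather than whatever the course's semantic toolkit did):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse tag-count vectors;
    # Counter returns 0 for tags one movie lacks.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy stand-in for the database (hypothetical tags).
movies = {
    "Whiplash":   Counter({"jazz": 2, "drama": 1, "obsession": 1}),
    "La La Land": Counter({"jazz": 2, "romance": 1, "drama": 1}),
    "The Thing":  Counter({"horror": 2, "isolation": 1, "alien": 1}),
}

def recommend(query: str, k: int = 2) -> list[str]:
    # Scores the query against every other movie -- the O(n)-per-query
    # cost that made the original approach scale poorly.
    scores = {m: cosine(movies[query], v)
              for m, v in movies.items() if m != query}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Unlike an LLM at nonzero temperature, the same query here always yields the same ranking.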

    One issue, if you want to call it that, is that our approach was deterministic: enter the same movies, get the same results. I don't think an LLM is as predictable as that.

    • four@lemmy.zip · 4 hours ago

      I’m not an expert, but LLMs should still be deterministic. If you run the model with 0 creativity (or whatever the randomness setting is called) and provide exactly the same input, it should produce the same output. That’s not how it’s usually configured, but it should be possible. Now, if you change the input at all (change the order of movies, misspell a title, etc.), then the output can change in an unpredictable way.
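That intuition can be checked with a toy softmax sampler (not Ollama's actual implementation, just an illustration): at temperature 0 decoding collapses to a deterministic argmax, so the random seed stops mattering.

```python
import math
import random

def sample(logits: list[float], temperature: float,
           rng: random.Random) -> int:
    # Temperature 0: greedy decoding, pick the highest logit every time.
    if temperature == 0:
        return max(range(len(logits)), key=logits.__getitem__)
    # Otherwise: softmax-weighted random draw, which varies with the rng.
    weights = [math.exp(l / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]
```

With real Ollama, the equivalent knobs are the `temperature` and `seed` request options.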

      • hendrik@palaver.p3x.de · edited · 2 hours ago

        Yes. I think determinism is a misunderstood concept. In computing, it means the exact same input always leads to the same output. The output could be entirely wrong, though, as long as it stays the same. There’s some benefit in introducing randomness to AI, but it can be run in an entirely deterministic way as well. It just depends on the settings. (The setting is called “temperature”.)

    • LiveLM@lemmy.zip · 4 hours ago

      One issue, if you want to call it that, is that our approach was deterministic: enter the same movies, get the same results. I don’t think an LLM is as predictable as that.

      Maybe lowering the temperature will help with this?
      Besides, a tinge of randomness could even be considered a fun feature.