Hello!

As a handsome local AI enjoyer™, you’ve probably noticed one of the big flaws with LLMs:

They lie. Confidently. ALL THE TIME.

(Technically, they “bullshit”: https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

Not a model, not a UI, not magic voodoo.

A glass box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in
  • the original doc then gets moved to a sub-folder, so you always know which file a SUMM came from
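
For the curious: the “SHA-256 provenance baked in” bit is deliberately boring. Something in this spirit (illustrative sketch; the real file naming and header format live in the repo):

```python
# Rough sketch of the provenance stamp (illustrative; not the exact SUMM_*.md format).
import datetime
import hashlib
import pathlib

def summ_stub(src: pathlib.Path, kb_dir: pathlib.Path, summary_text: str) -> pathlib.Path:
    """Write a SUMM_*.md with the source file's SHA-256 baked into the header."""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    out = kb_dir / f"SUMM_{src.stem}.md"
    header = (
        f"<!-- source: {src.name} -->\n"
        f"<!-- sha256: {digest} -->\n"
        f"<!-- generated: {datetime.date.today().isoformat()} -->\n\n"
    )
    out.write_text(header + summary_text, encoding="utf-8")
    return out
```

Every claim in a summary can be traced back to the exact bytes it came from. 1990s engineering, like I said.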

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.
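
Under the hood, “promote” is roughly “embed the SUMM text and upsert it into a Qdrant collection, with provenance riding along in the payload”. A hedged sketch (collection name, id scheme and embed() are placeholders, not the real config):

```python
# Illustrative only: collection name, id scheme and embed() are placeholders.
import hashlib
import pathlib
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

def promote_to_vault(summ_path: pathlib.Path, embed) -> None:
    """Upsert one SUMM_*.md into a Qdrant collection (assumes the collection already exists)."""
    client = QdrantClient(url="http://localhost:6333")
    text = summ_path.read_text(encoding="utf-8")
    point_id = int(hashlib.sha256(summ_path.name.encode()).hexdigest()[:15], 16)  # stable id per file
    client.upsert(
        collection_name="vault",
        points=[PointStruct(
            id=point_id,
            vector=embed(text),  # whatever local embedding model you run
            payload={"file": summ_path.name, "text": text},
        )],
    )
```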

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
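
If you want a feel for the triple-pass shape, it’s conceptually this (heavily simplified; prompts and model names are placeholders for whatever llama-swap is serving, and the real thing logs every pass):

```python
# Simplified triple-pass sketch: thinker -> critic -> thinker, grounded on Vault facts only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # llama.cpp / llama-swap

def ask(model: str, system: str, user: str) -> str:
    r = client.chat.completions.create(
        model=model,
        temperature=0.1,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return r.choices[0].message.content

def mentats(question: str, facts: str) -> str:
    draft = ask("thinker", "Answer ONLY from FACTS. If something is missing, say so.",
                f"FACTS:\n{facts}\n\nQUESTION: {question}")
    critique = ask("critic", "Attack the draft. List unsupported claims and gaps.",
                   f"FACTS:\n{facts}\n\nDRAFT:\n{draft}")
    return ask("thinker", "Revise the draft. Refuse anything the facts cannot support.",
               f"FACTS:\n{facts}\n\nDRAFT:\n{draft}\n\nCRITIQUE:\n{critique}")
```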

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
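
If you’re wondering how dumb the machinery is: very. A sketch of the fast-recall half (field names, TTL and touch limits are illustrative, not the real schema):

```python
# Illustrative verbatim fact store with TTL + touch limits (not the real schema).
import json
import pathlib
import time

STORE = pathlib.Path("vodka_facts.json")
TTL_SECONDS = 30 * 24 * 3600   # example: facts expire after 30 days
MAX_TOUCHES = 50               # example: retire a fact after N recalls

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def _save(facts: dict) -> None:
    STORE.write_text(json.dumps(facts, indent=2))

def store_fact(key: str, value: str) -> None:          # the "!!" path
    facts = _load()
    facts[key] = {"value": value, "stored_at": time.time(), "touches": 0}
    _save(facts)

def recall_fact(key: str) -> str | None:               # the "??" path
    facts = _load()
    entry = facts.get(key)
    if not entry:
        return None
    if time.time() - entry["stored_at"] > TTL_SECONDS or entry["touches"] >= MAX_TOUCHES:
        facts.pop(key)                                  # landfill prevention
        _save(facts)
        return None
    entry["touches"] += 1
    _save(facts)
    return entry["value"]                               # verbatim, no LLM in the loop
```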


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

  • brettvitaz@programming.dev (2 hours ago):

    I’m sure this is neat but I couldn’t get through the ai generated description without getting turned off. The way ai writes is like nails on a chalkboard

  • WolfLink@sh.itjust.works (3 hours ago):

    I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues - it’s just there are LLMs double-checking other LLMs work to try to find those issues. There are still no guarantees since it’s still all LLMs.

  • UNY0N@lemmy.wtf (2 hours ago):

    THIS IS AWESOME!!! I’ve been working on using an obsidian vault and a podman ollama container to do something similar, with VSCodium + continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.

    I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.

    On an unrelated note, you can download wikipedia. Might work well in conjunction with your conductor.

    https://en.wikipedia.org/wiki/Wikipedia:Database_download

  • Disillusionist@piefed.world (2 hours ago):

    Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn’t, and actually being serious about addressing its problems and limitations. It’s projects like yours that can demonstrate pathways toward achieving better AI.

  • sp3ctr4l@lemmy.dbzer0.com (3 hours ago):

    This seems astonishingly more useful than the current paradigm, this is genuinely incredible!

    I mean, fellow Autist here, so I guess I am also… biased towards… facts…

    But anyway, … I am currently uh, running on Bazzite.

    I have been using Alpaca so far, and have been successfully running Qwen3 8B through it… your system would address a lot of problems I have had to figure out my own workarounds for.

    I am guessing this is not available as a flatpak, lol.

    I would feel terrible to ask you to do anything more after all of this work, but if anyone does actually set up a podman installable container for this that actually properly grabs all required dependencies, please let me know!

  • bilouba@jlai.lu (7 hours ago):

    Very impressive! Do you have benchmarks to test the reliability? A paper would be awesome to contribute to the science.

    • SuspciousCarrot78@lemmy.world (OP, 7 hours ago):

      Just bush-league ones I did myself, that have no validation or normative values. Not that any of the LLM benchmarks seem to have those either LOL

      I’m open to ideas, time willing. Believe it or not, I’m not a code monkey. I do this shit for fun to get away from my real job.

      • bilouba@jlai.lu (5 hours ago):

        I understand, no idea how to do it myself. I heard about SWE-Bench-Lite, which seems to focus on real-world usage. Maybe try to contact “AI Explained” on YT; he’s the best IMO. Your solution might be novel or not, but he might help you figure that out. If it is indeed novel, it might be worth sharing with the larger community. Of course, I totally get that you might not want to do any of that. Thank you for your work!

  • BaroqueInMind@piefed.social (10 hours ago):

    I have no remarks, just really amused with your writing in your repo.

    Going to build a Docker and self host this shit you made and enjoy your hard work.

    Thank you for this!

    • SuspciousCarrot78@lemmy.world (OP, 10 hours ago):

      Thank you <3

      Please let me know how it works…and enjoy the >>FR settings. If you’ve ever wanted to be trolled by Bender (or a host of other 1990s / 2000s era memes), you’ll love it.

  • Alvaro@lemmy.blahaj.zone (7 hours ago):

    I don’t see how it addresses hallucinations. It’s really cool! But seems to still be inherently unreliable (because LLMs are)

    • SuspciousCarrot78@lemmy.world (OP, 7 hours ago):

      don’t see how it addresses hallucinations. It’s really cool! But seems to still be inherently unreliable (because LLMs are)

      LLMs are inherently unreliable in “free chat” mode. What llama-conductor changes is the failure mode: it only allows the LLM to argue from user curated ground truth and leaves an audit trail.

      You don’t have to trust it (black box). You can poke it (glass box). Failure leaves a trail and it can’t just hallucinate a source out of thin air without breaking LOUDLY and OBVIOUSLY.

      TL;DR: it won’t piss in your pocket and tell you it’s rain. It may still piss in your pocket (but much less often, because it’s house trained)

  • FrankLaskey@lemmy.ml (10 hours ago):

    This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

    • SuspciousCarrot78@lemmy.world (OP, 7 hours ago):

      Comment removed by (auto-mod?) cause I said sexy bot. Weird.

      Restating again: on the stuff you use the pipeline/s on? About 85-90% in my tests. Just don’t GIGO (Garbage In, Garbage Out) your source docs…and don’t use a dumb LLM. That’s why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).

  • kadu@scribe.disroot.org (9 hours ago):

    Not a model, not a UI, not magic voodoo.

    Did your AI write the post, or is your brain tuned to writing like AI because of your constant usage of it?

    • SuspciousCarrot78@lemmy.world (OP, 9 hours ago):

      Probably the latter. I unironically used “Obeyant” the other day, like a time-traveling barrister from the 1600s.

      I have 2e ASD and my hyperfocus is language.

      • kadu@scribe.disroot.org (8 hours ago):

        I like you. I was a bit of an asshole in my comment.

        But I do worry because I keep noticing very specific AI language quirks showing up in casual conversation more and more…

        • SuspciousCarrot78@lemmy.world (OP, 7 hours ago):

          I know. Unfortunately, LLMs were overwhelmingly trained on people such as myself. Ipso facto, they tend to sound like us, not the reverse.

          PS: Yes, I’m aware I just wrote ipso facto, in the context of sounding like a clanker. It really is turtles all the way down :)

          • FauxLiving@lemmy.world (3 hours ago):

            I do hate that if you speak at a language above ‘Reddit Common’ people assume that you’re a neural network instead of just someone who exercises theirs.

        • SuspciousCarrot78@lemmy.world (OP, 9 hours ago):

          Good question.

          It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.

          There are basically three modes, each stricter than the last. The default is “serious mode” (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.

          Additionally, Vodka (made up of two sub-modules - “cut the crap” and “fast recall”) operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what’s been said. That summary isn’t LLM-generated either - it’s concatenation (dumb text matching), so no made-up vibes.

          Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.

          It writes what you tell it to a text file, and when you ask about it later, it spits it back out verbatim (!! / ??).

          And that’s the baseline.
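
          For the curious, the CTC half really is that dumb. Conceptually (numbers illustrative; the real caps are config values):

          ```python
          # Dumb-on-purpose context cap: keep the last N messages, then trim to a char budget.
          def cut_the_crap(messages: list[dict], max_messages: int = 12, max_chars: int = 8000) -> list[dict]:
              window = messages[-max_messages:]          # last N turns only
              while window and sum(len(m["content"]) for m in window) > max_chars:
                  window = window[1:]                    # drop oldest until under the char cap
              return window
          ```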

          In KB mode, you make the LLM answer based on the above settings + with reference to your docs ONLY (in the first instance).

          When you >>attach <kb>, the router gets stricter again. Now the model is instructed to answer only from the attached documents.

          Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.

          The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).

          TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.
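
          Mechanically, the “only from the attached documents” bit is just prompt assembly, something in this spirit (illustrative; not the actual prompt text):

          ```python
          # Illustrative KB-mode request builder: SUMM files in, refusal rule on top.
          import pathlib

          REFUSAL_RULE = ("Answer ONLY from the FACTS below. If the facts do not contain "
                          "the answer, state exactly what is missing. Do not guess.")

          def build_kb_messages(kb_dir: str, question: str) -> list[dict]:
              facts = "\n\n".join(p.read_text(encoding="utf-8")
                                  for p in sorted(pathlib.Path(kb_dir).glob("SUMM_*.md")))
              return [{"role": "system", "content": REFUSAL_RULE},
                      {"role": "user", "content": f"FACTS:\n{facts}\n\nQUESTION: {question}"}]
          ```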

          Finally, Mentats mode (Vault / Qdrant). This is the “I am done with your shit” path.

          It’s all three of the above PLUS a counter-factual sweep.

          It runs ONLY on stuff you’ve promoted into the vault.

          What it does is take your question and form it in a particular way so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!

          In step 1, it runs that past the thinker model. The answer is then passed on to a “critic” model (a different LLM). That model’s job is to look at the thinker’s output and say “bullshit - what about xyz?”.

          It sends that back to the thinker…who then answers and provides the final output. But if it CANNOT answer the critic’s questions (based on the stored info), it will tell you. No soup for you, again!

          TL;DR:

          The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I’ve given you all the tools I could think of to do that).

          Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.

  • Angel Mountain@feddit.nl (9 hours ago):

    Super interesting build

    And if programming doesn’t pan out, please start writing for a magazine, love your style (or was this written with your AI?)

      • Karkitoo@lemmy.ml (8 hours ago):

        meat popsicle

        ( ͡° ͜ʖ ͡°)

        Anyway, the other person is right. Your writing style is great!

        I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.

        Anyway, version 2: this is a very cool idea! I cannot wait to either:

        • incorporate it into my workflows
        • let it sit in a tab to never be touched ever again
        • theorycraft, do tests and request features so much that I burn out

        Last but not least, thank you for not using github as your primary repo

        • SuspciousCarrot78@lemmy.world (OP, 8 hours ago):

          Hmm. One of those things is not like the other, one of those things just isn’t the same…

          About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC…

          …sorry, sorry…

          Anyway, enjoy. Don’t spam my Github inbox plz :)

          • Karkitoo@lemmy.ml (7 hours ago):

            Don’t spam my Github inbox plz

            I can spam your Codeberg’s then? :)

            About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC… …sorry, sorry…

            Understandable, have a great day.

    • FrankLaskey@lemmy.ml (9 hours ago):

      Yes, because making locally hosted LLMs actually useful means you don’t need to utilize cloud-based and often proprietary models like ChatGPT or Gemini which hoover up all of your data.

    • SuspciousCarrot78@lemmy.world (OP, 9 hours ago):

      Yes. Several reasons -

      • Focuses on making LOCAL LLMs more reliable. You can hitch it to OpenRouter or ChatGPT if you want to leak your personal deets everywhere, but that’s not what this is for. I built this to make local, self-hosted stuff BETTER.

      • Entire system operates on curating (and ticketing with provenance trails) local data…so you don’t need to YOLO request thru god knows where to pull information.

      • In theory, you could automate a workflow that does this - poll SearXNG, grab whatever you wanted to, make a .md summary, drop it into your KB folder, then tell your LLM “do the thing” (rough sketch after this list). Or even use Scrapy if you prefer: https://github.com/scrapy/scrapy

      • Your memory is stored on disk, at home, in a tamper-proof file that you can inspect. No one else can see it. It doesn’t get leaked by the LLM anywhere, because until you ask, it literally has no idea what facts you’ve stored. The content of your KBs, memory stores etc. is CLOSED OFF from the LLM.
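
      Rough shape of that SearXNG idea, if anyone wants to run with it (hypothetical sketch; assumes a local SearXNG instance with the JSON output format enabled, endpoint and paths made up):

      ```python
      # Hypothetical automation: pull SearXNG results into a .md note your KB can attach.
      import pathlib
      import requests

      def searxng_to_kb(query: str, kb_dir: str = "kb/web",
                        instance: str = "http://localhost:8888") -> pathlib.Path:
          r = requests.get(f"{instance}/search", params={"q": query, "format": "json"}, timeout=30)
          r.raise_for_status()
          results = r.json().get("results", [])[:10]
          lines = [f"# Search notes: {query}", ""]
          for hit in results:
              lines.append(f"- {hit.get('title', '')}: {hit.get('content', '')} ({hit.get('url', '')})")
          out = pathlib.Path(kb_dir)
          out.mkdir(parents=True, exist_ok=True)
          path = out / f"SEARCH_{query[:40].replace(' ', '_')}.md"
          path.write_text("\n".join(lines), encoding="utf-8")
          return path
      ```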