I’m a big proponent of self-hosting, right to repair, and rolling your own whatever when you can. That probably started as teenage rebellion that got baked in - I was lucky enough to read both Walden and The Hobbit during a week-long cyclone lockdown several decades ago - but I suspect there’s a non-trivial overlap between that space and privacy-minded people in general.

My endgame is a self-sufficient intranet for myself and family: if the net goes down tomorrow, we’d barely notice.

I also use LLMs as a tool. True self-hosted equivalence to state-of-the-art models is still an expensive proposition, so like many, I use cloud-based tools like Claude or Codex for domain-specific heavy lifting - mostly coding. Not apologising for it; I think it’s a reasonable trade-off while local hardware catches up.

That context is just to establish where I’m coming from when I say this caught my attention today:

https://support.claude.com/en/articles/14328960-identity-verification-on-claude

To be accurate about what it actually says: this isn’t a blanket “show us your passport to use Claude.” Not yet.

The policy as written is narrower than it might first appear.

My concern isn’t what it says - it’s that the precedent now exists. OpenAI will no doubt follow suit.

Scope creep is a documented pattern with this kind of thing, and “we only use it for X” describes current intent, not a structural constraint.

Given the nature of this community, figured it was worth flagging.

  • utopiah@lemmy.ml · 7 points · 11 hours ago

    IMHO LLM usage isn’t coherent with independence. That being said, I’ve written quite a bit on self-hosting LLMs. There are quite a few tools available, like Ollama (itself relying on llama.cpp), which can both run models locally and provide an API-compatible replacement for cloud services. As you suggested though, typically at home one doesn’t have the hardware - GPUs with 100+GB of VRAM - to run the state of the art. There is a middle ground, though, between full cloud (API key, closed source) and open source at home on low-end hardware: running SOTA open models on cloud. It can be done on any cloud, but it’s much easier to start with dedicated hardware and tooling; for that, Hugging Face is great, but there are multiple options.

    TL;DR: closed cloud -> open models on cloud -> self-hosted is a progression toward independence, including training.
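
    One nice property of the Ollama route is that its local server speaks the same chat-completions wire format as the cloud APIs, so switching between them is mostly a base-URL change. A minimal sketch (the endpoint is Ollama’s default; the model name is an assumption - use whatever you’ve pulled locally):

    ```python
    import json
    import urllib.request

    # Ollama serves an OpenAI-compatible API on localhost:11434 by default.
    # "qwen2.5-coder" is a placeholder; substitute any model you have pulled.
    OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

    def build_request(prompt: str, model: str = "qwen2.5-coder") -> urllib.request.Request:
        """Build a chat-completion request identical in shape to the cloud APIs."""
        body = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode()
        return urllib.request.Request(
            OLLAMA_URL,
            data=body,
            headers={"Content-Type": "application/json"},
        )

    # To actually send it (requires a running Ollama instance):
    # with urllib.request.urlopen(build_request("hello")) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    ```

    Because the payload shape is identical, the same client code can point at a cloud provider, a rented GPU box, or your own machine just by swapping the URL.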

    • SuspciousCarrot78@lemmy.worldOP · 2 points · 11 hours ago

      Yeah, me too :)

      https://bobbyllm.github.io/llama-conductor/

      https://codeberg.org/BobbyLLM/llama-conductor

      I’m thinking about coding a cloud sidecar at the moment, with the exact feature you mentioned…but…that’s scope creep for what I have in mind.

      Irrespective of all that, I agree: an open cloud co-op could be a good way to have SOTA (or near SOTA - GLM 5.1 is about as close as we have right now) access for when needed.

      (Not teaching you to suck eggs, so this comment is for the lay-reader):

      For coding, you can do some interesting stuff where the cloud model is the “general” and the locally hosted LLM is the “soldier” that does the grunt work. We have some pretty decent “soldiers” that run on consumer-level hardware now (I still like Qwen 3 coder)…they just don’t quite have the brains to see the full/big picture for coding.
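
      The general/soldier split above can be sketched as a tiny dispatcher. This is a hypothetical illustration, not anyone’s actual implementation: both models are reduced to plain callables, the cloud “general” breaks the task into steps, and the local “soldier” grinds through each one.

      ```python
      from typing import Callable

      def plan_and_execute(task: str,
                           general: Callable[[str], str],
                           soldier: Callable[[str], str]) -> list[str]:
          """Cloud 'general' plans; local 'soldier' executes each step.

          `general` and `soldier` are any text-in/text-out callables, e.g.
          wrappers around a cloud API and a local Ollama endpoint.
          """
          # The general returns a plan, one step per line.
          plan = general(f"Break this task into steps, one per line: {task}")
          steps = [s.strip() for s in plan.splitlines() if s.strip()]
          # The soldier does the grunt work for each step.
          return [soldier(f"Implement: {step}") for step in steps]
      ```

      In practice you’d only pay cloud rates (and surrender data) for the small planning prompt, while the bulk of the token churn stays on your own hardware.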