I’m a big proponent of self-hosting, right to repair, and rolling your own whatever when you can. That probably started as teenage rebellion that got baked in - I was lucky enough to read both Walden and The Hobbit during a week-long cyclone lockdown several decades ago - but I suspect there’s a non-trivial overlap between that space and privacy-minded people in general.

My endgame is a self-sufficient intranet for myself and family: if the net goes down tomorrow, we’d barely notice.

I also use LLMs as a tool. True self-hosted equivalence to state-of-the-art models is still an expensive proposition, so like many, I use cloud-based tools like Claude or Codex for domain-specific heavy lifting - mostly coding. Not apologising for it; I think it’s a reasonable trade-off while local hardware catches up.

That context is just to establish where I’m coming from when I say this caught my attention today:

https://support.claude.com/en/articles/14328960-identity-verification-on-claude

To be accurate about what it actually says: this isn’t a blanket “show us your passport to use Claude.” Not yet.

The policy as written is narrower than it might first appear.

My concern isn’t what it says - it’s that the precedent now exists. OpenAI will no doubt follow suit.

Scope creep is a documented pattern with this kind of thing, and “we only use it for X” describes current intent, not a structural constraint.

Given the nature of this community, figured it was worth flagging.

  • lsjw96kxs@sh.itjust.works
    9 hours ago

    Personally, I would like to use AI, but I don’t because it’s not local. I know there are local models that could do things, but I don’t know which ones are good for each task. If someone can give me pointers, I’d be grateful - for example, a good model for local coding :)

    • lime!@feddit.nu
      9 hours ago

      depends on your hardware and your preferred language. i think wizardcoder is a pretty common choice but the smallest useful version is around 14GB so you need the vram to accommodate it.
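
      For a rough sense of what “needing the VRAM to accommodate it” means, here’s a back-of-envelope sketch. The rule of thumb (parameters × bytes per weight, plus some overhead) is a common approximation, not an exact figure - real usage varies by runtime and quantization format, and the 15B model size below is just illustrative:

      ```python
      # Back-of-envelope VRAM estimate for loading a model's weights.
      # Rule of thumb: parameter count x bytes-per-weight, plus ~15%
      # overhead for activations and runtime buffers. Illustrative only.

      def weight_vram_gb(params_billions: float, bits_per_weight: float,
                         overhead: float = 0.15) -> float:
          bytes_total = params_billions * 1e9 * (bits_per_weight / 8)
          return bytes_total * (1 + overhead) / 1e9

      # A hypothetical ~15B coder model at common quantization levels:
      for bits in (16, 8, 4):
          print(f"{bits}-bit: ~{weight_vram_gb(15, bits):.0f} GB")
      ```

      Quantizing from 16-bit down to 4-bit cuts the weight footprint roughly 4x, which is why a model that won’t fit at full precision can still be usable on a single consumer GPU.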

    • SuspciousCarrot78@lemmy.worldOP
      9 hours ago
      1. How much VRAM do you have?
      2. Which GPU?
      3. What sort of coding do you want to do?

      No point in telling you “yo, dude, just grab MinMax 2.7 or GLM5.1”…unless you happen to have several GPUs running concurrently with a total combined VRAM pool of 500GB or more.

      There are strong local contenders (like Qwen3-Coder-Next), but the table ante is probably in the 45GB VRAM range just to load them up. Actually running them with a decent context length likely means you need to be in the 80-100GB range.
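
      The gap between “loads” and “runs with decent context” is mostly the KV cache, which grows linearly with context length. A rough sketch - the layer count, head count, and head dimension below are illustrative, not any specific model’s real config:

      ```python
      # Rough KV-cache size: 2 (K and V) x layers x kv_heads x head_dim
      # x context_tokens x bytes-per-element. Dimensions are made up
      # for illustration, not taken from any real model.

      def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                      context_tokens: int, bytes_per_elem: int = 2) -> float:
          elems = 2 * layers * kv_heads * head_dim * context_tokens
          return elems * bytes_per_elem / 1e9

      # Hypothetical 60-layer model, 8 KV heads of dim 128, fp16 cache:
      for ctx in (8_192, 32_768, 131_072):
          print(f"{ctx:>7} tokens: ~{kv_cache_gb(60, 8, 128, ctx):.1f} GB")
      ```

      Going from an 8K to a 128K context multiplies that cache 16x, on top of the weights - which is where the jump from ~45GB to 80-100GB comes from.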

      Doable…but maybe pay $10 on OpenRouter first to test-drive them before committing to $2000+ worth of hardware upgrades.

      There are other, more reasonable, less hardware-dependent uses for local LLMs, but if you want fully local coders, it’s the same old story: pay to play (and that’s even if you don’t mind slow speeds / overnight batch jobs).

      Right now, cloud-based providers are hemorrhaging money because they know it will lead to lock-in (i.e. people get used to what SOTA models can do, forgetting the multi-million-dollar infrastructure required to run them). Then, once users realize they can’t quite do the same with local gear (at least not without spending $$$), providers can ratchet prices up.

      Codex pro-plan just went to $300/month.

      We’ve seen this playbook before, right?