I’m a big proponent of self-hosting, right to repair, and rolling your own whatever when you can. That probably started as teenage rebellion that got baked in - I was lucky enough to read both Walden and The Hobbit during a week-long cyclone lockdown several decades ago - but I suspect there’s a non-trivial overlap between that space and privacy-minded people in general.
My endgame is a self-sufficient intranet for myself and family: if the net goes down tomorrow, we’d barely notice.
I also use LLMs as a tool. True self-hosted equivalence to state-of-the-art models is still an expensive proposition, so like many, I use cloud-based tools like Claude or Codex for domain-specific heavy lifting - mostly coding. Not apologising for it; I think it’s a reasonable trade-off while local hardware catches up.
That context is just to establish where I’m coming from when I say this caught my attention today:
https://support.claude.com/en/articles/14328960-identity-verification-on-claude
To be accurate about what it actually says: this isn’t a blanket “show us your passport to use Claude.” Not yet.
The policy as written is narrower than it might first appear.
My concern isn’t what it says - it’s that the precedent now exists. OpenAI will no doubt follow suit.
Scope creep is a documented pattern with this kind of thing, and “we only use it for X” describes current intent, not a structural constraint.
Given the nature of this community, figured it was worth flagging.


I’m right there with you…but may I offer an alternative narrative in two parts, and then address the pipeline issue you raise?
The first part:
There’s a small (but real) subset of people turning their back on big corpo. Retro-tech, dumb-phones, self-hosting, Linux, right-to-repair advocates, OSS and FOSS, privacy groups … they can all smell the enshittification and are (in their own ways) pushing back. That’s not nothing.
I think the way forward is not to play the game. Big corpo will do what big corpo always does. But we can use the tools we have to make the things we want.
Will it compete with SOTA? No. But…does it need to? At an individual level, I’d argue “probably not”. It just needs to work for the individual.
More to the point, there’s something to be said about doing more with less. Constraints can bring about real innovation. If the answer cannot be “Throw more X at it” (where X is $$$, compute, whatever)…then how can you leverage the tools and intelligence you have to build what you want? I think that’s the real question.
Now for the second part:
I’m more sanguine about it because I think this is down to the individual. Look at where you are now - it’s not Reddit or Facebook :). You and I choose to be here because…reasons. We can choose to run Linux, LibreOffice, Mullvad, llama.cpp, SearXNG, Syncthing, Immich etc. for the same reasons.
I think the trick will be figuring out how to navigate from your home ecosystem into the wider world, without getting f’d in the a.
The one thing I don’t have a clean answer for is your pipeline point. If the content web collapses into AI slop - and it’s already going that way - then the human-generated signal that makes these models worth using starts to degrade. You may need to hold onto your “Good Old LLMs” for a while yet (or start training your own from scratch. There are ways and means, but that’s beyond the scope of this conversation, I think).
In any case, individual sovereignty doesn’t fix that. You can opt out personally and still live in a world where the epistemic commons has been strip-mined.
That’s…probably what WILL happen, come to think of it. Ok, fine. But partial answers already exist - cryptographic provenance for human content, federated communities being structurally harder to slop-flood (maybe).
Honestly? Nobody has solved that problem just yet. The people building the biggest models know it’s a problem and don’t have a clean answer either. Anyone who says they do is selling something.
All I can say is the only way to win is not to play the game. Which WOPR would no doubt meep-morp at.
Oh I hear you (and appreciate the response).
For me, I can’t help but think of another alternative, which I’m surprised I haven’t heard of yet …
stripping down one’s personal technological cognitive load to a stack of systems that can fit into one’s brain (like the Python mantra), focusing on learning that stack well and building sustainable, stable systems, and then just detoxing from the increasingly polluted digital information stream (protected commons, traditional formats such as books, in-person engagement … dunno).
Depends on what the end goal is, but AI seems to be about using tech more, or just opting out of sovereignty. Something like the above seems to me to be about using tech less (in the end), pushing it toward being a secondary tool rather than an end in itself.
I agree.
God help me, I’m actually reading books again.
Books.
It’s…harder than it used to be. A lot harder, actually.
But there’s something to be said about marginalia etc.
Ha yes … on the other hand, it was easy to forget how goddamn expansive non-internet information is: the whole world ran on that shit for millennia.