It’s bullshit. What leaked was their command-line tool source code (named “claude code”) - very juicy in itself, but it has nothing to do with their models.
it does show their general style of work, eg no checks of the source at all, complete ignorance of the capabilities of language models, and lots of pleas to not hack the user when they ask a question. with that leak i’m not surprised they think a model is “too dangerous”. they could barely stop the old one.
Oh I completely agree with that, just the jump to “a flawed model leaked” is too far. There’s already enough crap to mock, no need to make up additional stuff.
Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles. The code notes a 29-30% false claims rate in v8, an actual regression compared to the 16.7% rate seen in v4.
There were some references to experimental models not publicly available, and some % info.
https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know