It does show their general style of work, e.g. no checks of the source at all, complete ignorance of the capabilities of language models, and lots of pleas not to hack the user when they ask a question. With that leak I'm not surprised they think a model is "too dangerous"; they could barely stop the old one.
Oh, I completely agree with that; it's just that the jump to "a flawed model leaked" is too far. There's already enough crap to mock, no need to make up additional stuff.