After Anthropic flat-out refused to agree to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI jumps right into bed with the United States Department of War.
I have a question about those guardrails. At any point, did any of your accounts get disabled for discussing abuse in this (or any) context?
(I'm guessing this happened zero times, which probably means those guardrails are just irritating suggestions designed to keep you prompting…)
Not cancelled. But they may have been flagged internally; I don't know.
We weren't violating their terms of service, only their built-in model guidelines. American models are usually very sensitive: they'd rather err on the side of blocking content than risk allowing questionable content that is lawful.
But even after adjusting prompts, we couldn't get reliable results, so we have to use uncensored open-weights models for many things. They're not SOTA, but they're better than nothing.