After Anthropic flatly refused to apply Claude to autonomous weapons and mass surveillance of American citizens, OpenAI has jumped right into bed with the United States Department of War.
Not cancelled. But they may have been flagged internally, I don’t know.
We weren’t violating their terms of service, only their built-in model guidelines. American models are usually very sensitive. They’d rather err on the side of blocking content than risk allowing questionable content that is lawful.
Even after adjusting prompts, we didn’t get reliable results. So we have to use uncensored open-weights models for many things. They’re not SOTA, but they’re better than nothing.