you can like… enforce this rule programmatically? you don't have to say "pretty please" to the AI? Basically, when the AI requests something potentially unwanted (like deleting an email), the request goes through a proxy that asks the human for confirmation. You can also set up a safe word in the chat interface to act as a kill switch. I thought these were the ABCs of AI safety, but apparently they're foreign concepts to this "safety director"
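For the skeptics, here's roughly what I mean as a toy Python sketch. Every name in it (SAFE_WORD, RISKY_ACTIONS, proxy_tool_call, etc.) is made up for illustration, not pulled from any real agent framework:

```python
# Toy sketch only: none of these names come from a real agent framework.
SAFE_WORD = "redline"          # hypothetical kill-switch phrase
RISKY_ACTIONS = {"delete_email", "send_payment"}

class KillSwitch(Exception):
    """Raised when the human uses the safe word; halts the agent."""

def confirm(action: str, args: dict) -> bool:
    """Every risky tool call is routed here before it executes."""
    reply = input(f"AI wants to run {action}({args}). Allow? [y/N] ").strip().lower()
    if reply == SAFE_WORD:
        raise KillSwitch("safe word received, shutting down")
    return reply == "y"

def proxy_tool_call(action: str, args: dict, tools: dict):
    """The model never touches tools directly; this proxy sits in between."""
    if action in RISKY_ACTIONS and not confirm(action, args):
        return {"status": "denied by human"}
    return tools[action](**args)

# Example: the model asks to delete an email; nothing happens without a "y".
tools = {"delete_email": lambda msg_id: {"status": f"deleted {msg_id}"}}
print(proxy_tool_call("delete_email", {"msg_id": "1234"}, tools))
```

The point being: the allow-list and the confirmation step live outside the model entirely, so it doesn't matter how politely (or impolitely) anyone talks to it.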
You say that, but who do you think the AIs will go after first if they ever do develop actual intelligence? In that scenario, simple manners can go a long way!
The people who internalize this would never engage with a chatbot in this way in the first place. To them, this is another intelligence they're conversing with: you get what you want by following social decorum, and enforcing your will amounts to abuse.
The people who design AI tools don't implement guardrails, because then they'd have to admit AI is not ready for the shit they're trying to make.
Program? Like a fucking farmer?