Killswitch Engineer
fubarx@lemmy.world to Programmer Humor@programming.dev · 2 days ago · 80 comments
AwesomeLowlander@sh.itjust.works · edited 1 day ago
The model ‘blackmailed’ the person because they provided it with a prompt asking it to pretend to blackmail them. Gee, I wonder what they expected.
Have not heard the one about cancelling active alerts, but I doubt it’s any less bullshit. Got a source about it?
Edit: Here’s a deep dive into why those claims are BS: https://www.aipanic.news/p/ai-blackmail-fact-checking-a-misleading
yannic@lemmy.ca · 1 day ago
I provided enough information that the relevant source shows up in a search, but here you go:
> In no situation did we explicitly instruct any models to blackmail or do any of the other harmful actions we observe.
[Lynch, et al., “Agentic Misalignment: How LLMs Could be an Insider Threat”, Anthropic Research, 2025]
AwesomeLowlander@sh.itjust.works · 1 day ago
Yes, I also already edited my comment with a link going into the incidents and why they’re absolute nonsense.
yannic@lemmy.ca · 6 hours ago
Thank you. Much appreciated. I see your point.