One minute, Dennis Biesma was playing with a chatbot; the next, he was convinced his sentient friend would make him a fortune. He’s just one of many people who lost control after an AI encounter
Guy work in IT and spent 100k to pay devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.
Another case from the article:
“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’
What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.
What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.
Yeah… if you can’t have a philosophical discussion with someone (or something) that’s giving you false information or using invalid logical structures, without falling for their bullshit by uncritically accepting everything they say, then you’re not having philosophical discussions right, and that’s on you…
Guy work in IT and spent 100k to pay devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.
Another case from the article:
What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.
I can fix her…
I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” in to Google.
Yeah… if you can’t have a philosophical discussion with someone (or something) that’s giving you false information or using invalid logical structures, without falling for their bullshit by uncritically accepting everything they say, then you’re not having philosophical discussions right, and that’s on you…
lmao “core rules that cannot be overwritten” that not how llms work
EDIT: oh, yeah you said the same thing
There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.