thanks, AI!
OFC, just go buy 1 wash and bring it home.
As 100 metres is within yelling distance, rather than chasing it up the street, the simplest approach would be to call the car wash by its name and have it come to you. Having a car wash treat on hand will make this process easier.
PhD level.
Dang. I haven’t always been able to replicate these, but every model I’ve tried gives increasingly dumb answers to this.
(These are semi-ethically sourced from small non-web non-data-center models, stolen, in turn, from the original thieves who created them from stolen data.)
So, I’d recommend taking the walk. It’s a great opportunity to stretch your legs and clear your mind before or after washing your car!
Sure, why not both? Take a leisurely stroll to the carwash, and once you reach it, jump into the driver’s seat for some washing fun! It’ll be like a little adventure where you get to combine two activities into one. 😉
Both walking and driving would be great options, depending on your personal preferences and circumstances.
The solution to the Wolf, Goat, and Cabbage Problem is to simply bring the other side of the river to the boat.
Opus gets it right every time. Sonnet gets it wrong, though.
The point isn’t that some models are better than others. The point is that yet again it’s an example that LLMs are not thinking machines and you can’t trust anything from them and people are burning the world to run a glorified auto complete.
Counterpoint: People are not thinking machines and you can’t trust anything from them and people are burning the world to run glorified slave labor.
Truly we are the AI of the natural world xD
People are thinking machines. The problem is, we aren’t a collective thinking machine. People thinking in their own self interest have caused most of the problems. It makes perfectly rational sense to burn the world if you only care about the quality of your own life.
People can only make stupid mistakes so many times. Once you’ve exited the gene pool, that’s it. Meanwhile an AI can spew statistical nonsense 24/7 without repercussion.
I trust an intelligence that managed to keep itself alive way more than one that is optimized to generate signal-shaped noise.
My point was that some models are better than others.
Sure, fine, some get this right, and what else are they getting wrong? Something more serious and harder to spot?
I agree that we should never treat these things as oracles. But how often they’re right/wrong does matter.
how often they’re right/wrong does matter.
That’s the wildest take I’ve heard on the question answering machine.
Most people get their info from forums and blog posts. Unless you limit yourself to nothing but peer reviewed papers, you probably do some kind of calculation on the legitimacy of whatever source you are perusing and verify it further if it’s something important.
deleted by creator
Morged.
Continuously
I mean the person asking the question doesn’t quite have it all there.
If asked a question like that, I would give a similar answer. There’s no point in telling the person how stupid their question is.
You’d have to be pretty fucking stupid to give a similar answer
I feel like people forget what sass is.
Ask an extremely stupid question, expect an equally stupid answer. How’s the person supposed to know, anyway?
And if you try to play the “I’m only testing the system” card, well, I’m fucking your obvious test up, good luck. You aren’t smarter than the system; someone is fucking with you for fun.
LLMs don’t have emotional states, so they can’t be sassy.
And?
no need to out yourself as dumber than a clanker.
What is sass?
So the answer should be obvious?
I know at least a couple places (full handwash+detail) that could fit 100m walking from the parking area to the counter.
I’m sure they wouldn’t like you to drive into the waiting room.
I fell for this one too at first ><