Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.
Also includes outtakes on the ‘reasoning’ models.
You seem pretty sure of that. Is your position firm or are you willing to consider contrary evidence?
Definition: https://www.wordnik.com/words/reasoning
Evidence or arguments used in thinking or argumentation.
The deduction of inferences or interpretations from premises; abstract thought; ratiocination.
Evidence: https://lemmy.world/post/43503268/22326378
I believe this clearly shows the LLM can perform something functionally equivalent to deductive reasoning when given clear premises.
“Auto-complete” is lazy framing. A calculator is “just” voltage differentials on silicon. That description is true and also tells you nothing useful about whether it’s doing arithmetic.
The question of whether something is or isn’t reasoning isn’t answered by describing what it runs on; it’s answered by looking at whether it exhibits the structural properties of reasoning: consistency across novel inputs, correct application of inference rules, sensitivity to logical relationships between premises. I think the above example shows something in that direction. YMMV.
I can be convinced by contrary evidence if provided. There is no evidence of reasoning in the example you linked. All that proved was that if you prime an LLM with sufficient context, it’s better at generating output, which is honestly just more support for calling them statistical auto-complete tools. Try asking it those same questions without feeding it your rules first, and I bet it doesn’t generate the right answers. Try asking it those questions 100 times after feeding it the rules, I bet it’ll generate the wrong answers a few times.
If LLMs are truly capable of reasoning, it shouldn’t need your 16 very specific rules on “arithmetic with extra steps” to get your very carefully worded questions correct. Your questions shouldn’t need to be carefully worded. They shouldn’t get tripped up by trivial “trick questions” like the original one in the post, or any of the dozens of other questions like it that LLMs have proven incapable of answering on their own. The fact that all of those things do happen supports my claim that they do not reason, or think, or understand - they simply generate output based on their input and internal statistical calculations.
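The "ask it 100 times" claim is easy to make concrete. A toy sketch (all probabilities here are made up for illustration): an LLM samples each token from a probability distribution, so at any temperature above zero the same primed prompt can yield different answers across runs.

```python
import random

def sample_answer(probs, rng):
    """Pick an answer according to its probability mass."""
    r = rng.random()
    cumulative = 0.0
    for answer, p in probs.items():
        cumulative += p
        if r < cumulative:
            return answer
    return answer  # fallback for floating-point edge cases

# Suppose priming pushes 90% of the mass onto the right answer.
probs = {"correct": 0.9, "wrong": 0.1}
rng = random.Random(42)
runs = [sample_answer(probs, rng) for _ in range(100)]
print(runs.count("wrong"))  # roughly 10 wrong answers expected
```

Even a model that is "almost always right" after priming will, on this picture, emit a handful of wrong answers over 100 runs, which is exactly the failure mode being predicted.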
LLMs are like the Wizard of Oz. From afar, they look like these powerful, all-knowing things. They speak confidently and convincingly, and are sometimes even correct! But once you get up close and peek behind the curtain, you realize that it’s just some complicated math, clever programming, and a bunch of pirated books back there.
Ok, if you’re willing to think together, I’ll take that in good faith and respond in kind.
“It needed the rules, therefore it’s not reasoning” is doing a lot of work in your argument, and I think it’s where things come unstuck.
Every reasoning system needs premises - you, me, a 4-year-old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises isn’t a test of reasoning, it’s a demand for magic. Premise-dependence isn’t a bug, it’s the definition.
If you want to argue that humans auto-generate premises dynamically - fair point. But that’s a difference in where the premises come from, not whether reasoning is occurring.
Look again at what the rules actually were: https://pastes.io/rules-a-ph
No numbers, containers, or scenarios. Just abstract rules about how bounded systems work. Most aren’t even physics - they’re logical constraints. Premises, in the strict sense.
It’s the sort of logic a child learns informally via play. If we don’t consider kids learning the rules by knocking cups over “cheating”, then me telling the LLM “these are the rules” in the way it understands should be fair game.
When the LLM correctly handles novel chained problems (including a 4oz cup already holding 3oz, tracking state across two operations), it’s deriving conclusions from general premises applied to novel instances. That’s what deductive reasoning is, per the definition I cited. It’s what your kid groks (eventually).
“Without the rules it fails” - without context, humans make the same errors. Ask a 4-year-old whether a taller cup holds more fluid than a rounder one. Default assumptions under uncertainty aren’t a failure of reasoning, they’re a feature of any system with incomplete information.
“It’ll fail sometimes across 100 runs” - so do humans under load. Probabilistic performance doesn’t disqualify a process from being reasoning. It just makes it imperfect reasoning, which is the only kind that exists.
The Wizard of Oz analogy is vivid but does no logical work. “Complicated math and clever programming” describes implementation, not function. Your neurons are electrochemical signals on evolved heuristics. If that rules out reasoning, it rules out all reasoning everywhere. If it doesn’t rule out yours, you need a principled account of why it rules out the LLM’s.
PS: I believe you’re wrong about the “give it 100 runs and you’ll get different outcomes” claim. With proper grounding, my local 4B model hit 0/120 hallucination flags and 15/15 identical outputs across repeated clinical test cases. Draft pre-publication data, methodology and raw outputs included here: https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/prepub/PAPER.md
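For anyone who wants to check that kind of claim themselves, the measurement is simple to sketch. This is a hypothetical harness, not the methodology from the linked paper: `ask_model` is a stand-in for whatever local model call you use (llama.cpp, Ollama, etc.), stubbed here as a deterministic reply.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    # Stub: replace with a real model call. A temperature=0 setup
    # should return the same string every time.
    return "3oz spills"

def consistency(prompt: str, n: int = 15) -> float:
    """Fraction of n runs that match the most common answer."""
    answers = Counter(ask_model(prompt) for _ in range(n))
    most_common_count = answers.most_common(1)[0][1]
    return most_common_count / n

score = consistency("A 4oz cup holds 3oz; you pour in 3oz more. How much spills?")
print(score)  # a fully consistent model scores 1.0
```

A 15/15 identical-output result corresponds to a consistency score of 1.0; anything below that is direct evidence for the variance claim.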
I’m willing to test the liquid transformations thing and collect data. I might do that anyway. That little meme test is actually really good.