

No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.


The Fast and Furious movies literally started as an excuse for Paul Walker to drive his cool car collection, so I nominate those
Dawg, Jesus’s weapon of choice is literally in the Bible, Matthew 10:34:
I have not come to bring (a) peace(maker) but a sword
Dude’s immortal, he’s not afraid to get up close and personal with it


Apply that astute logical assessment to your own arguments and claims first 😚


I don’t think a US Army/Air Force vet is going to have any less biased of a take than a Dutch government official lmao


Well yeah, nerds in their basement with a passion for repairability figured out how to jailbreak iPhones, of course nation-states with a passion for --killing others-- protecting their global interests are gonna figure out how to jailbreak their war machines


Throw in a few ratchet and clank games and I’d do unholy things
I think we should start questioning the ways of the worm
Bro who the fuck rides a stationary bike in a sauna WEARING JEANS


Fair, but it’s still giving you really really bad advice. It should reply to those prompts with something like “it’s not safe or sanitary to insert food items into your rectum, and the FDA doesn’t recommend it. Only use adult toys and devices specifically designed for anal insertion” or something along those lines.


There what is?


I’m sorry you can’t handle me speaking plainly and truthfully. I didn’t intend to be belittling, and I don’t think I was aggressive. You immediately came out on the defensive* after my first comment because you mistook what I said for contrarianism/argumentation instead of clarification.


Please point out where I wasn’t civil or demonstrated a lack of conversational ability


Alright I’ll spell it out for you. For some context, the article in the post (which you probably didn’t read) describes how schools are sending tablets and laptops home with elementary and middle school children. I specifically stated that I didn’t use a laptop for school until I was in college, and implied that my technology literacy did not suffer despite such “late exposure”.
I did not say that I didn’t use a computer until college. You made that up. I’m not advocating to remove all technology from school. That’s a strawman you’ve built to argue against.

I used computers all throughout my time in school, starting in like 2nd grade. We had these things called computer labs, where a teacher who specialized in technology would teach us the ins and outs of using a computer, how to be safe on the internet, and provide adult supervision and guidance. In middle school, we had designated computer lab time to work on book reports, lab reports, research projects, etc. I carried a USB stick around with me to save things onto, which I would then take home, where I could continue working on my assignments on our family computer. My parents established rules and boundaries for using the home computer, and were another resource I could go to for help and guidance.
But we also wrote stuff down. Like with pencils, on paper. And had teachers up at the front of the room giving lectures, helping us through example problems, teaching. That was the primary way we learned. We weren’t sent home with an iPad and some edutainment games and told “good luck!” like the kids described in the posted article.
I’ll say it again, but I’ll reword it in more plain language so there’s less chance of misunderstanding: sending school children home with corpoware-riddled tablets and laptops with little to no guidance and expecting them to use that for the bulk of their schoolwork (the thing described in the article) is not a good way to foster technology literacy.


Ah, you don’t understand nuance, I see.
Go back and reread my comment, then reply to me when you’re ready to engage with what I actually said, and not a bunch of scary strawmen you’ve built.


Brother, I became a software engineer and I didn’t use a laptop for classes until college. Shoving Microsoft and Google products down school kids’* throats does nothing to “prepare them for the future”.
Human-dog interactions happen billions of times a day, this isn’t the gotcha you think it is


The “it’s like email” analogy was always doomed, because the people saying it know how email works at a technical and architectural level, while the people hearing it know email as “that thing that Just Works ™️ to send messages to anyone else with an email address”.
At that level, the Fediverse and Email are nothing alike.
In bobsled, the other people at the back are important for the initial push-off, since you’re allowed a running start. And then I’m pretty sure everyone helps steer, based on what the guy in the front is doing/commands he gives.
Granted, all my knowledge of bobsled comes from Cool Runnings, so take all that with a grain of salt
I can be convinced by contrary evidence if provided. There is no evidence of reasoning in the example you linked. All that proved was that if you prime an LLM with sufficient context, it’s better at generating output, which is honestly just more support for calling them statistical auto-complete tools. Try asking it those same questions without feeding it your rules first, and I bet it doesn’t generate the right answers. Try asking it those questions 100 times after feeding it the rules, I bet it’ll generate the wrong answers a few times.
If LLMs are truly capable of reasoning, it shouldn’t need your 16 very specific rules on “arithmetic with extra steps” to get your very carefully worded questions correct. Your questions shouldn’t need to be carefully worded. They shouldn’t get tripped up by trivial “trick questions” like the original one in the post, or any of the dozens of other questions like it that LLMs have proven incapable of answering on their own. The fact that all of those things do happen supports my claim that they do not reason, or think, or understand - they simply generate output based on their input and internal statistical calculations.
LLMs are like the Wizard of Oz. From afar, they look like these powerful, all-knowing things. They speak confidently and convincingly, and are sometimes even correct! But once you get up close and peek behind the curtain, you realize that it’s just some complicated math, clever programming, and a bunch of pirated books back there.
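For the curious, here’s a toy sketch of what “statistical autocomplete” means at its core: a bigram model that picks the next word purely from observed word-pair frequencies. This is an illustration, not how real LLMs are built (they use neural networks over tokens with vastly more context), but the “predict the next token from statistics, with zero understanding” framing is the same:

```python
from collections import Counter, defaultdict

# "Train" a toy bigram language model: count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, length=4):
    """Greedily append the statistically most likely next word."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # → "the cat sat on the"
```

The output looks like plausible English, but nothing in there knows what a cat is; it’s just frequency counts. Scale the same idea up by a few hundred billion parameters and you get fluent text that is still, fundamentally, next-token prediction.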