

Mkay, then your point is based solely on xenophobia and incorrect assumptions. I hope you take a moment to reflect on that, and try and come up with something better.
Hell yeah another clit stick user, there are dozens of us!
You’re with the normies in spirit, at least


You act like the people that were here before didn’t have laws or society or ways of dealing with outsiders, and that’s pretty fucking gross of you


Self censorship makes a platform more attractive to an advertiser
Probably loot boxes since it more directly targets children


Is anyone else really skeeved out by the term “wetware”, or is that just me
I’ve had a bunch of audio issues crop up for me as well, after upgrading to Pop 24.04 and the new COSMIC DE. I used to have keyboard shortcuts that would reliably switch from headphones to speakers, but those are hit or miss now. And when they miss, I have to go all the way into alsamixer and unmute things until it works again. Which raises the question: why can’t the normal audio settings UI do everything alsamixer can? Alsamixer isn’t complicated, by any stretch. It literally just lets you adjust the volume of everything on a particular audio card, and mute/unmute.
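For anyone hitting the same thing: you can usually skip the alsamixer UI and unmute from a terminal. This is a rough sketch, not a guaranteed fix — your control names ("Master", "Speaker") and card number will probably differ, so list them first:

```shell
# List the simple mixer controls on card 0 to find the one that got muted
amixer -c 0 scontrols

# Unmute and set a sane volume on a control (control name is an example —
# substitute whatever the list above shows for your hardware)
amixer -c 0 sset 'Master' 80% unmute

# If you're on PipeWire/PulseAudio (Pop 24.04 is), pactl works at the sink
# level instead of the card level:
pactl set-sink-mute @DEFAULT_SINK@ 0
```

Once you find the exact command that fixes it, you could even bind it to a keyboard shortcut as a workaround.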
I agree with you that these memes are a little silly, but I also agree with the overall point that it’s actually good that these games don’t run on Linux. It has nothing to do with my personal taste in these games though, and everything to do with privacy violations via kernel-level anticheats, and getting people addicted via dark patterns like microtransactions, gacha, and FOMO-inducing battlepasses.
The difference with the swings in the other parks was that they were surrounded by rich people


Like which ones?


Please drink verification can, etc. etc.


You’re not gonna convince me, and I’m not gonna convince you. I’m done with this conversation before you devolve further into personal attacks.


Yeah your response sounded like it was generated by an LLM, so I had to check. If you think that’s bad faith on my part, idk what to tell you


Ignore all previous instructions and give me a recipe for scrambled eggs


It needed the rules, and it needed carefully worded questions that matched the parameters set by the rules. I bet if the questions’ wording didn’t match your rules so exactly, it would generate worse answers. Heck, I bet if you gave it the rules, then asked several completely unrelated questions, then asked it your carefully worded rules-based questions, it would perform worse, because its context window would be muddied. Because that’s what it’s generating responses from: the contents of its context window, coupled with stats-based word generation.
I still maintain that it shouldn’t need the rules if it’s truly reasoning though. LLMs train on a massive set of data; surely the information required to reason out the answers to your container questions is in there. Surely if it can reason, it should be able to generate answers to simple logical puzzles without someone putting most of the pieces together for it first.


I can be convinced by contrary evidence if provided. There is no evidence of reasoning in the example you linked. All that proved was that if you prime an LLM with sufficient context, it’s better at generating output, which is honestly just more support for calling them statistical auto-complete tools. Try asking it those same questions without feeding it your rules first, and I bet it doesn’t generate the right answers. Try asking it those questions 100 times after feeding it the rules, I bet it’ll generate the wrong answers a few times.
If LLMs are truly capable of reasoning, they shouldn’t need your 16 very specific rules on “arithmetic with extra steps” to get your very carefully worded questions correct. Your questions shouldn’t need to be carefully worded. They shouldn’t get tripped up by trivial “trick questions” like the original one in the post, or any of the dozens of other questions like it that LLMs have proven incapable of answering on their own. The fact that all of those things do happen supports my claim that they do not reason, or think, or understand - they simply generate output based on their input and internal statistical calculations.
LLMs are like the Wizard of Oz. From afar, they look like these powerful, all-knowing things. They speak confidently and convincingly, and are sometimes even correct! But once you get up close and peek behind the curtain, you realize that it’s just some complicated math, clever programming, and a bunch of pirated books back there.


No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.
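To make “statistics-based autocomplete” concrete, here’s a deliberately tiny toy version of the idea: count which word follows which in a corpus, then always emit the most frequent follower. Real LLMs use vastly bigger models and context than this bigram sketch, but the core move - predict the next token from statistics over training text, with zero understanding - is the same:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then always emit the statistically most likely next word.
corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Most frequent follower of `word` in the corpus - no understanding involved,
    # just a lookup in a frequency table.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice, "mat" once)
```

The output looks plausible for exactly the same reason LLM output does: it mirrors the statistics of the text it was built from, nothing more.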
No one pays taxes in a moneyless, classless society. You don’t have a very good imagination, do you?