• 16 Posts
  • 1.53K Comments
Joined 1 year ago
Cake day: April 10th, 2025



  • This is not meant to be a chatbot.

    It is meant to evaluate gaming sessions of CS2 (and potentially any VAC-enabled game, maybe).

    It's an experimental prototype for improving VAC's server-side, backend analysis capabilities, to better detect cheaters and hackers.

    You don't need kernel-level access to everyone's PCs.

    You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.

    LLMs are … the entire thing that they do is handle massive inputs of data, and then evaluate that data.

    The part of an LLM that generates a response, in text form, to that data, is a whole other thing.

    They can also output… code, or spreadsheets, or images, or 3d models, or… any other kind of data.

    Like say, a printout of suspicious activity in a game session, with statistically derived confidence intervals and timestamps and analysis.
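    As a sketch of what "statistically derived confidence intervals" could look like here (the function and the numbers are my own illustration, not anything Valve has published): a Wilson score interval around a per-session rate, like headshot percentage, tightens as the sample grows, so the same suspicious rate is far more conclusive over 100 shots than over 10.

```python
import math

def wilson_interval(hits: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a rate such as headshot percentage.
    Returns (low, high) bounds at roughly 95% confidence for z = 1.96."""
    if trials == 0:
        return (0.0, 1.0)
    p = hits / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = (z * math.sqrt(p * (1 - p) / trials
                          + z * z / (4 * trials * trials))) / denom
    return (center - half, center + half)

# Same 90% headshot rate, very different certainty:
lo_big, hi_big = wilson_interval(90, 100)    # tight interval, strong signal
lo_small, hi_small = wilson_interval(9, 10)  # wide interval, weak signal
```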

    Then you have another, differently tuned LLM ingest the data the first LLM produces, and turn it into something else.

    You see the ModelEvaluation and then MetaModelEvaluation?

    That looks like what they’re doing to me.

    Detailed Server Logs -> Model Evaluation -> MetaModelEvaluation.
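    A minimal sketch of that chaining, reusing the ModelEvaluation/MetaModelEvaluation names from above and substituting trivial stubs for the actual tuned models (everything below the dataclasses is invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelEvaluation:            # first pass: per-event suspicion findings
    session_id: str
    flagged_events: list

@dataclass
class MetaModelEvaluation:        # second pass: verdict on the first pass
    session_id: str
    verdict: str

def run_pipeline(
    server_log: dict,
    evaluate: Callable[[dict], ModelEvaluation],
    meta_evaluate: Callable[[ModelEvaluation], MetaModelEvaluation],
) -> MetaModelEvaluation:
    """Detailed Server Logs -> ModelEvaluation -> MetaModelEvaluation."""
    return meta_evaluate(evaluate(server_log))

# Stubs standing in for the two differently tuned models:
def first_model(log: dict) -> ModelEvaluation:
    flagged = [e for e in log["events"] if "impossible" in e]
    return ModelEvaluation(log["session_id"], flagged)

def second_model(ev: ModelEvaluation) -> MetaModelEvaluation:
    verdict = "needs_review" if ev.flagged_events else "clean"
    return MetaModelEvaluation(ev.session_id, verdict)

result = run_pipeline(
    {"session_id": "s1", "events": ["normal_kill", "impossible_flick"]},
    first_model,
    second_model,
)
```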

    If you’ve ever run a dedicated multiplayer server and had to deal with hackers… you’re gonna be looking through server logs to sniff out nonsense.

    Server-side cheat detection, very oversimplified, is having automatic systems do that.
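    A toy version of that automated log-sniffing might flag kills arriving faster than any human plausibly could; the event shape and the threshold here are invented for illustration:

```python
def screen_log(events: list, max_kills_per_sec: float = 2.0) -> list:
    """Flag kill events that arrive faster than a plausible human rate.
    `events` is a hypothetical list of {'t': seconds, 'type': str, 'player': str}."""
    suspicious = []
    last_kill = {}  # player -> timestamp of their previous kill
    for e in sorted(events, key=lambda e: e["t"]):
        if e["type"] != "kill":
            continue
        prev = last_kill.get(e["player"])
        if prev is not None and (e["t"] - prev) < 1.0 / max_kills_per_sec:
            suspicious.append(e)
        last_kill[e["player"]] = e["t"]
    return suspicious

events = [
    {"t": 0.0, "type": "kill", "player": "p1"},
    {"t": 0.1, "type": "kill", "player": "p1"},  # 0.1s after the last one
    {"t": 5.0, "type": "kill", "player": "p2"},
]
flagged = screen_log(events)
```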



  • Valve’s customer service responses have always been mostly a canned series of bot messages.

    Their in-house support has always been 99% automated.

    It's very obvious if you've ever interacted with them at more than an occasional, superficial level.

    You have to be quite persistent to get a message from an actual human being.

    Yep, the automated messages often have what is ostensibly a human's name attached to them.

    So do all kinds of other bots, since way before ChatGPT and LLMs took off.

    What, did you think a human being actually read every single complaint report of a hacker or cheater in a video game with any kind of a massively used anti-cheat system?

    No! You have bots and analytic systems screen that shit, just the same as all our resumes on Indeed, or our activity and profiles on dating apps, have been analysed and evaluated by bots, again, since way before LLMs got as prevalent as they are today.

    Then you filter. Humans only see the odd ones that defy categorization, basically, or trigger a certain set of flags that are designated as ‘probably needs an actual human to handle this one’.
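    A toy sketch of that triage, with invented flag and category names, just to show the shape of the routing:

```python
def route_report(report: dict) -> str:
    """Decide whether an automated system handles a report or a human sees it.
    The flag names and categories are invented for illustration."""
    flags = report.get("flags", set())
    # Flags designated as 'probably needs an actual human to handle this one':
    if {"legal_threat", "payment_dispute"} & flags:
        return "human"
    # Reports that defy automatic categorization fall through to a human too:
    if report.get("category") is None:
        return "human"
    return "automated"
```

The point is just that the overwhelming majority of reports land in a well-known bucket and never reach a person.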


    This has been a tech industry standard for almost two decades.

    Valve is just now overhauling that system to use an LLM, because those are actually better than a very complex series of chained regex searches.

    The alternative would be to do what Meta or Google or Amazon do: Hire armies of tens to hundreds of thousands of offshore contractors and give them all PTSD for pitiful wages, manually evaluating everything.

    Apparently this is not widely known by people who've never worked in an enterprise-level tech company?


    Using LLMs to evaluate and assist a massive anti-cheat system is actually a great way to be able to run an anti-cheat system… without hooking directly into your kernel.

    These things are very good at pattern recognition, and if you tune them to specifically only work with inputs from the server from gaming sessions, you can significantly improve server-side/backend detection of players/clients doing things that are highly suspicious or outright impossible given the actual rules of the game.



  • Yeah see this is my whole main problem.

    The writing and story are just… ok?

    I could identify the racist tropes and body shaming just interwoven into the whole series before I hit puberty.

    Oh then of course slavery is good sometimes actually, here I invented a scenario where this is true.

    I got bored of Harry Potter, and just reread Animorphs.

    Which certainly has problematic characters and moments and events… but they are bad things that the characters have to actually deal with. It's a much more complex and engaging story overall, imo, though of course the serial format means that some books are not so great in comparison to others.

    Kinda the same thing with Redwall.








  • sp3ctr4l@lemmy.dbzer0.comtoLemmy Shitpost@lemmy.worldProgress

    Has MSFT extended their gaslighting to include:

    Actually, a highly trained, educated, and capable NASA astronaut is just not using our product correctly.

    ???

    It really is farcical the degree to which they act like you are just some kind of idiot if their horrendously designed software isn't immediately comprehensible and obvious to you.








  • Yeah, it is asinine.

    Because really, what this all boils down to is an exercise in vanity, in self-validation essentially by force.

    You can see this everywhere in society… fucking, Trump is threatening the genocide of a civilization, ultimately, to avoid having to ever admit he was wrong about anything.

    What is wild to me is that… you know like schizotypal people get a horrible rap in society, because they’re unstable and dangerous!

    Yeah, yes, it's tragic when one of them snaps and does something horrible.

    But you know what is fucking everywhere, so commonplace that it's just expected, seen as unavoidable?

    Absurdly overconfident malignant narcissists.

    They're very often literally sociopaths, and they very often do nothing other than ruin the lives of those around them to validate themselves, because they fundamentally are not capable of self-validation.

    But again this is just apparently so normal, that it only even registers at a social level when it is hyper obvious and extreme, as with Destiny or Thor/Pirate Software.

    They’re all the same personality type… never take real accountability for anything, gaslight you into thinking you somehow made a mistake or misunderstood them… when they will and do just say anything that they think will make them more highly regarded.

    It's narcissists that are the most dangerous kind of person to human society, but we hardly at all treat them the way we treat a far smaller number of people with overall far less dangerous neurotypes.

    Certainly doesn't help that the field of psychology itself is full of narcissists.


  • AAA knows that their primary competition is old games, that run better on more hardware, and are just better games.

    Via a combination of acting like a cartel in a market, and also personally being basically unimaginably vain… they distort reality around them to the greatest extent that they can, to continue to convince people their overpriced garbage is actually the hottest shit and you’re some kind of horrible bad person if you don’t pay them.

    That’s why they have entire networks of access journalists, why they constantly write essentially op-eds that follow the meta, or the meta-meta of the actual video game industry, why AAA games have marketing budgets bigger than Hollywood movies.

    It is literally a propaganda machine.

    Piracy is morally superior to feeding these nepo baby gaslighting rainbow capitalists that love to pretend they’re progressive.

    They're not progressive; they're entitled faux nobility, delusional narcissists that live in hugboxes that reward complacency and not rocking the boat.