Seems like they could be experimenting with an LLM to improve server-side anti-cheat.
LLMs are entirely the wrong tool for that. They've used neural nets in their anti-cheat for a while, but cheating has nothing to do with language, so large language models don't apply there. They could, however, use them to police the Steam forums, but only on comments that get reported; otherwise you're putting every comment through an LLM, which would cost a fuckton.