In a recent survey, we explored gamers’ attitudes towards the use of Gen AI in video games and whether those attitudes varied by demographics and gaming motivations. The overwhelmingly negative attitude stood out compared to other surveys we’ve run over the past decade.

In an optional survey (N=1,799) we ran from October through December 2025 alongside the Gamer Motivation Profile, we invited gamers to answer additional questions after they had looked at their profile results. Some of these questions were specifically about attitudes towards Gen AI in video games.

Overall, attitudes towards the use of Gen AI in video games are very negative. 85% of respondents reported a below-neutral attitude towards the use of Gen AI in video games, and a striking 63% selected the most negative response option.

Such a strongly skewed negative response is rare in the many years we’ve conducted survey research among gamers. As a point of comparison, in 2024 Q2-Q4 we collected survey data on attitudes towards a variety of game features. The chart below shows the percentage of negative (i.e., below-neutral) responses for each feature. In that survey, 79% had a negative attitude towards blockchain-based games. This helps anchor where the attitude towards Gen AI currently sits. We’ll come back to the “AI-generated quests/dialogue” feature later in this blog post, since another survey question breaks down that specific AI use.

  • Coelacanth@feddit.nu · 1 day ago

    I’m actually also working on a project using LLMs to talk to NPCs. This one doesn’t use local models, though, but online models called through a proxy using API keys, which lets you use much larger and better models.

    But yeah, it’s been interesting digging deep into the precise construction of the prompts to get the NPCs talking and behaving exactly like you want, and to make them as real and lifelike as possible.
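As a rough illustration of the kind of prompt scaffolding described above, here is a minimal sketch of building the message list for an OpenAI-compatible chat endpoint reached through a proxy. The function name, the example persona, and the proxy URL are all hypothetical, not taken from the actual mod:

```python
def build_npc_messages(npc_name, persona, world_facts, history, player_line):
    """Assemble a chat-completion message list that pins down NPC behaviour."""
    system = (
        f"You are {npc_name}. {persona}\n"
        "Stay in character at all times. Never mention being an AI.\n"
        "Known game facts you may rely on:\n"
        + "\n".join(f"- {fact}" for fact in world_facts)
    )
    messages = [{"role": "system", "content": system}]
    messages.extend(history)  # prior player/NPC turns, oldest first
    messages.append({"role": "user", "content": player_line})
    return messages

messages = build_npc_messages(
    npc_name="Sidorovich",
    persona="A gruff trader holed up in a bunker in the Cordon.",
    world_facts=["Artifacts are valuable but dangerous to collect."],
    history=[],
    player_line="Got any work for me?",
)

# The list could then be sent through a proxy with an OpenAI-compatible
# client (base_url and key here are placeholders), e.g.:
# client = openai.OpenAI(base_url="https://my-proxy.example/v1", api_key=KEY)
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
```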

    • ByteSorcerer@beehaw.org · 12 hours ago

      I’ve also experimented with this. In my experience, getting the NPCs to behave the way you want with just a prompt is hard and inconsistent, and quickly falls apart when the conversation gets longer.

      I’ve gotten much better results by starting from a small model and fine-tuning it on lore-accurate conversations (you can use your conversations with larger models as training materials for that). In theory you can improve it further with RLHF, but I haven’t tried that myself yet.
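One way to turn logged conversations with a larger model into fine-tuning data, as suggested above, is to write each conversation as a JSON-lines record in the common chat format most fine-tuning tooling accepts. This is a sketch under that assumption; the exact field names your trainer expects may differ:

```python
import json

def conversations_to_jsonl(system_prompt, conversations):
    """conversations: list of conversations, each a list of
    (player_line, npc_reply) exchange tuples."""
    lines = []
    for convo in conversations:
        messages = [{"role": "system", "content": system_prompt}]
        for player_line, npc_reply in convo:
            messages.append({"role": "user", "content": player_line})
            messages.append({"role": "assistant", "content": npc_reply})
        lines.append(json.dumps({"messages": messages}))
    return "\n".join(lines)

data = conversations_to_jsonl(
    "You are a gruff trader NPC. Stay in character.",
    [[("Got any work?", "Maybe. Depends how desperate you are.")]],
)
```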

      The downside of this is of course that you’re limited to open-weight models for which you have enough compute resources available to fine-tune them. If you don’t have a good GPU, the free Google Colab sessions can give you access to a GPU with 15GB of VRAM. The free tier has a daily limit on GPU time though, so set up your training code to regularly save checkpoints so that you can continue the training on another day if you run out. Using LoRA instead of doing a full fine-tune can also reduce the memory and compute required (or in other words, lets you use a larger and better model with your available resources).
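The checkpoint-and-resume pattern mentioned above can be sketched roughly like this. The toy "training step" stands in for a real optimizer step; with real tooling you would more likely save model/adapter state and resume via your trainer's own checkpoint mechanism:

```python
import json
import os

CKPT = "checkpoint.json"

def train(total_steps, ckpt_path=CKPT):
    # Resume from the last checkpoint if one exists.
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            state = json.load(f)
    else:
        state = {"step": 0, "loss": 10.0}

    while state["step"] < total_steps:
        state["step"] += 1
        state["loss"] *= 0.9          # stand-in for a real training step
        if state["step"] % 100 == 0:  # checkpoint at regular intervals
            with open(ckpt_path, "w") as f:
                json.dump(state, f)

    # Final checkpoint so a later session can pick up from here.
    with open(ckpt_path, "w") as f:
        json.dump(state, f)
    return state
```

If a Colab session is cut off, rerunning the same script in a later session picks up from the last saved step instead of starting over.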

      • Coelacanth@feddit.nu · 11 hours ago

        Well, what I’m working on is a mod for STALKER Anomaly, and most large models already seem to have good enough awareness of the setting of the STALKER games. I can imagine it’s a much bigger challenge if you’re making your own game set in your own unique world. I still need to insert some minor game information into the prompt, but only a paragraph or so detailing some important game mechanics.

        Getting longer-term interactions to work right is actually what I’ve been working on the last few weeks: implementing a long-term memory for game characters using LLM calls to condense raw events into summaries that can be fed back into future prompts to retain context. The basics of this system were actually already in place, created by the original mod author; I just expanded it into a true hierarchical memory system with long- and mid-term memories.
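The hierarchical memory idea described above might look something like this sketch: raw events accumulate, a buffer that fills up gets condensed into a mid-term summary by an LLM call, and mid-term summaries are in turn condensed into long-term memory. The class and method names are illustrative, not taken from the actual mod, and the `summarize` callback stands in for the real LLM call:

```python
class NPCMemory:
    def __init__(self, summarize, raw_limit=5, mid_limit=3):
        self.summarize = summarize  # callable: list[str] -> str (the LLM call)
        self.raw_limit = raw_limit
        self.mid_limit = mid_limit
        self.raw = []        # recent raw events, verbatim
        self.mid_term = []   # summaries of batches of raw events
        self.long_term = []  # summaries of batches of mid-term summaries

    def record(self, event):
        self.raw.append(event)
        if len(self.raw) >= self.raw_limit:
            # Condense a full raw buffer into one mid-term summary.
            self.mid_term.append(self.summarize(self.raw))
            self.raw = []
        if len(self.mid_term) >= self.mid_limit:
            # Condense accumulated mid-term summaries into long-term memory.
            self.long_term.append(self.summarize(self.mid_term))
            self.mid_term = []

    def context_block(self):
        """Text to prepend to future prompts so the NPC retains context."""
        return "\n".join(self.long_term + self.mid_term + self.raw)
```

With a real model, `summarize` would prompt the LLM to compress the listed events into a short in-character recollection; the tricky part, as noted, is getting that summarization prompt right.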

        But it turns out creating and refining the LLM prompts for memory management is harder than implementing the memory function itself!