• fallaciousBasis@lemmy.world · 3 points · 6 hours ago

    Who gives a fuck? I can play your game “as intended” or my way. Mods make games awesome!

    If someone wants to play with AI slop mode fully enabled, that’s their choice and their prerogative. I’m assuming they bought the game, so how about they just enjoy it however the hell they want?

    From brain rot to slop. The word of the year, everybody.

  • SaharaMaleikuhm@feddit.org · 2 points · 7 hours ago

    Personally, I only care that it looks like a deepfake. And that alone is giving me the ick. Like I would pay more to have it NOT look like this.

    • 1984@lemmy.today · 32 points · 17 hours ago

      Being young today must be fun. Just missed out on all the good stuff in the ’80s, ’90s, and 2000s, only to experience this mess we have now.

      • Joanie Parker@lemmy.world · 7 points · 3 hours ago

        I was hanging out with my neighbour a few months back, listing off how fucked kids today are…

        Then I remembered he has two young boys and they’re growing up with this dystopia as their present and future.

        I apologised once I realised his family is directly affected.

        He said “Oh I already know they’re fucked.”

  • Pennomi@lemmy.world · 68 points · 18 hours ago

    I don’t see how this will stay consistent enough for art directors to sign off on it. It’s effectively just a hallucination based on your current video game frame.

    • Rhaedas@fedia.io · 19 points · 18 hours ago

      Unfortunately, the latest stuff I’ve seen is all about keeping character consistency, which is basically having a fixed frame of reference for every generation. What I don’t get, not knowing much about the details, is how LLM generation is faster than actual 3D modeling with more detail. Perhaps it is faster overall per frame to generate a 2D image vs. tracking all the polys.

      Not saying which is right to do; there’s lots of baggage with discussing AI stuff. Just wondering about the actual tech itself.

      • knightly the Sneptaur@pawb.social · 28 points · 16 hours ago

        What I don’t get, not knowing much about the details, is how LLM generation is faster than actual 3D modeling with more detail.

        It’s not. DLSS 5 takes a frame as rendered normally by your GPU and feeds it into a second $3k GPU to run the AI image transformer.

        There is no performance benefit; in fact, it adds a bit of latency to the process.
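
        If that’s right, the two stages are serial for any given frame, which is exactly where the added latency comes from. A minimal sketch of the idea (hypothetical stand-in code; Nvidia hasn’t published the actual pipeline, and both function bodies and timings are made up):

        ```python
        import time

        def render_frame(scene):
            """Stand-in for the normal GPU render pass (invented ~10 ms timing)."""
            time.sleep(0.010)
            return "raw_frame"

        def ai_transform(frame):
            """Stand-in for the image-transformer pass on the second GPU."""
            time.sleep(0.004)  # invented ~4 ms for the neural pass
            return "transformed_frame"

        start = time.perf_counter()
        # The AI pass can't begin until the frame is fully rendered.
        frame = ai_transform(render_frame("scene"))
        print(f"frame latency: {(time.perf_counter() - start) * 1000:.1f} ms")
        ```

        Pipelining can hide some of that across frames on the throughput side, but the frame you’re looking at still paid for both stages.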

        • paraphrand@lemmy.world · 11 points · 14 hours ago

          And Nvidia claims it will release this year without the need for a second 50-series card.

          Lots of bullshit being laid out by Nvidia here.

      • Eggymatrix@sh.itjust.works · 5 points · 17 hours ago

        Polys aren’t where the expensive computation is. The bottleneck is ray tracing, volumetric fog, etc., all the things that make a game look more real and natural.

        I think this DLSS stuff could potentially substitute for ray tracing and the other light/shadow/reflection/transparency effects that are very expensive to both program correctly and calculate every frame.
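
        To make that trade concrete with invented ballpark numbers (none of these are real measurements):

        ```python
        # Hypothetical per-frame budget at 60 fps, in milliseconds.
        frame_budget_ms = 1000 / 60            # ~16.7 ms per frame

        raster_ms = 6.0         # assumed cost of base geometry/rasterization
        rt_lighting_ms = 9.0    # assumed cost of ray tracing, fog, reflections
        neural_pass_ms = 3.0    # assumed cost of a neural substitute

        traditional = raster_ms + rt_lighting_ms  # 15.0 ms: barely fits the budget
        neural = raster_ms + neural_pass_ms       #  9.0 ms: plenty of headroom

        print(f"budget {frame_budget_ms:.1f} ms | "
              f"traditional {traditional:.1f} ms | neural {neural:.1f} ms")
        ```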

        My two cents

        • egregiousRac@piefed.social · 6 points · 16 hours ago

          Lighting is the area image gen struggles with most right now. Individual patches will show convincing shadows, atmosphere, etc., but the motivation and consistency are lacking. The shots from Hogwarts Legacy show that really clearly: slice out a random 10% × 10% chunk of the frame and the lighting looks more realistic, but the overall frame loses the directional lighting driven by real things in the scene.
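
          One crude way to quantify that observation: estimate a dominant shading direction from image gradients, per patch and for the whole frame, and see how much the patches disagree. A rough sketch (illustrative only; the random array stands in for a real grayscale screenshot):

          ```python
          import numpy as np

          def gradient_angle(img):
              """Mean image-gradient angle as a crude proxy for shading direction."""
              gy, gx = np.gradient(img.astype(float))
              return np.arctan2(gy.mean(), gx.mean())

          def patch_angles(img, n=10):
              """Estimate the angle for each cell of an n x n grid."""
              h, w = img.shape
              return np.array([gradient_angle(img[i*h//n:(i+1)*h//n,
                                                  j*w//n:(j+1)*w//n])
                               for i in range(n) for j in range(n)])

          frame = np.random.rand(720, 1280)  # placeholder for a real screenshot
          spread = np.std(patch_angles(frame) - gradient_angle(frame))
          # Low spread: patches agree on one global light direction.
          # High spread: locally plausible shading with no consistent source.
          print(f"patch-vs-global spread: {spread:.2f} rad")
          ```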

          • paraphrand@lemmy.world · 4 points · 14 hours ago

            I’m curious how well it handles lighting from unseen light sources that otherwise didn’t contribute as much to the scene as they should have. In other words, off-screen lights that shine into the scene but are not fully rendered by traditional means. The same goes for reflections.

            I expect a lot of nonsense being hallucinated in those areas.

        • Rhaedas@fedia.io · 4 points · 17 hours ago

          I try to avoid the overhyped and wrongly used term AI, so what’s the proper term? Related to diffusion models? Something different?

          • hobovision@mander.xyz · 1 point · 8 hours ago

            Generative AI is a decent catch-all that I think would apply to this.

            Another good option is “machine learning” or ML, but that’s fallen out of favor because it doesn’t sound as impressive as AI. But really it’s teaching a machine to do a specific task. It’s not intelligent; it’s just that we don’t fully understand how it learns.

          • kromem@lemmy.world · 9 points · 16 hours ago

            “Neural network” would be the most technically accurate term, given what they’ve announced so far.

            There’s no information on whether it’s a diffusion or a transformer architecture. Given that DLSS 4.5 introduced a transformer for lighting, my guess would be that it’s the same thing being applied more widely. But as far as I’ve seen, the technical details haven’t been released, so for the time being it’s being described as “neural rendering” using an unspecified neural network.

            https://www.nvidia.com/en-us/geforce/news/dlss-4-5-dynamic-multi-frame-gen-6x-2nd-gen-transformer-super-res/

            • REDACTED@infosec.pub · 1 point · 4 minutes ago

              I saw a mention somewhere of it doing only one pass. Stable Diffusion takes 30-100+ passes, so this sounds like fast inpainting rather than actual generation.
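
              For contrast, a toy sketch of the difference (purely illustrative; the model function is a made-up stand-in, not anything Nvidia has described):

              ```python
              import numpy as np

              rng = np.random.default_rng(0)

              def model(x):
                  """Made-up stand-in for one network evaluation."""
                  return x * 0.9

              frame = rng.random((720, 1280, 3))  # an already-rendered game frame

              # Diffusion-style generation: start from noise, refine repeatedly.
              x = rng.standard_normal(frame.shape)
              for _ in range(50):  # 30-100+ model evaluations per image
                  x = model(x)

              # Single-pass transform: one evaluation over the rendered frame,
              # which is why it can plausibly run in real time.
              y = model(frame)
              ```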

  • NONE@lemmy.world · 32 points · 18 hours ago

    Personally, I don’t know much about this technology. That said, I’ve heard that the original purpose of DLSS was to improve gaming performance, give you more FPS, and so on.

    In that sense, many of us, myself included, are wondering: how is this slop generator going to improve game performance? How is giving Grace from RE9 a totally different face with makeup on going to improve my gaming experience?

    • inclementimmigrant@lemmy.world (OP) · 16 points · 18 hours ago

      All of this upscaling, when it was presented over a decade ago, was meant to give older cards a longer lease on life. Now it’s morphed into the mandatory way to get a stable framerate, since developers can just rely on DLSS (and to a lesser extent FSR) to reach an acceptable framerate instead of optimizing.

      As for how this will improve the gaming experience? I honestly don’t see it at this point. Back when that was the original goal, sure; now, with this “ChatGPT moment for graphics”, I see it as beneficial only for corporate parasites and “shareholder value” as we wave goodbye to artistic vision and everything goes to looking like AI OnlyFans.

    • affenlehrer@feddit.org · 6 points · 18 hours ago

      I think the idea is that you could use low-resolution, low-detail models that take up less RAM and are faster to process, and have DLSS hallucinate a high-res image.
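
      A toy illustration of that idea (not how DLSS actually works internally; a trained network would replace the naive upscale here):

      ```python
      import numpy as np

      def render_low_res(h=360, w=640):
          """Stand-in for rendering the game at a cheap low resolution."""
          return np.random.rand(h, w, 3)

      def naive_upscale(img, factor=3):
          """Nearest-neighbour upscale; DLSS uses a trained network instead."""
          return img.repeat(factor, axis=0).repeat(factor, axis=1)

      low = render_low_res()       # cheap: fewer pixels to shade, less memory
      high = naive_upscale(low)    # 360x640 -> 1080x1920 output
      print(low.shape, "->", high.shape)
      ```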

    • Rhaedas@fedia.io · 2 points · 18 hours ago

      Not a fan of reimagined stuff of any sort; it usually doesn’t hit well. But from a tech standpoint I can think of ways one could use the tech to improve performance in new games. Making a game run faster or feel more realistic is usually all about fooling the player: not drawing what can’t be seen, showing hints of things that aren’t really there. Hell, that’s been true for movies and even the stage, right?

      So my thought on how this could work is to have the actual core models be lower-poly, enough for detail but not as high as the best we’ve seen, with minimal texturing. Then the generator uses that as a base to form the image it puts over the top. I still don’t see how that can be done that fast, but apparently we’re there now.

      • NONE@lemmy.world · 7 points · 17 hours ago

        The problem I see is consistency. Whatever the AI generates for a given source won’t be consistent throughout the game. Even in the original Digital Foundry video, you can see how Grace’s face looks like a totally different person depending on the distance from which it’s viewed.

        The artistic style is supposed to solve that consistency issue, but this AI is ruining it.

        (Also, in the same video, you can see they were using two 5090s to run the DLSS 5 games, so…)

  • BillyClark@piefed.social · 21 points · 18 hours ago

    I find it interesting that the AI parts of the video have very little video in them. They have the original game moving along, and then they show the AI version and mostly keep it as a still. I suspect that they did this so that you can’t do a side-by-side comparison and see that the AI version doesn’t actually play as well as the original version.

    Also, I’ve got to wonder how it must feel to be an artist who worked on one of these games, watching the thing you carefully hand-tuned to match the artistic vision of the game design get replaced by the mindless addition of wrinkles.

  • wraithcoop@programming.dev · 8 points · 16 hours ago

    I think Jensen said during a presentation a long time ago that a long-term goal was to have graphics be entirely generated. This seems like a big step toward that.

    • Yggstyle@lemmy.world · 10 points · 15 hours ago

      Nvidia realized they hit a wall on rendering… so they went full-on into something that you can’t properly benchmark, especially against competitors AND prior generations.

      Nvidia went full snake oil… and really would like people not to notice.

  • Ephera@lemmy.ml · 8 points · 17 hours ago

    Never cared for realism in games to begin with, so don’t particularly care to comment on how it looks, but I’ve been thinking that I genuinely find it creepy.

    Not just Uncanny Valley material, but also having these faces stare at you, always fully lit; it just gives me the creeps, kind of like a panopticon situation.
    I don’t fucking know if that’s my own trauma playing into it, where for the longest time, people looking at me generally meant they were about to bully me.

    But either way, I’m about to head to bed and genuinely feel like there’s a 20% chance I’ll have a mild nightmare from that shit.
    This whole AI craze has been a wild ride of all kinds of nightmare fuel, from depictions with missing or additional limbs to the weirdest warping of objects and limbs in those fucking generated videos. And the worst part is that some folks seem to just not see it, or not want to see it, so they keep using the nightmare generators.

    • Imgonnatrythis@sh.itjust.works · 2 points · 13 hours ago

      I guess, by this description, it would perhaps be good for something like Resident Evil, which strives to give you that uncanny creepiness, no?

      • Ephera@lemmy.ml · 2 points · 12 hours ago

        Don’t think the playable characters are supposed to be creepy, but yeah, a yassified zombie would probably fit right in. 😅

  • Sanguine@lemmy.dbzer0.com · 9 points · 18 hours ago

    It’s easy to avoid Nvidia products these days anyway: overpriced, marginal performance gains over AMD, and tied to AI slop globally. It’s never been so easy to vote with your wallet.

    • Shadowcrawler@discuss.tchncs.de · 2 points · 17 hours ago

      That would only work if they still gave a shit about gamers or the consumer market in general. They’ve become a full AI datacenter company; that’s where they make the money. I don’t think they’ll even be producing consumer graphics cards in five years.

  • ToiletFlushShowerScream@piefed.world · 6 points · 17 hours ago

    Aren’t video games art? Why does Nvidia think it knows better than the artists? And maybe I’m the only one, but DLSS has never made any of my games look better, ever. It’s always looked worse, and I now permanently turn that shit off and regret paying a 20 percent higher price for something I don’t use. I don’t usually agree with IGN articles, but this one hits it on the head.

  • Dariusmiles2123@sh.itjust.works · 7 points · 18 hours ago

    To be honest, it looks great, and at first I thought about how impressive it is.

    Then you think about how it modifies what was created and how it could lack consistency, with a character looking a certain way in one part of the game and different later on.

    I don’t know what to think about these technologies like FSR and DLSS…

    I don’t think I mind the fake frames as it could give you more performance on something like a Steam Deck, but the rest is tricky…

    • Player2@sopuli.xyz · 2 points · 16 hours ago

      The problem is that these ‘features’ give you the fake appearance of performance at the cost of actual performance. And they aren’t even good in the first place. Upscaling is like smearing vaseline all over your screen and saying “see how good it looks” when it’s just a blurry mess, especially with the temporal elements. Frame generation gives artificial smoothness but doesn’t help input latency, which is the part of frame rate that actually matters.

      The kicker is that it costs real frames to generate fake ones, so the game ends up looking and feeling worse.
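
      Rough numbers to make that concrete (illustrative figures, not measurements of any real game):

      ```python
      # Suppose the GPU could render 60 real fps natively, but frame-generation
      # overhead drops the real rate to 50 fps while displaying 100 fps.
      native_fps = 60
      real_fps_with_fg = 50                  # real frames lost to the overhead
      displayed_fps = 2 * real_fps_with_fg   # interpolated frames in between

      # Input latency tracks the REAL frame time, and interpolation also has to
      # hold each real frame back until the next one exists.
      native_latency_ms = 1000 / native_fps          # ~16.7 ms
      fg_latency_ms = 2 * (1000 / real_fps_with_fg)  # ~40.0 ms

      print(f"native: {native_fps} fps shown, ~{native_latency_ms:.0f} ms lag")
      print(f"framegen: {displayed_fps} fps shown, ~{fg_latency_ms:.0f} ms lag")
      ```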

    • [deleted]@piefed.world · 2 points · 17 hours ago

      I dislike upscaling; it always looks off to me and frequently makes the edges of things weird, kind of like aliasing but different. After upgrading from an RTX 3060, where I only used it for a few games that were hectic enough not to notice it as much, to a 9070 XT, I turned off upscaling globally and run everything at native resolution. Games look so much better to me without DLSS or FSR; fewer weird artifacts causing distractions.

      I’ll take 20 fewer fps to not constantly feel like something is off, even if I can’t point out exactly what it is.

  • webghost0101@sopuli.xyz · 6 points · 17 hours ago

    The upside of enshittification in newer games, and crap like this, is that I’m less interested in playing the new stuff and enjoy replaying older games all the more.

    Which also run at peak performance.