This was actually the sub-headline of the article, but I thought it was the more important part of the article.

Speaking with developers and artists at studios that have signed on to DLSS 5, including CAPCOM and Ubisoft, Insider Gaming was told that the DLSS 5 tech was revealed to them at the same time as everyone else.

“We found out at the same time as the public,” said one Ubisoft developer.

Developers at CAPCOM tell Insider Gaming that the announcement and the publisher’s involvement were particularly shocking, as CAPCOM has historically been very “anti-AI” with projects such as Resident Evil Requiem and other unannounced projects in development. Some at the publisher fear that the DLSS 5 announcement could prompt a change in the publisher’s view on generative AI and its implementation in its games.

    • imjustmsk@lemmy.world · 1 point · 25 minutes ago

      They just care about making money, and the way they do that is by making investors happy, and I guess more AI slop is what will make them jump around nowadays.

      Gone are the days of companies trying to make money by providing good services (well, not all of them). They get us using the product, then make it shitty.

      F$CK ensh^ttification :(

    • phlegmy@sh.itjust.works · 4 points · 9 hours ago

      That’s the point.
      Then they’ll add more slop cores to their next generation of cards, and use that as their demo for how great their latest cards are.

      Their whole consumer business strategy since RTX has been to push the use of new rendering techniques that existing hardware is less capable of, while releasing new cards with dedicated cores designed specifically for their new techniques.
      If they didn’t have RTX or DLSS, people would still be using their 980 Tis.

  • dellhiver@sh.itjust.works · 116 points · 1 day ago

    There was an interesting post that I was linked to on Reddit, supposedly from an Assassin’s Creed dev.

    I’ll quote it here:

    "I’ve been watching the fallout of the DLSS 5 video, and wanted to check in with some game devs to check if I have been taking crazy pills, or if I have understood game dev incorrectly.

    Games are not visuals; they are game mechanics and game loops skinned in a visual interface. When we make games, we make all the things that work with our mechanics and loops visually distinct and, more importantly, repeatable.

    In Assassin’s Creed, all ledges that I can climb look visually distinct from all other ledges. In most games, outlines and color are much more important than what things look like up close. They are used to identify what we are looking at, more than to look realistic. These things are icons in the world, more than they are objects.

    Light and shadow are not just for visual pleasure; they are used to draw the eye towards objectives and where you should go.

    In short, there is information in the visual representation of the game mechanics that tells players what they should do and where they should go.

    When I see video games processed through DLSS 5, I see game information stripped away, making games less playable and more confusing. I could understand having this in a photo mode, but why on earth should we have this in any of our games if we don’t know what it will change things to? Or if it will even remain consistent the next time you look at it?

    Will it remove the yellow paint on my Assassin’s Creed ledges, or perhaps only up-rez the rest of the assets and make the yellow ledges stand out like a sore thumb? Will it remove scars that are story-relevant from an RPG character? Will it smooth out a wall that is supposed to look like it can be destroyed? There are so many visually important things in games that I know this thing won’t adhere to.

    Did no one involved in making this video understand Game Design or Art Design?"

    • Ephera@lemmy.ml · 5 points · 18 hours ago

      Will it smooth out a wall that is supposed to look like it can be destroyed?

      Yeah, at the very least, it will throw a whole bunch of details into the general area, which will make it harder to tell what’s interactable.

      We’ve had photorealistic games before, by taking literal photographs and using those as point-and-click levels. You practically don’t see that anymore these days, because not being able to tell what’s interactable was a major weakness.

      Doesn’t mean that DLSS 5 or the like will strictly have the same problem, but it certainly feels like these companies are trying to throw in photorealism again, with no regard for the cost.

    • PlzGivHugs@sh.itjust.works · 18 points · 1 day ago

      From my understanding, it may be possible to work around some of this, since the program is meant to hook into the game in a number of different ways. It’s very possible that an “importance” mask could be added as an input, for example. This wouldn’t fix everything, but it would still give a way to separate game elements from environmental details.
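      Such a mask is pure speculation, but the mechanics would be simple: the engine marks gameplay-critical pixels (e.g. from an object-ID buffer it already renders), and the filter’s output is blended back to the original wherever the mask is set. A rough sketch, with all IDs and names made up for illustration:

      ```python
      import numpy as np

      # Hypothetical: object IDs the designers flag as gameplay-critical,
      # e.g. climbable ledges and destructible walls.
      GAMEPLAY_IDS = {3, 7}

      def importance_mask(id_buffer):
          """id_buffer: (H, W) int array of per-pixel object IDs.
          Returns a (H, W) float mask: 1.0 on gameplay-critical pixels."""
          return np.isin(id_buffer, list(GAMEPLAY_IDS)).astype(np.float32)

      def protect(original, enhanced, mask):
          """Composite: critical pixels keep their original colour,
          everything else takes the AI filter's output."""
          m = mask[..., None]  # broadcast over RGB channels
          return m * original + (1.0 - m) * enhanced
      ```

      A real integration would need the engine to render that mask every frame, which is exactly the kind of extra authoring work people are arguing about.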

      That said, there’s been so much focus on how it looks. IMO, it’s completely overblown, especially when all of this needs to be manually configured on a game-by-game basis. Devs can tweak the settings to their own preferences and make things more or less extreme.

      The part that’s much more worthy of mockery is the fact that they’re demoing a consumer product on professional-grade hardware, during a hardware shortage. They couldn’t even get the demo working on a high-end gaming PC, and they think this tech is worth advertising? That is the funny part of all this.

      • Ech@lemmy.ca · 40 points · edited · 24 hours ago

        That said, theres been so much focus on how it looks. IMO, its completely overblown, especially when all of this needs to be manually configued on a game-by-game basis. Devs can tweak the settings to their own preferences, and make things more or less extreme.

        It’s wild that every defense of this bs is “Just have devs spend even more time finetuning for this.” Yes, let’s double (or more) the workload of artists and programmers that are already overworked and crunched beyond reason, all for a “feature” that looks like garbage in its showcase demo and that’s so resource intensive that very few users will be able to utilize it, if they even want to.

        • PlzGivHugs@sh.itjust.works · 2 points · edited · 23 hours ago

          It’s more an argument against the “artist’s intent” and “disrupting gameplay” points.

          Yes, let’s double (or more) the workload of artists and programmers

          Do you have any evidence for this? Given what’s been shown, this seems relatively easy to implement on the game dev side.

          • Quetzalcutlass@lemmy.world · 21 points · 23 hours ago

            Even if implementing it turns out to be trivial, testing art assets for quality and consistency will be a nightmare. Especially if the underlying generative AI isn’t deterministic.

            • Katana314@lemmy.world · 11 points · 18 hours ago

              Even if implementing it is trivial, it’s also still “one more thing”. Just like optimizing for the Steam Deck, considering features that might not be on the lowest-tier console release, accessibility requirements, and dozens of other checklist items that might go further and further down the list. Worse, if DLSS ends up interfering with those other checklist items after it’s already been verified.

              • PlzGivHugs@sh.itjust.works · 1 point · 18 hours ago

                Yes, but what the tech costs to implement has a huge impact on what it is, and how (or if) it’s ever implemented. So far as I can tell from my own research, the original commenter was lying, which makes sense. If it actually increased dev time that much, even Nvidia wouldn’t be stupid enough to try and sell it. “AI graphics costs $10 million to implement and has negligible impact on sales” would not look good for their bubble.

            • PlzGivHugs@sh.itjust.works · 2 points · 22 hours ago

              Yes, depending on implementation details. I mean, it’s never going to be completely consistent, but I don’t expect these companies to mind a little brand damage if they get a short-term boost in investment.

              I’m more thinking that as it stands, the hardware requirements make it DOA for users. They’re saying they’ll improve it, although I have my doubts. That said, even if no one can run it, it may be popular among publishers for screenshots and marketing. On the other hand, if it does actually double dev costs, then it’ll be DOA even for corporate use.

      • nightlily@leminal.space · 6 points · 20 hours ago

        From everything Nvidia has said, the inputs are simply the final pixel colour values and motion vector information. It’s meant to sit in the same post-processing stack as the upscaler. It’s effectively a screen-space post-processing filter over the final image. Nvidia have said that the artist controls are masking (blocking certain areas from it), intensity (so a slider value), and some kind of colour re-grading (since it destroys the original grading). It’s extremely limited.
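        Taken at face value, those three controls reduce to a per-pixel blend plus a colour correction. A toy sketch of what such a pass might look like; the function and parameter names are assumptions, not Nvidia’s actual API, and the re-grade here is a crude mean/std match standing in for whatever they actually do:

        ```python
        import numpy as np

        def filter_pass(frame, ai_output, mask, intensity):
            """Hypothetical artist controls: blend gated by a mask and a
            global intensity slider, then re-grade toward the original.

            frame, ai_output: (H, W, 3) float arrays in [0, 1]
            mask: (H, W) float array, 1.0 = blocked from the filter
            intensity: slider value in [0, 1]
            """
            # Blend toward the AI output everywhere the mask allows.
            w = (intensity * (1.0 - mask))[..., None]
            blended = frame + w * (ai_output - frame)

            # Re-grade: pull the blended image's per-channel mean/std back
            # to the original frame's, approximating "restore the grading".
            mu_o, sd_o = frame.mean(axis=(0, 1)), frame.std(axis=(0, 1)) + 1e-8
            mu_b, sd_b = blended.mean(axis=(0, 1)), blended.std(axis=(0, 1)) + 1e-8
            return np.clip((blended - mu_b) / sd_b * sd_o + mu_o, 0.0, 1.0)
        ```

        At intensity 0, or with the whole frame masked, the pass degenerates to a no-op, which is what makes it “extremely limited” as a creative control.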

        • cheat700000007@lemmy.world · 9 points · 19 hours ago

          And Nvidia are full of shit, judging from how it clearly changes geometry in the demos, women’s faces in particular.

        • PlzGivHugs@sh.itjust.works · 2 points · 18 hours ago

          The inputs from everything Nvidia has said, are simply the final pixel colour values and motion vector information.

          If it is the same as DLSS 4 Super Resolution, it seems to use motion vectors, colour buffers, depth buffers, and camera information like exposure. That said, this might change, as, like I said, they’re showing off something they haven’t even got running on the target hardware. It’s clearly not even close to being a finished product.

  • red_tomato@lemmy.world · 75 points · 1 day ago

    Anyone who thinks DLSS 5 is a good thing doesn’t understand art. It’s just a filter that removes all artistic intention. Like taking someone else’s artwork, painting all over it, and then claiming it’s “better”.

    • 1984@lemmy.today · 2 points · 11 hours ago

      Like the filters people use on Instagram to make themselves look perfect. All of this is pretty sick, but nobody seems to think it’s a problem.

    • cmhe@lemmy.world · 2 points · edited · 10 hours ago

      “Better” is in the eye of the beholder. DLSS 5 is optional, as are the shader and texture mods that have been available for many games for ages. They both change the look of the game in ways the people creating them didn’t intend. I don’t really care about what the creators of games intended; I want to have fun playing, and I’m okay with changing/modding the game until I have more fun. That is “better” to me.

      DLSS 5 probably doesn’t matter to me anyway, since Nvidia and their AI business centipedes don’t actually want to sell GPUs to consumers anymore.

      (If you downvote, I would be really interested in hearing your argument. From my POV you either dislike people modding their game, or are a hypocrite. If it is about hating Nvidia and the current AI bubble, I’m with you there.)

      • Holytimes@sh.itjust.works · 1 point · 13 hours ago

        Gamers remove shadows all the fucking time by choice. My experience shows that 99% of gamers would willfully and happily play on the lowest settings with a 5090, because big fps matters more than actually getting any fucking value out of the hardware.

        • Larry13@piefed.social · 1 point · 48 minutes ago

          That will depend on the game. For something competitive like CS2, yeah, FPS above all for most players. But for more immersive games like Expedition 33 or other “cinematic” games, I think most players will play at the highest settings that still give them a stable 60+ fps.

    • Ech@lemmy.ca · 26 points · edited · 23 hours ago

      That’s unfortunately a lot of people, going by the amount of “fixed” (i.e. sloppified) art and photos I see going around online.

      • morphite88@thelemmy.club · 6 points · 23 hours ago

        All those “filters” are preinstalled and pushed on the users. It’s kinda hard to fault them when they push the “make pretty” button, but I do take umbrage when my family members post pictures of “me” online that look like someone else. It’s a weird time to be alive.

        • Ech@lemmy.ca · 3 points · 17 hours ago

          I can fault them for choosing to push the button and for just accepting that the result is “pretty” without rebuke. For defending the end result against those of us criticizing it. The companies may be responsible for the pushed adoption of this ridiculousness, but people are the ones happily going along with it and actively defending it.

    • Jakeroxs@sh.itjust.works · 1 point · 22 hours ago

      That’s why only studio quality headphones are even worth listening to music on, if you’re not hearing exactly what was intended then it’s shit garbage. Too bad if you’re poor

      • eyes@lemmy.world · 18 points · 21 hours ago

        This is kinda a bad example; it’s more akin to listening to a bad cover of the original song. Also, this tech sure as shit isn’t going to run on cheap hardware, which makes it even more egregious.

        • Jakeroxs@sh.itjust.works · 1 point · edited · 20 hours ago

          No, because the original information is still there, it’s just filtered on top. Exactly like how listening to the same audio on different headphones can sound completely different.

          Edit: Because a dip/rise in a certain frequency can completely change the sound of any individual element of sound.

          • abbotsbury@lemmy.world · 5 points · 20 hours ago

            If you put new information on top of a pixel, the pixel is changed and it is no longer the original information. Your headphones example would be more accurately applied to the visual medium as running custom color profiles, like adjusting saturation and contrast. The original information is there (music waveform or pixel color) but affected by delivery (bass boost or colorblind adjustments).

            • Jakeroxs@sh.itjust.works · 1 point · 19 hours ago

              I’m not sure I understand the difference when DLSS is a toggle.

              You made exactly my point in your last sentence.

              • YewEyeOwe31@lemmy.world · 7 points · 19 hours ago

                The DLSS 5 effect is less like a different pair of headphones that don’t have a flat response and more like if your music player added AI generated instruments to the songs in your music library. I think that was what the previous poster was arguing (I agree with them).

                Part of me wonders if it is internally consistent, or if Leon’s face changes just a little every time he pops up in a new scene in the new RE with DLSS on.

                • Jakeroxs@sh.itjust.works · 2 points · 19 hours ago

                  But it’s details, not entire extra characters, so it’s literally not “adding instruments”; it’s attempting to sharpen details based on prior frames’ values for various parts of the image.

              • abbotsbury@lemmy.world · 1 point · 15 hours ago

                You made exactly my point in your last sentence.

                Then you didn’t understand it because that doesn’t apply to DLSS.

      • red_tomato@lemmy.world · 13 points · 21 hours ago

        Low-quality headphones don’t add sounds that don’t exist in the original track. The thing with DLSS is that it adds details that don’t exist in the original image.

          • Goodeye8@piefed.social · 5 points · edited · 20 hours ago

            Poor headphones take something away, but (unless they’re so cheap they pick up static) they won’t add anything to the song. What Nvidia is selling, in audio terms, is an AI filter between the song and your headphones that enhances the sound however it sees fit. It might take something away, but more often than not it’s just going to add something. You want to listen to Bad Bunny, but the AI generates English over the Spanish, because people are more likely to understand what he’s singing about if it’s in English. If you had headphones like that, you’d throw them in the trash, because they are trash.

            • Jakeroxs@sh.itjust.works · 1 point · 19 hours ago

              That actually sounds really fucking cool. If you could automatically translate songs in real time? That would be bad because it’s not the original artistic version of the song? Are we really stooping to that level of groupthink, where having options to change how you enjoy something is actually a bad thing now?

            • Jakeroxs@sh.itjust.works · 1 point · 19 hours ago

              Does that matter when all the arguments I’m seeing against it are “it’s not the original vision of the artist”, as if most of the corporate garbage games had any soul to begin with?

              So anything that changes the original vision is now bad? I remember when people complained about games not having a way to disable bloom or chromatic aberration or whatever; that somehow wasn’t taking away from the “original artistic vision”, but now we have to get out our pitchforks?

              • red_tomato@lemmy.world · 4 points · 19 hours ago

                Imagine listening to a song you know, but the headphones keep adding new instruments and sounds that don’t belong in it. It’s not consistent either: every time you listen to the track, it hallucinates new instruments. The artist was never part of these sounds.

                That’s DLSS 5.

                • Jakeroxs@sh.itjust.works · 1 point · 18 hours ago

                  Adding instruments would be more like entire NPCs appearing that weren’t there; it’s more like frequency-expanding compressed/lossy audio.

      • nightlily@leminal.space · 2 points · 20 hours ago

        Studio-quality headphones tend to have a flat response curve, which is not what professional music producers master for, so no, that’s a poor argument.

      • nutsack@lemmy.dbzer0.com · 2 points · edited · 20 hours ago

        Mixing headphones aren’t expensive. Industry-standard headphones cost less than a lot of consumer-grade headphones. Don’t ask me to list examples, but I’ll do it if you want.

        • Jakeroxs@sh.itjust.works · 1 point · 19 hours ago

          I mean I own a pair of sennheiser HD800, let’s compare audio quality.

          But I’m obviously not saying it’s a good argument; I figured the sarcasm was evident. I think the “it’s not the original intention of the artist” argument is a bad one.

          There are plenty of legitimate arguments against DLSS, such as companies not properly optimizing their games because they can just make it “good enough” and tell people to use DLSS. That is obviously bad.

          Adjusting literally any of the many possible settings in a game “takes away from the original artistic vision”, yet generally we see people complain if certain options to their taste/needs aren’t present.

          • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            19 hours ago

            Adjusting literally any of the possible settings in a game “takes away from the original artistic vision”

            Those settings don’t completely alter what the art looks like. They change the method by which the math behind the scenes works. Setting shadows from Ultra to Low doesn’t remove the shadows; it just alters how they are rendered. Often this does not really affect the appearance at all.

            Any game with any actual design put into it would account for these as part of the artistic vision and intention.

            • Jakeroxs@sh.itjust.works · 1 point · 18 hours ago

              I don’t see how you could argue adjusting graphical settings in a game doesn’t change how the art looks.

              • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social
                link
                fedilink
                English
                arrow-up
                3
                ·
                edit-2
                18 hours ago

                First of all:

                Any game with any actual design put into it would account for these and also be part of the artistic vision and intention.

                Second: there is a difference between aesthetics and graphics. The terms are not interchangeable. You adjust the graphics; this does not affect the aesthetics. All those signs get a little blurrier each time you lower the setting, but the art style is still uniform.

                • Jakeroxs@sh.itjust.works · 1 point · 18 hours ago

                  Okay, let’s try something more specific: what about tint-filter-remover mods for games like Fallout 3 and Fallout: New Vegas?

                  Do you have the same vehement opposition to those mods, since they take away from the initial vision? Or do you understand that some people want to be able to enjoy a game the way they prefer, or, in the case of DLSS, to be able to run games at a higher quality than their hardware would otherwise be able to push?

                  What about Minecraft shaders/texture packs? Are those horrible because they take away from the “original vision”? I just think it’s a really stupid argument.

  • Scrubbles@poptalk.scrubbles.tech · 20 points · 1 day ago

    Guh. Look, I’m not against AI when it’s used in a good place. Hot take: DLSS 5.0 features could have an interesting place. HOWEVER, before you mass-downvote me:

    Nvidia is not the one to decide when a game should use something like this. This should be a fine-tuning option, used only if the developer thinks it will legitimately make their game slightly better, and only for super-realistic games. Think Cyberpunk, where main characters are all super detailed but background NPCs are more or less fairly low-poly and not detailed. There it might be good. However, then it comes up against the real kicker, which is that the limitations of those engines and the hardware of the time are what made artists think about their decisions. They made design choices that drove how their game would look. As I said in another thread, Master Chief’s now-iconic armor exists because they had such heavy restrictions. It’s a few triangles of green at the end of the day.

    Added to a few games where it’s been tested and fits the artistic style? I’d classify that as upscaling, a proven use of AI that fits within the DLSS brand. Slapped into every game to make all men beefy hunkcakes and all women look like OF models? That’s when it’s slop.

    Sorry Jensen, you’re pushing the slop angle, and that makes me sour to the concept.

    • jwiggler@sh.itjust.works · 3 points · edited · 1 day ago

      Honestly, I felt the giddiness that DF did in that first video. It was exciting not just because it did look like a huge upgrade to realism, but because this is another Pandora’s box moment for genAI. Exciting in a holy-shit way, not in an “I can’t wait” way. Also, I believed it when they said it was just changing the lighting. Looking at the Grace model more, along with people’s comments, there has to be more than lighting going on here, and with Jensen’s comments, I think that seems clearer.

      He really is pushing the slop angle, and, well, I agree with you. I’ve soured on the whole idea. I think there’s been too much backlash toward DF, though.

  • SkyNTP@lemmy.ml · 4 points · 21 hours ago

    Starfield is a great example of a game that has amazing, immersive visuals but the crappiest gameplay imaginable. All style, no substance. In the end it still makes for an overall crappy experience.

    I can’t think of a more fitting title to showcase this AI tech.

        • subignition@fedia.io · 7 points · 1 day ago

          Kind of wild that it’s crowd-sourced data and still such a severe ratio. Thanks for the link!

          • EarMaster@lemmy.world · 9 points · 23 hours ago

            The extension extrapolates the dislikes based on the YouTube like counter and the ratio of extension users who liked and disliked the video. So the dislike count is based on extension users’ opinions. While I am sure the dislikes overwhelm the likes, I don’t think this is a representation of the real data.

      • BakedCatboy@lemmy.ml · 6 points · 1 day ago

        Yes, Return YouTube Dislike, made by Dmitry Selivanov, basically maintains its own like/dislike database using like/dislike data from those using the extension, then applies that ratio to the public number of likes to estimate how many total dislikes there are among everyone, including those not using the extension, assuming the data from extension users is representative of all people who liked/disliked. Overall it works well enough to tell what people’s sentiment is, and the more people use it, the closer you get to the actual number of dislikes.
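        The extrapolation described is just a ratio estimate: scale the dislike-to-like ratio observed among extension users up to the public like count. A sketch of the arithmetic (not the extension’s actual code; the numbers in the example are made up):

        ```python
        def estimate_dislikes(public_likes, ext_likes, ext_dislikes):
            """Estimate total dislikes from the public like count and the
            likes/dislikes recorded among extension users.

            Assumes extension users vote the same way as everyone else."""
            if ext_likes == 0:
                return 0  # no basis to extrapolate from
            return round(public_likes * ext_dislikes / ext_likes)

        # e.g. 120,000 public likes; extension users logged 400 likes and
        # 9,200 dislikes -> estimated 2,760,000 total dislikes
        ```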

        • iamthetot@piefed.ca · 2 points · 23 hours ago

          Surely that only tells you the sentiment of people… who also use the add-on. I don’t really see how that can correlate to the overall public sentiment.

          • BakedCatboy@lemmy.ml · 2 points · edited · 23 hours ago

            It doesn’t always! That’s why I specifically said it assumes the people using the addon are representative of the overall population.

    • inclementimmigrant@lemmy.world (OP) · 14 points · 1 day ago

      I think that when it was presented, over a decade ago now, the claim was that DLSS/FSR was there to give more life to older video cards, and that was a good thing.

      What it’s morphed into, where it’s a mandatory crutch and now comes with AI slop, I do think is a crap thing.

      • warm@kbin.earth · 4 points · 23 hours ago

        Yep, existing for old cards would be great, but it was never even supported properly on old cards. Instead, it became a crutch for game performance on all GPUs, and Nvidia started locking newer versions behind their new hardware and paying developers to implement it and advertise it.

    • keimevo@lemmy.world · 2 points · 1 day ago

      I actually like DLSS <= 4.5, when it’s well used. It’s just a scaling technique, like bilinear or 2xSaI, but instead of using a regular mathematical formula to calculate the interpolated pixels, it uses a neural network. Of course the final results vary, depending on how much of the image you interpolate, the training data, and whether you use previous-frame data and stuff like that (motion vectors, etc.).
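      For contrast with the learned approach, a classical scaler computes each new pixel with a fixed formula over its neighbours. A minimal bilinear sampler (simplified, grayscale only) looks like this:

      ```python
      def bilinear_sample(img, x, y):
          """Sample a grayscale image (list of rows of numbers) at
          fractional coordinates with plain bilinear interpolation --
          the kind of fixed formula a neural upscaler replaces."""
          x0, y0 = int(x), int(y)
          x1 = min(x0 + 1, len(img[0]) - 1)   # clamp at the right edge
          y1 = min(y0 + 1, len(img) - 1)      # clamp at the bottom edge
          fx, fy = x - x0, y - y0             # fractional offsets
          top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
          bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
          return top * (1 - fy) + bot * fy
      ```

      An upscaler just evaluates this at a denser grid of coordinates; DLSS swaps the weighted average for a network’s prediction.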

      OTOH, DLSS 5, sloptracing or whatever you want to call it, doesn’t seem to be a scaling technique (even if it most likely can do that too). It seems to be a video enhancing technique, with stability features included (anchored to 3d objects) to avoid the common morphing artifacts in early video GenAI (pre-Sora 2).

  • peacefulpixel@lemmy.world · 8 points · 1 day ago

    no corporation under capitalism, especially a multibillion-dollar one like CAPCOM, is “anti-AI.” GenAI is a get-out-of-jail-free card for doing what the games industry has done for years now: pushing more bloated, buggier, and blurrier games while charging more for it. it’s also great for devaluing human labour. even if it can’t replace it, doesn’t matter, that’s not the point. you lay off thousands of people, “replace their jobs with AI”, and then when that inevitably doesn’t work you hire humans again for worse benefits and pay. this is enshittification, and it’s the natural process of capitalism.