[Image: DLSS promo screenshot]

I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they’re being run through an ultra-realistic beauty filter.

The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace’s face look “sexier” because apparently that’s what realism looks like now.

I wouldn’t be so baffled if this was some experimental setting they were testing, but they’re advertising this as the next gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.

  • Crozekiel@lemmy.zip · ↑1 · 4 minutes ago

    I don’t understand why anyone paying attention is giving nvidia any money at this point. It is abundantly clear they don’t give any fucks about the consumer GPU customer. They will charge you $1000 to sell you to OpenAI as soon as that baffling sentence somehow becomes possible.

  • Destide@feddit.uk · ↑10 · 14 hours ago

    Going to go with 0% consistency and characters flipping between multiple faces

  • orca@orcas.enjoying.yachts · ↑123 · 1 day ago (edited)

    Even if it looked good, it has zero context of the original artists’ intent. This is like having AI summarize pages of a book as you read. You’re now locked a layer away from the original artist’s work and it’s a layer controlled by corpos. No thank you.

    • Whitebrow@lemmy.world · ↑29 · 24 hours ago

      At least 2 layers.

      LLMs don’t think. They copy-paste something that’s been found repeatedly in the data they were trained on: statistical probability of words going with other words. Hell, they don’t even know what words are, much less what they mean. So it’s at least 2+ layers removed from the truth, one being the one you pointed out, and another being an amalgamation (mishmash) of the data they were trained on.
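      For contrast, the pure "which word most often follows which" approach being described here is essentially a bigram table. A toy sketch in Python (invented corpus, purely for illustration):

```python
# Minimal bigram "autocomplete": predict the next word purely from
# how often each word followed each other word in the training text.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for every word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" (follows "the" twice, vs "mat" once)
```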

      • Iunnrais@piefed.social · ↑7 · 20 hours ago

        I get that lemmy hates AI, and I’m not going to try to talk you out of that, but please stop repeating this factually incorrect myth. LLMs are not stochastic parrots, despite what you may have heard. And they do think… to a degree. Note that they’re by no means everything CEOs and tech bros want them to be, but if you’re going to criticize them, please do it accurately.

        They do know the meaning of words, but only in relation to other words. It’s how they work. It’s not a statistical thing like word frequency patterns; they’re not doing the same thing autocomplete does. Instead, they’re doing math on words in a space with thousands of dimensions, where placement indicates the meaning of the word: one vector direction indicates plurals, another indicates rudeness or politeness, another indicates frog-like, another might indicate related to 1993 Intel Pentium CPUs, etc., etc. It developed this space via training on terabytes of text, but it’s not storing a copy of that text, nor looking it up, nor copying anything from it. It’s defining words based on how they are used, then doing math on that to figure out the most appropriate thing to say next: not the most likely thing according to statistics, but the most meaningful based on the definitions of the words it understands.
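        A sketch of just the final step described here, scoring a tiny vocabulary against a context vector and softmaxing the scores into next-word probabilities; every word and number below is invented for illustration:

```python
# Toy next-word selection: dot-product each word vector against the model's
# current context vector, then softmax the scores into probabilities.
import math

word_vecs = {"frog": [0.9, 0.1], "pond": [0.8, 0.3], "cpu": [0.0, 1.0]}
context_vec = [1.0, 0.2]  # stand-in for the model's "meaning so far"

logits = {w: sum(a * b for a, b in zip(v, context_vec))
          for w, v in word_vecs.items()}
z = sum(math.exp(l) for l in logits.values())
probs = {w: math.exp(l) / z for w, l in logits.items()}

print(max(probs, key=probs.get))  # "frog" scores highest for this context
```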

        They really do not copy and paste. They do use definitions. They do think about the words in a very real way.

        They don’t apply logical consistency and fact checking. There are hacks to make them talk to themselves in a way that following the meaningful definitions of words will more likely lead to fact checking and logical consistency, but it’s not 100% fool proof.

        • Mr. Satan@lemmy.zip · ↑12 · 15 hours ago

          but if you’re going to criticize them, please do it accurately

          You should take your own advice.

          They do know the meaning of words, but only in relation to other words.

          That’s only one part of meaning, and it’s the only part LLMs have. It’s fascinating what this one part can do, but we don’t operate this way. LLMs have no world model, no logic model to associate a word with. It doesn’t think; it’s still just an input-output machine.

          It’s not a statistical thing like word frequency pattern.

          Instead, they’re doing math on words in a several hundred-thousand dimensional array where placement on this grid indicates the meaning of the word

          I’m sorry, how is this not statistics?
          The training is by its very nature statistical. We give millions of text inputs with expected outputs and tune the model until they match. How is this anything but statistics?

          It developed this array via training on terabytes of text, but it’s not storing a copy of that text, nor looking it up, nor copying anything from it

          Yes and no? Yes: it’s not storing a copy of the training data in text form. No: it most definitely can “memorize” text, and if that’s not a copy I don’t know what is.
          I could memorize foreign-script text without understanding it and then recreate it. Did I make a copy? No. Can I make a copy? Yes.

        • LurkingLuddite@piefed.social · ↑26 · 20 hours ago (edited)

          Having a number that relates words to other words is not understanding words. Stop believing the hype for fuck’s sake. What they ‘know’ is NOT knowledge. They do not know anything. Period.

          There is a reason they start to fail when trained on other slop; because they don’t know what any of it means!

          Their ‘knowledge’ comes from the basic weights of what word is most likely to follow. Period. The importance of that weight comes from humans. It is not intrinsic knowledge even after training. It is pure association, and not association like you or I do word association.

          • Whitebrow@lemmy.world · ↑17 · 19 hours ago

            Seen a bit of a rise of that sort of person since moltbook or whatever it’s called emerged, trying to sucker people into believing the random bullshit generator is sentient or cognizant in any way.

            What’s worse, homie said “nu-uh, it’s not statistical probability” and then proceeded to describe a statistical probability mesh.

            Might help a bit if we all stop slapping the AI term on everything and start calling things what they are such as scripting, large language models, cronjobs, etc.

            Trying to argue with those people just makes me sad and tired :(

          • Lojcs@piefed.social · ↑1 · 11 hours ago (edited)

            Saying that an LLM knows words is not a value judgement. It doesn’t mean “LLMs are sentient” or “LLMs are smart like humans”. It doesn’t imply they have real-world experiences. It’s just a description of what they do. That word has been used to describe much more basic kinds of information / functionality in computers already. What makes it so offensive now?

            There is a reason they start to fail when trained on other slop; because they don’t know what any of it means!

            If you taught children slop at school they would not get far either. Although training LLMs on LLM output is more akin to getting rid of books and relying on what teachers remember to teach the students.

            The importance of that weight comes from humans. It is not intrinsic knowledge even after training.

            It comes from the llm and not from the outside; that’s what intrinsic means. How is it not intrinsic knowledge? I think you mean to say that without humans to read it, an llm’s output holds no inherent value. That is true, and nobody is claiming that it does. llms don’t derive pleasure from talking like humans do, so the only value llm output has comes from the person reading it.

            Their ‘knowledge’ comes from the basic weights of what word is most likely to follow. It is pure association, and not association like you or I do word association.

            llm weights are anything but basic, but regardless, this is also true, and Iunnrais said as much:

            They do know the meaning of words, but only in relation to other words.

            The difference between human knowledge and llm knowledge is that an llm’s entire universe is words while humans understand words in relation to real world experiences. Again, nobody is claiming those two understandings are equivalent, just that they exist.

            Also, on the point of statistics: I think the way people understand statistics and the statistics used in llms are vastly different. It is true that an llm finds which word is most likely to be next, but how it does that is not a classical statistical method. An llm itself is a statistical model. When one says an llm ‘knows’ or ‘understands’, they mean it has captured abstract information in an incomprehensibly complex neural network, not dissimilar to how we do it. That it can only use that information for word prediction doesn’t change the fact that it has captured information beyond what is present in a word prediction.

            It seems to me that ‘statistics’ is often brought up to devalue llms by associating them with basic statistics. This association is wrong as I’ve explained in the previous paragraph. And themselves being a statistical model doesn’t mean their ability to express knowledge (although limited to textual domain) has to be inferior to a human’s.

            I understand the need to warn people of the limitations of llms. Their limitation is that they are text models with no concept of real life, not that they are statistical models or copy-paste machines.

            • LurkingLuddite@piefed.social · ↑5 · 5 hours ago (edited)

              Even simply using the word “know” is anthropomorphising them and is wholly incorrect.

              You are suffering from the ELIZA effect and it is just… sad.

              • Lojcs@piefed.social · ↑1 · 4 hours ago (edited)

                Computers have been getting anthropomorphised for a long time. Why is it only when talking about llms that you start clutching your pearls about it? Why do you think that verb has to be exclusive to humans? To me that seems like a strange and inconsequential thing to dig your heels in.

                And I struggle to see how you could genuinely believe I was suffering from ‘ELIZA effect’ after reading my comment. You need more nuance and less absolutism in your world view if you genuinely do.

                • petrol_sniff_king@lemmy.blahaj.zone · ↑1 · 1 hour ago

                  Why is it only when talking about llms that you start clutching your pearls about it?

                  I am of the opinion now, and this is entirely AI’s fault, that for the collective mental health of our society, a grocery store self-checkout should not even be allowed to “thank” you for your purchase.

          • jacksilver@lemmy.world · ↑1 · 18 hours ago

            They do build a representation of words and sequences of words and use that representation to predict what should come next.

            A simplistic representation is the classic embedding diagram showing how, in certain vector spaces, you can relate man/woman/king/queen/royal together (diagram not reproduced here).
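            As a rough stand-in for the diagram, a toy version of that vector arithmetic; the 3-d vectors are invented (real embeddings have hundreds or thousands of dimensions):

```python
# king - man + woman should land nearest "queen" in a (made-up) embedding space.
import math

emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.8, 0.9],
    "man":   [0.1, 0.2, 0.1],
    "woman": [0.1, 0.2, 0.9],
    "frog":  [0.5, 0.1, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
nearest = max((w for w in emb if w not in ("king", "man", "woman")),
              key=lambda w: cosine(emb[w], target))
print(nearest)  # "queen"
```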

            The thing is, these are static representations and are only bound to the information provided to the model. Meaning there is nothing enforcing real world representations and only statistically consistent representations will be learned.

            • LurkingLuddite@piefed.social · ↑4 · 5 hours ago (edited)

              They don’t “learn” anything, though. They’re ‘trained’ (still a bad term but at least the industry uses it) to spit the correct answer out.

              People, especially CEOs and advertising firms, need to stop anthropomorphizing them. They do not learn. They do not “know”. They have statistically derived associations and that’s it. That’s all.

              Holy hell ELIZA effect is in full swing and it’s beyond sad. They don’t build the association themselves. They don’t know what the representations mean. They absolutely do not know why two words are strongly associated. It’s just a bunch of math that computes a path through that precomputed vector space. That’s it.

              • jacksilver@lemmy.world · ↑2 · 5 hours ago

                I didn’t use the word learn, although that’s really just a matter of semantics. I said they build a representation of words/sequences in a vector space to understand the interplay of words.

                You can downvote me all you want, but that’s literally just the math happening behind the scenes. Whether any of that approaches something called “learning”? Probably not, but I’m not a neuroscientist.

        • jacksilver@lemmy.world · ↑3 · 18 hours ago

          You’re right that there is an internal representation for tokens and token sequences, but they also do copy. There is a whole area of research on this, and here is an example article on extracting image datasets.

      • orca@orcas.enjoying.yachts · ↑2 · 16 hours ago

        Implying that it’s the only thing that matters is dumb. If you want uncanny valley faces, go for it. I’m not interested in dumb AI permeating yet another corner of my life.

  • bridgeenjoyer@sh.itjust.works · ↑75 · 1 day ago (edited)

    Looks horrid.

    This will be the new motion plus shit that ruined all TV. Now, the kids think it IS good.

    I can’t express how much I hate motion plus and the fact that YOU CANT TURN IT OFF on a lot of TVs.

    Much like severely compressed, limited music. People today hear a dynamic song and don’t like it because it’s too peaky or hurts their ears. They want a sausage waveform.

    I’m not old, and I’m already yelling at clouds, ha. Just can’t stand these corpos brainwashing people into thinking their shit is good. It’s not.

    • Doc_Crankenstein@slrpnk.net · ↑17 · 23 hours ago

      “A bird who has lived its life in a cage learns to fear the sky”

      I hate that this is our reality. People growing up without ever knowing what true freedom feels like. Even we never truly knew it, we just had a bigger cage.

      • bridgeenjoyer@sh.itjust.works · ↑10 · 23 hours ago

        Damn. That is exactly how I’ve been feeling lately. I think a lot about how sad a childhood would be right now unless you have a really good parent teaching you that everything today sucks and is corporate trash that needs to be destroyed, like capitalism.

    • brucethemoose@lemmy.world · ↑13 · 1 day ago (edited)

      Hard disagree on motion interpolation. Bad interpolation looks awful, of course, but when it’s good, it’s like night and day to my eyes, and every TV I’ve ever used can disable it.

      Sometimes you can’t disable “jitter reduction” or whatever that’s branded as, but that’s not the same thing.

      • joelfromaus@aussie.zone · ↑5 · 21 hours ago

        First off: downvoted for a lukewarm opinion? Come on, Lemmy, be better.

        I’ve thought about this subject a lot, and my thoughts are that it boils down to whether someone has been raised on movies (specifically 24 fps) or video games (specifically 60 fps).

        For me, movies look like a jittery mess. I have two TVs, and the motion smoothing on one is very good, but I’ve never been able to get it just right on the other. They’re the same brand of TV, just a decade apart.

        • brucethemoose@lemmy.world · ↑3 · 21 hours ago (edited)

          Yeah, the ASICs in newer TVs are crazy powerful, and crazy good at it. They’re nothing like what you’d find in a phone or even a PC, and even a one-generation jump for our Sony TVs was an improvement.

          That’s what I was trying to emphasize. I think interpolation on old TVs, and maybe early versions of SVP, left a bad taste in people’s mouths. Kind of like fake HDR.


          …But I also think there’s a lot more sentiment against any kind of “processing” since the rise of AI slop.

          As an example I often cite: there was this old TV show I helped touch up for a “fan” release, a long time ago. One small component in a very long pipeline was a GAN upscaler. It worked fine. The original TV release was broken as hell, and people loved the improvement.

          Fast forward many years, and I mentioned this was used in the “remaster” still floating around, and the same subreddit went ballistic. They literally did not believe me, or cooed about the “flaws” of the original, or called it slop and against the rules and wanted me banned.

          And I suspect frame interpolation and resolution scaling in other contexts get tossed in that same bucket. Not that I blame anyone. AI does suck.

          • joelfromaus@aussie.zone · ↑2 · 21 hours ago

            Funny enough it’s actually the older of my two TVs that does it well. I think it marks a noticeable drop in product quality for that particular manufacturer. So still the same idea; that worse hardware gives bad results, but it’s not limited to the age of the TV just its component quality.

            • brucethemoose@lemmy.world · ↑2 · 21 hours ago (edited)

              Oh yeah, definitely. Lines enshittify.

              I just mean that, generally, if you look at a 2014 TV and a 2025 one, the experience of the old one is likely not representative of the new.

      • gravitas_deficiency@sh.itjust.works · ↑9 · 1 day ago (edited)

        It’s great for sports. And some sitcoms. And maybe news (but why are you even watching cable news these days). That’s it.

        Persistence of vision serves a real purpose in filmmaking. “Optimizing” it away is very literally a corruption of the art and a betrayal of the director’s and cameraman’s skill and intent.

        I’ll stick with my vintage 2010 Philips plasma 55”, thank you very much.

      • bridgeenjoyer@sh.itjust.works · ↑7 · 1 day ago

        Yeah sorry I’m not into high def TV myself. It looks awful unless all you watch is sports and brand new marvel movies (hard no).

        You may think you’re disabling it, until you compare it with another TV that actually does zero processing. Night and day.

        Same effect as me thinking “huh, I guess the lag on my flat screen isn’t too bad for gaming”, then plugging into my CRT and holy snap, the clarity and precision response. (Clarifying, this is with old and new consoles; obviously anything with an analog output into a new TV is horrible without an upscaler, but even with a RetroTINK 2X upscaler, it still sucks. You need to spend over $700 to make it look decent enough.)

        People don’t know what They took from us.

        • brucethemoose@lemmy.world · ↑5 · 1 day ago (edited)

          I have. I A/B test it all the time. I pause and pixel peep.

          And I don’t watch any sports, nor any marvel movies.

          “huh, I guess the lag on my flat screen isn’t too bad for gaming”

          I’ve had CRTs. And I have one of those “zero latency” overclocked LCD monitors with no internal scaler. As much as I like them, they feel sluggish compared to something newer.

          Yeah sorry I’m not into high def TV myself.

          In that case, I suspect you haven’t tried it on more modern displays, or when it’s baked into transcoded footage with one of the better filters.

          Yes, it looks awful and artifacty processed by older LCDs. But it looks really good these days.

          • bridgeenjoyer@sh.itjust.works · ↑3 · 24 hours ago (edited)

            Yeah, I’m not one to pay a lot for TVs. I’d like an OLED, but with the prices, I really have no need for it for gaming, and the TV I have is fine for normal watching.

            Also, isn’t it crazy how it’s taken this long for a display to be as good as a CRT (blacks- and response-time-wise)? Kind of the same thing with audio: how bad digital sucked originally, and how we are just now fixing that with great DACs. Humans got it right the first time with tube amps and CRTs! Not to mention they’re repairable.

            • brucethemoose@lemmy.world · ↑3 · 22 hours ago

              I’d like an oled, but with the prices, I really have no need for it for gaming and the TV I have is fine for normal watching.

              That is entirely fair. Electronics are all crazy expensive, really.

              Yeah, LCDs went from bad to “mixed” and stayed that way for a long time. Granted, some things like absolute sharpness are not great on a CRT, but still.

    • tomalley8342@lemmy.world · ↑3 · 19 hours ago

      I always wondered why Samsung phones and TVs had that “vivid” high-contrast color tuning turned on by default that just blows out the contrast and saturation. I thought surely no one actually prefers this kind of look. Reading some of the comments on here and on YouTube, now I understand.

  • tomalley8342@lemmy.world · ↑23 · 22 hours ago

    Two 5090s for this shit lol. The first 5090 calculates all the shadows and then the second 5090 takes it back out again lmao. What a fucking joke.

  • TheObviousSolution@lemmy.ca · ↑3 · 15 hours ago (edited)

    I wouldn’t have called this generative AI, but Jensen did. Great for stills, uncanny valley in motion. People are claiming it generates completely new images, but in this instance it keeps the same geometry and textures and just processes motion and color vectors to create a hyper-rendered version of the characters.

    Ignoring the obvious problems of the hardware it requires and of supporting the AI-bubble-feeding monopoly that is NVIDIA, it is interesting technology that doesn’t actually seem to act as a medium for IP theft, which is my beef with what I call AI slop. It might only be practically good for photo mode in games, but it will be interesting to see how it works out. It could kickstart interest in making Let’s Plays in a more Machinima style.

  • Archangel1313@lemmy.ca · ↑28 · 1 day ago

    This is exactly the opposite of what I want a graphics card doing in the background. Just leave the games the way the developers made them, for fuck’s sake. If they suck, they suck; if they don’t, they don’t. But this just makes them all suck.

  • ArbitraryValue@sh.itjust.works · ↑28 · 1 day ago (edited)

    Would you say the same thing if you didn’t know that it was AI? I think it actually looks pretty good overall, although some of the changes (like deciding that this character dyes her hair and has undyed roots) are odd.

    Edit: It seems to do a better job with the soccer player.

    Edit 2: I wonder if it works better with male faces than with female ones. It’s making the woman’s eyes and lips bigger but not the man’s.

    • brucethemoose@lemmy.world · ↑31 · 1 day ago (edited)

      Agreed.

      The effect is waaaaay too strong in those screenshots, but a more subtle version would be alright.

      And yes. It’s definitely “sexifying” the woman in the shot. Transformer img2img models are notorious for doing exactly this.

      I could speculate why. Could be that it’s (unfortunately) mostly male Tech Bros developing them? Or it could be that a massive fraction of the dataset is sexualized photos of women scraped from social media. But TBH, while I don’t know why this is the case, pretty much all diffusion models tend to “Instagram” women more than men.

      • Skullgrid@lemmy.world · ↑19 · 1 day ago

        Or it could be that a massive fraction of the dataset is sexualized photos of women scraped from social media

        Bias in, bias out.

    • MrFinnbean@lemmy.world · ↑22 · 1 day ago

      I kind of agree with you.

      It does not look bad. What I’m worried about is that the AI can’t keep up and will end up changing the look of the characters, and I hate that it will take agency away from the artists.

      • BJW@lemmus.org · ↑2 · 17 hours ago

        What of the agency of the players? Who will be sitting in the room and impacted if this optional feature is enabled in the privacy of one’s home? Will the artist, halfway around the world, suddenly wail and fall over in pain because of someone’s personal preference?

        I guess it’s the same as when people put steak sauce on a steak, and all the chefs who love the taste of bloody meat cry that the meal has been ruined… Except they’re not the ones eating it. They can enjoy their own creations however they like without getting all nitpicky about how others have their own preferences.

      • ClamDrinker@lemmy.world · ↑2 · 18 hours ago

        Realistically the artists working on it have a say into what graphics settings are allowed, and they already deal with the fact some people will need to run on very low settings, also affecting their ideal viewing conditions. If the newer DLSS really makes such sweeping changes they would either ask Nvidia for improvements, disable it, or heavily dissuade it.

        But player autonomy is also important, so it’s a balancing act. If the players end up wanting it and you take it away from them, it still won’t make sense to strip it out at the end of the day.

      • Klear@quokk.au · ↑1 · 1 day ago

        Do you have a problem with all the reshade presets for Skyrim you can find on Nexus Mods?

    • calliope@piefed.blahaj.zone · ↑12 · 1 day ago (edited)

      The popular “it looks awful” kneejerk is so telling.

      I dislike AI, but the utter delusion around it reminds me of how people complained about the internet in the 90s. There was a sensible fear, and then there were essentially Luddites.

      Everything repeats.

      • brucethemoose@lemmy.world · ↑9 · 24 hours ago (edited)

        But the younger women in those screenshots are absolutely “sexified.”

        I’m an AI evangelist as far as Lemmy goes, but that is a problem. It’s beyond a “sensible fear” problem; it’s unignorable and unacceptable. I’m kind of shocked DF didn’t point it out.

      • tomiant@piefed.social · ↑2 · 20 hours ago

        “All technological development is good because there are always people who complain and then the technological development continues regardless. There’s no point in critique, everything is always perfect and nothing you say will change it anyway so who cares.”

        • calliope@piefed.blahaj.zone · ↑1 · 19 hours ago (edited)

          “I only see things in black and white because I haven’t grown up”

          I actually blocked you on another account because you’re such a douchebag. I wish those could be exported more easily!

          • MrFinnbean@lemmy.world · ↑1 · 2 hours ago

            Dunno what the beef between you guys is, but it’s healthy to see and interact with people who have different opinions than you.

            Blocking people just because you think they’re douchebags for having a different opinion just makes you live in a bubble where you only hear things that reinforce your own views.

    • Malta Soron@sopuli.xyz · ↑2 · 19 hours ago

      I feel like the soccer player does look more like his real life counterpart (Virgil van Dijk) with DLSS off.

    • Sanctus@anarchist.nexus · ↑13 · 1 day ago

      You also apparently need a separate GPU to run this. So not only can 90% of people not afford one, but the article states they used 2 to accomplish this.

  • favoredponcho@lemmy.zip · ↑10 · 18 hours ago (edited)

    Meh, progress on graphical realism basically stalled out in the last 10 years. It was a matter of time before AI became the tech that pushes it to the next level. I personally don’t think it looks terrible. It looks more realistic if anything. If you don’t like this particular case or the beautification filter, that is one thing, but I don’t see it as refuting the use of the technology as a whole.

    Edit: yep, I clicked the post and watched the promo clip. I gotta say, it looks great. I think if you’re saying otherwise, you just aren’t being honest. Lemmy has an ax to grind about a lot of weird shit. It’s odd as hell. I couldn’t care less about the downvotes. Doesn’t change my view.

    “The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.” — George Orwell

    The Lemmy mob is no different.

  • gravitas_deficiency@sh.itjust.works · ↑7 · 1 day ago

    Almost makes me wish I hadn’t already switched to Team Red, so that I could switch to Team Red due to how comically bullshit this is (on top of their recent vibe coded driver releases)