- cross-posted to:
- pcgaming@lemmy.ca
This was actually the sub-headline of the article, but I thought it was the more important part of the article.
Speaking with developers and artists at studios that have agreed to DLSS 5, including CAPCOM and Ubisoft, Insider Gaming was told that the DLSS 5 tech was revealed to them at the same time as everyone else.
“We found out at the same time as the public,” said one Ubisoft developer.
Developers at CAPCOM tell Insider Gaming that the announcement and the publisher’s involvement were particularly shocking, as CAPCOM has historically been very “anti-AI” on projects such as Resident Evil Requiem and other unannounced projects in development. Some at the publisher fear that the DLSS 5 announcement could prompt a change in the publisher’s view on generative AI and its implementation in its games.
Poor headphones take something away, but (unless they’re so cheap they pick up static) they won’t add anything to the song. What Nvidia is selling, in terms of audio, is an AI filter between the song and your headphones that enhances the sound however it sees fit. It might take something away, but more often than not it’s just going to add something to it. You want to listen to Bad Bunny, but the AI is going to generate English over the Spanish because people are more likely to understand what he’s singing about if it’s in English. If you had headphones like that, you’d throw them in the trash, because they are trash.
That actually sounds really fucking cool, if you could automatically translate songs in real time. And that would be bad because it’s not the original artistic version of the song? Are we really stooping to the level of groupthink where having options to change how you enjoy something is actually a bad thing now?
I mean if you really think it’s fucking cool to listen to these words over these words I don’t think there’s anything further to discuss.