• pirateKaiser@sh.itjust.works · 5 points · 10 hours ago

      It’s just that you’ve reached your free quota; further disappointments will be charged at 0.0937 emotional stability per hour.

  • Taldan@lemmy.world · 48 points · 22 hours ago

    So Square Enix is demanding OpenAI stop using their content, but is 100% okay with using AI built on stolen content to make more money themselves.

    As a developer, it bothers me that my code is being used to train the AI Square Enix is now using, while they try to deny anyone else the ability to use their work.

    I could go either way on whether AI should be able to train on publicly available data, but no one should get to have it both ways.

  • mavu@discuss.tchncs.de · 37 points · 21 hours ago

    Well, good luck with that. Software development is a shit show already anyway. You can find me in my gardening business in 2027.

    • Rooster326@programming.dev · 16 points · edited · 20 hours ago

      Good luck. When the economy finally bottoms out, the first budget to go is always the gardening budget.

      You can find me in my plumbing business in 2028.

      I deal with shit daily, so it’s what we in the biz call a horizontal promotion.

      • LostWanderer@fedia.io · 39 points · 1 day ago

        Exactly, as I don’t expect QA done by something that can’t think or feel to know what actually needs to be fixed. AI is a hallucination engine that just agrees rather than pointing out issues; in some cases it might call attention to non-issues while letting critical bugs slip by. The ethical issues are still significant, and they play into why I would refuse to buy any more Square Enix games going forward. I don’t trust them to walk this back; they’re high on the AI lie. Human-made games with humans handling the QA are the only games I want.

        • NuXCOM_90Percent@lemmy.zip · 10 points · 1 day ago

          Exactly, as I don’t expect QA done by something that can’t think or feel to know what actually needs to be fixed

          That is a very small part of QA’s responsibility. Mostly it is about testing and identifying bugs that get triaged by management. The person running the tests is NOT responsible for deciding what can and can’t ship.

          And, in that regard… this is actually a REALLY good use of “AI” (not so much the generative kind). Imagine something like the old “A* algorithm plays Mario” demos, where the point is finding different paths to accomplish the same goal (e.g. a quest) and immediately having a log of exactly which steps led to the anomaly for the purposes of building a reproducer (rough sketch below).

          Which actually DOES feel like a really good use case… albeit with massive computational costs (so… “AI”).

          That said: it also has all of the usual labor implications. But from a purely technical “make the best games” standpoint? Managers overseeing a rack that is running through the games 24/7 for bugs that they can then review and prioritize seems like a REALLY good move.
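
          A minimal sketch of the idea, assuming a hypothetical deterministic sim with snapshot/restore/step/state_hash/is_anomaly methods (made-up names, not any real engine’s API), and using plain breadth-first search where a real tool would want A* with a decent heuristic:

            from collections import deque

            ACTIONS = ["left", "right", "jump", "interact"]   # toy input set

            def find_reproducer(sim, max_depth=200):
                """Search over input sequences and return the exact inputs that
                first trigger an anomaly (crash, clip, soft-lock): a reproducer."""
                sim.reset()                                   # start from a known state
                seen = {sim.state_hash()}
                queue = deque([(sim.snapshot(), [])])         # (saved state, inputs so far)
                while queue:
                    snap, inputs = queue.popleft()
                    if len(inputs) >= max_depth:
                        continue
                    for action in ACTIONS:
                        sim.restore(snap)                     # deterministic restore
                        sim.step(action)
                        if sim.is_anomaly():
                            return inputs + [action]          # this list IS the bug report
                        h = sim.state_hash()
                        if h not in seen:
                            seen.add(h)
                            queue.append((sim.snapshot(), inputs + [action]))
                return None                                   # nothing found within the budget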

          • osaerisxero@kbin.melroy.org · 4 points · 1 day ago

            They’re already not paying for QA, so if anything this would be a net increase in resources allocated just to bring the machines onboard to do the task

            • NuXCOM_90Percent@lemmy.zip · 3 points · 1 day ago

              Yeah… that is the other aspect where… labor is already getting fucked over massively, so it becomes a question of how many jobs are even going away.

    • UnderpantsWeevil@lemmy.world · 19 points · 1 day ago

      I would initially tap the brakes on this, if for no other reason than “AI doing QA” reads more like corporate buzzwords than material policy. Big software developers should already have much of their QA automated, at least at the base layer (see the sketch at the end of this comment). Further automating QA is generally better business practice, as it helps catch more bugs earlier in the Dev/Test cycle.

      Then consider that manual QA work is historically a miserable and soul-sucking job. Converting those roles to debuggers and active devs does a lot for both the business and the workforce. Compared to “AI is doing the art,” this is night and day: the very definition of the “getting rid of the jobs people hate so they can do the work they love” that AI was supposed to deliver.

      Finally, I’m forced to drag out the old “95% of AI implementations fail” statistic. I’m far more worried that they’ll implement a model that costs a fortune and delivers mediocre results than that they’ll implement an AI-driven round of end-user testing.

      Turning QA over to the Roomba AI to find the corners of the setting that snag the user would be Gud Aktuly.
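
      For what I mean by the base layer, here’s the kind of smoke test a CI box could run on every commit. load_level, run_frames, and the asserted fields are made-up stand-ins for whatever harness the engine actually exposes:

        def test_level_smoke():
            """Boot a headless build, simulate 30 seconds of play, assert the basics."""
            game = load_level("chapter_01", headless=True, seed=1234)   # stand-in harness call
            game.run_frames(60 * 30)                  # 30 seconds at 60 fps
            assert not game.crashed
            assert game.player.position.y > -100.0    # didn't fall through the floor
            assert game.framerate_p99 >= 30           # no pathological hitches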

      • Nate Cox@programming.dev · 18 points · 1 day ago

        Converting those roles to debuggers and active devs does a lot for both the business and the workforce.

        Hahahahaha… oh wait, you’re serious. Let me laugh even harder.

        They’re just gonna lay them off.

        • UnderpantsWeevil@lemmy.world · 2 points · 1 day ago

          They’re just gonna lay them off.

          And hire other people with the excess budget. Hell, depending on how badly these systems are implemented, you can end up with more staff supporting the testing system than you had doing the testing.

        • pixxelkick@lemmy.world · 1 point · 1 day ago

          The thing about QA is the work is truly endless.

          If they can do their work more efficiently, they don’t get laid off.

          It just means a better % of edge cases gets covered. Even if you made QA operate at 100x efficiency, they’d still have edge cases going uncovered.

      • binarytobis@lemmy.world · 6 points · 1 day ago

        I was going to say, this is one job that actually makes sense to automate. I don’t know any QA testers personally, but I’ve heard plenty of accounts of them absolutely hating their jobs and getting laid off after the time crunch anyway.

      • Mikina@programming.dev · 1 point · 23 hours ago

        They already have a really cool solution for that, which they talked about in their GDC talk. I don’t think there’s any need to slap a glorified chatbot into this; it already seems to work well and has just the right amount of human input to be reliable, while also leaving the “test-case replay gruntwork” to a script instead of a human.

  • Mikina@programming.dev · 29 points · edited · 23 hours ago

    Square Enix actually has a pretty sick automated QA setup already. There’s a cool talk in the GDC Vault about how they did it for the FFVII remake, and I highly recommend watching it if you’re at all interested in QA.

    It has nothing to do with AI; it’s just plain old automation, but they solve most of the problems you run into when building automated tests in a non-discrete 3D playspace, and they do it in a pretty solid way. It’s definitely something I’d love to have in the games I’m working on, as someone who worked in QA and now works in development. Having a mostly reliable way to smoke-test levels for basic gameplay, without having to torture QA into running the same test case again, is good and lets QA focus on something else - but the tools also need oversight, so it’s not really a job lost. In summary: I think the talk is cool tech and worth the watch.

    However, I don’t think AI will help in this regard; something as unreliable and random as current AI models is not a good fit for the job. You want deterministic test cases that you can quantify (something like the sketch at the end of this comment), and if something doesn’t match, an actual human to look at why. AI also probably won’t be able to find the clever corner cases and bugs that need human ingenuity.

    Fuck AI. I kind of hope this is just marketing talk and they’re really just improving the (deterministic) tools they already have (which actually are AI by definition, since they also do level exploration on top of recorded inputs), calling it “AI” to satisfy investors/management without slapping a glorified chatbot into the tech for no reason.
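
    To be concrete about what I mean by deterministic test cases, here’s a rough sketch; reset/step/query and the JSON layout are invented for illustration, not how Square Enix’s actual tooling works:

      import json

      def replay_testcase(game, path):
          """Replay a recorded input log deterministically and compare the end
          state against the values captured when the case was authored."""
          with open(path) as f:
              case = json.load(f)
          game.reset(seed=case["seed"])          # fixed seed => same run every time
          for frame_inputs in case["inputs"]:    # one entry per simulated frame
              game.step(frame_inputs)
          failures = []
          for key, expected in case["expected"].items():
              actual = game.query(key)           # e.g. "player.hp", "quest.stage"
              if actual != expected:
                  failures.append((key, expected, actual))
          return failures                        # empty == pass; otherwise a human looks at why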

  • ghost9@lemmy.world · 77 points · 1 day ago

    That’s a stupid idea. You’re not supposed to QA or debug games. You just release it, customers report bugs, and then you promise to fix the bugs in the next patch (but don’t).