The glory days of Epic Games are long gone, and Tim Sweeney is a goddamn moron.

  • Nibodhika@lemmy.world · 2 days ago

    No, the issue with “AI” is thinking that it’s able to make anything production ready, be it art, code, or dialogue.

    I do believe that LLMs have lots of great applications in a game pipeline; things like placeholders and Copilot-style suggestions for small snippets work great. But if you think that anything an LLM produces is production ready, and that you don’t need a professional to look at it and redo it (because that’s usually easier than fixing its mistakes), you’re simply out of touch with reality.

    • Mika@piefed.ca · edited · 2 days ago

      Are you even reading what I say? You are supposed to have a professional approving generated stuff.

      But it’s still AI-generated, it doesn’t become less AI-generated because a human that knows shit about the subject approved it.

      • Nibodhika@lemmy.world · 2 days ago

        This is what you said:

        Tbf AI tag should be about AI-generated assets. Cause there is no problem in keeping code quality while using AI, and that’s what the whole dev industry do now.

        At no point did you mention someone approving it.

        Also, you should read what I said: most large stuff generated by AI needs to be completely redone. You can generate a small function, or maybe a small piece of an image, if you have a professional validating that small chunk; but if you think you can generate an entire program or image with LLMs, you’re delusional.

        • Mika@piefed.ca · 2 days ago

          I mentioned it here: https://vger.to/piefed.ca/comment/2422544

          Dude, are you a software dev? Have you heard of, like, tickets? You are supposed to split bigger tasks into smaller tickets at the project-approval phase.

          LLM agents are completely capable of taking well-documented tickets and generating some semblance of code, which you then shape with a few follow-up prompts, criticising code style and issues until they are all fixed.

          I’m not being theoretical, this is how it’s done today. With MCPs into Jira and Figma, UI tickets just get about 90% done in a single prompt. Harder stuff is done in rounds of “investigate and write a .md on how to solve it” and “this is why that won’t work, do this instead”, to about 70% ready.

          • Nibodhika@lemmy.world · 1 day ago

            Sorry, I won’t go through your post history to reply to a comment; be clearer in the stuff you write.

            I’m a software engineer, and if that’s how you code you’re either wasting time or producing garbage code. That might be acceptable wherever you work, but I guarantee you it would not pass code review where I do. I do use Copilot, and it’s good at suggesting small snippets, maybe an if, maybe a function header, but even then 60% of the time I need to change what it suggested.

            Reviewing code is harder than writing it yourself. Even if I could trust the LLM to do exactly what I asked (which I can’t, not by a long shot), its solution might still be open to bugs or special cases, so I would have to read the code, understand what it tried to do, figure out the edge cases of that solution, and check whether it handled them. In short, it would take me much longer to do things via LLMs than to write them myself, because writing code is the easy part of programming; thinking through the solution, its limitations, and its edge cases is the hard part, and LLMs can’t understand that.

            The moment you describe your solution in enough detail that an LLM could possibly generate the right code, you’ve essentially written the code yourself, just in a more complicated and ambiguous format. This is what most non-technical managers fail to understand: code is just structured English, and we’re already writing something better than prompts to an LLM.

            • Mika@piefed.ca · 1 day ago

              This is literally in this thread.

              Again, your solution should already be thought out and described in tickets and an approved tech plan. If it’s not, that’s an SDLC problem.

              And it’s not true that agents can’t help with edge cases, they can. If you know which points to look at, you task the agent to analyze the specific interaction and watch which parts of the code get mentioned.

              I write far fewer symbols to the LLM than I would writing the code myself. Those symbols don’t have to be structured and they can even have typos, so I can focus my brain activity on things that actually matter.

              Plus, copilot is shit.

              I rate your post as a skill issue.

              • Nibodhika@lemmy.world · 8 hours ago

                It’s not in the thread line I’m replying to; to get to it I would have had to read another reply, and all of the replies to that, to spot yours.

                If the work you do can be fully specified in a Jira ticket, you’re a code monkey and not a software engineer; of course you can use LLMs to do your job, since you could be replaced by an LLM.

                And it’s not true that agents can’t help with edge cases, they can. If you know which points to look at, you task the agent to analyze the specific interaction and watch which parts of the code get mentioned.

                You’re missing my point entirely: it’s not that it can’t help, it’s that the solution it writes will not take edge cases into account unless you tell it to, and explaining every edge case in enough detail to be unambiguous about all of them is essentially the same as writing the code directly. Not to mention that you can’t possibly know all the edge cases of the solution it will write without seeing it, so you can’t tell it which ones to watch for before knowing what code it will write.

                I write far fewer symbols to the LLM than I would writing the code myself.

                Maybe, but then you have to review everything it wrote, so you waste more time. Give me one concrete example of something you can prompt an LLM for where the code is advanced enough to be worth it (i.e. writing the prompt and reviewing the generated code would be faster than writing the code myself) and not so generic that I could find the answer on Stack Overflow.

                Those symbols don’t have to be structured

                If you don’t structure them, the LLM might misinterpret what you meant. Structure in a language is what makes things unambiguous. It reminds me of the stupid joke: “go to the store and bring 1L of milk, and if they have eggs, bring 6”, and the programmer comes back with 6L of milk because they had eggs. Of course that’s a silly example, but anything complex enough to be worth using an LLM for is hard to describe unambiguously, covering all edge cases, in normal human speech.
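                The joke lands because code admits exactly one reading where natural language admits two. A throwaway sketch of the literal interpretation (function and variable names are made up for illustration):

```python
# "Bring 1L of milk; if they have eggs, bring 6."
# The literal-minded reading binds the "6" to milk, not eggs.
def shop(store_has_eggs: bool) -> dict:
    litres_of_milk = 6 if store_has_eggs else 1
    return {"milk_litres": litres_of_milk, "eggs": 0}

print(shop(store_has_eggs=True))   # the punchline: lots of milk, zero eggs
print(shop(store_has_eggs=False))  # just the 1L of milk
```

                In code the binding is explicit, so the ambiguity can’t survive; in a prompt it can.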

                and they can even have typos, so I can focus my brain activity on things that actually matter.

                Typos are very easy to correct: most editors will highlight them, some can even autocorrect them, and more likely you avoid most of them with tab completion anyway. I don’t waste any brain activity on that; I’m thinking about the solution and structuring it in an unambiguous way. That is what writing code is. It’s not some cryptic art of writing the proper runes to make the machine do your will, as you seem to be implying; it’s just structured thought.

                Plus, copilot is shit.

                Might be; I wouldn’t know, as it’s the only one I have available to use, but honestly I doubt the others are so much better that it makes a difference.

                I rate your post as a skill issue.

                Yup, I have absolutely no skill in using LLMs, nor will I waste my time acquiring it. Don’t get me wrong, it’s a neat tool for auto-completing small snippets, like we used to do with an actual snippet library a couple of years ago. It’s also a decent tool for navigating unknown code bases, asking it where certain parts are or how to achieve something in them. I would say that 60% of the time it gives you some good pointers; 90% of the time most of the code it writes is wrong, but at least it points you in the right direction of where to start investigating.

                I don’t expect you to understand this, since from what I’m reading here you have probably never worked on anything big enough, but a software engineer’s job is not to write code; that’s just a side effect. Our job is to solve problems. So either you’re trying to get the LLM to solve the problem for you, or you’re wasting lots of time explaining your solution in English, reading the generated code, understanding it, analyzing it, fixing any issues, and testing it, possibly multiple times, instead of explaining your solution once, in code, and testing it.

                • Mika@piefed.ca · 6 hours ago

                  you’re a code monkey

                  You write your code manually and I set up infrastructure to avoid that; which of us is the code monkey? :-)

                  I write tech plans too, from time to time.

                  concrete example

                  I already gave one. With the Jira and Figma MCPs you just say: “read ticket <link>, read the Figma, make a separate component named XYZ. Look at file QWE to follow the same code style.”