You’re not productive if you don’t use a lot of AI, says guy who makes all of his money selling AI hardware

  • 8oow3291d@feddit.dk · 2 hours ago

    The database was an arbitrary example. A more relevant example would be TensorFlow layers in a neural network. As I understand it, you can in some cases get a novel solution to a problem just by choosing a smart enough combination, with the right data.

    ChatGPT can absolutely help with the grunt work of setting up the TensorFlow configuration, following your directions.
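    To make the "combination of layers" point concrete, here is a minimal configuration sketch of the kind of layer stack being described, using the `tf.keras` API. The layer types, sizes, and activations are arbitrary placeholders, not a recommendation - the point is only that the architecture is a combination of choices like these.

    ```python
    # Sketch of "choosing a combination of layers" in TensorFlow/Keras.
    # Every value here (64, 0.2, "relu", ...) is a placeholder knob.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.2),   # one of many knobs you can combine
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    ```

    Scaffolding boilerplate like this is exactly the grunt work an LLM can draft from a plain-language description.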

    • MangoCats@feddit.it · 1 hour ago

      you can in some cases get a novel solution to a problem just by choosing a smart enough combination, with the right data.

      Smart, lucky, who can tell the difference?

      • 8oow3291d@feddit.dk · 1 hour ago

        If used by an expert developer, the combinations are not just random “lucky” choices.

        • MangoCats@feddit.it · 1 hour ago

          Or, if you take the machine learning approach, you just try all the combinations and use the one(s) that perform the best.
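          The "try all the combinations and keep the best" approach is essentially a grid search over hyperparameters. A minimal sketch in plain Python - the search space and the scoring function here are hypothetical stand-ins for actually training and evaluating a model:

          ```python
          from itertools import product

          # Hypothetical search space: hyperparameter names and candidate values.
          search_space = {
              "layers": [1, 2, 3],
              "units": [32, 64],
              "learning_rate": [0.1, 0.01],
          }

          def score(config):
              # Stand-in for training a model and measuring validation accuracy;
              # peaks at layers=2, units=64, learning_rate=0.01.
              return (-abs(config["layers"] - 2)
                      - abs(config["units"] - 64) / 64
                      - abs(config["learning_rate"] - 0.01))

          def grid_search(space, score_fn):
              keys = list(space)
              best_config, best_score = None, float("-inf")
              for values in product(*(space[k] for k in keys)):
                  config = dict(zip(keys, values))
                  s = score_fn(config)
                  if s > best_score:
                      best_config, best_score = config, s
              return best_config

          best = grid_search(search_space, score)
          ```

          This exhaustive version only works while the space stays small - which is exactly the objection raised in the next reply.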

          • 8oow3291d@feddit.dk · 1 hour ago

            The world is not that simple. There are too many combinations to try. And you risk hitting local maxima, even with gradient descent.

            • MangoCats@feddit.it · 37 minutes ago

              And you risk hitting local maxima

              And there are standard strategies for that.

              The world is not that simple. There are too many combinations to try.

              And if you hit a good combination, were you smart or lucky? In a well-studied field where a lot of smart people have refined the solution set before you even read the problem? Smart or lucky - can anyone really tell the difference? And does it matter?
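              One of those standard strategies for escaping local maxima is random restarts: run a local search from several starting points and keep the best result. A toy one-dimensional sketch - the objective and the naive hill climber are stand-ins, chosen so the function has both a local and a global maximum:

              ```python
              import random

              def objective(x):
                  # Toy function: local maximum near x = -0.69, global maximum near x = 2.19.
                  return -(x ** 4) + 2 * x ** 3 + 3 * x ** 2

              def hill_climb(x, step=0.01, iters=5000):
                  # Naive local search: accept a random nudge only if it improves the objective.
                  # A single run can get stuck on whichever peak is nearest its start.
                  for _ in range(iters):
                      candidate = x + random.uniform(-step, step)
                      if objective(candidate) > objective(x):
                          x = candidate
                  return x

              def best_of_restarts(n=20):
                  # Restart from many random points and keep the best peak found.
                  random.seed(0)
                  starts = [random.uniform(-4, 4) for _ in range(n)]
                  return max((hill_climb(s) for s in starts), key=objective)
              ```

              Restarts don't guarantee the global optimum either - they just make "lucky" much more likely, which rather proves the point above.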

    • baahb@lemmy.dbzer0.com · edited · 2 hours ago

      If you are capable of giving good directions…

      I’m probably not arguing with you, and I’m not trying to regardless. You seem like you’ve tried this: watched it happen, gone “huh, neat!”, and then had it take the next step in whatever you were doing in the first place, only to find out you didn’t provide adequate requirements for your config.

      • MangoCats@feddit.it · 1 hour ago

        only to find out you didn’t provide adequate requirements for your config.

        Every software development project, ever.

        Review your requirements before starting development. Review them again after each phase of development. Address inadequacies, conflicts, ambiguities whenever you find them.

        AI is actually helpful in this process - not so much in knowing what to choose to do, but in pointing out the gaps and contradictions.

      • 8oow3291d@feddit.dk · edited · 1 hour ago

        Well, yes, that is a central point.

        I am a senior programmer. LLMs are amazing - I know exactly what I want, and I can ask for it and review it. My productivity has gone up at least 3-fold, with no decrease in quality, by using LLMs responsibly.

        But it seems to me that some people on social media just can’t imagine using LLMs in this way. They just imagine that all LLM usage is vibe coding, using the output without understanding or review. Obviously you are very unlikely to create any fundamentally new solutions if you only use LLMs that way.

        only to find out you didn’t provide adequate requirements for your config.

        Senior programmer. I know exactly what I want. The requirements I communicate to the LLM are precise and adequate.

        • MangoCats@feddit.it · 1 hour ago

          What I find LLMs doing for my software development is filling in the gaps. Thorough documented requirements coverage, unit test coverage, traceability - oh, you want a step-by-step test procedure covering every requirement? No problem. Installer scripts and instructions. Especially the stuff we NEVER did back in the late 1980s/early 1990s - LLMs are really good at all of that.

          Nothing they produce seems 100% good to go on the first pass. It always benefits from / usually requires multiple refinements which are a combination of filling in missing specifications, clarifying specifications which have been misunderstood, and occasionally instructing it in precisely how something is expected to be done.

          A year ago, I was frustrated by having to repeat these specific refinement instructions on every new phase of a project - the LLM coding systems have significantly improved since then, much better “MEMORY.md” and similar capturing the important things so they don’t need to be repeated ALL THE TIME.

          On the other hand, they still have their limits and in a larger recent project I have had to constantly redirect the agents to stop hardcoding every solution and make the solution data driven from a database.
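          The hardcoding complaint comes down to a simple design difference, sketched below with a plain dict standing in for the database table the comment describes. The function and table names are hypothetical, for illustration only:

          ```python
          # Hardcoded: every new case means another code change.
          def shipping_cost_hardcoded(region):
              if region == "EU":
                  return 10.0
              elif region == "US":
                  return 12.5
              else:
                  return 20.0

          # Data-driven: the same logic reads from a table. Here a dict stands in
          # for the database query; adding a region is a data change, not a code change.
          SHIPPING_RATES = {"EU": 10.0, "US": 12.5}  # would come from a DB query
          DEFAULT_RATE = 20.0

          def shipping_cost_data_driven(region, rates=SHIPPING_RATES):
              return rates.get(region, DEFAULT_RATE)
          ```

          An agent that reaches for the first pattern on every feature is exactly what needs redirecting.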

          • 8oow3291d@feddit.dk · 1 hour ago

            I was simply unable to convince Codex to split a patch into separate git commits in a meaningful way. There are things that just don’t work.

            Still useful for lots of stuff. Just don’t use it blind.

            • MangoCats@feddit.it · 39 minutes ago

              Never use it blind, and like I more or less said above: if you’re taking the first response, you’re using it wrong. I go at it expecting everything it says to be 80% right, finding the 20% that isn’t, telling it what’s wrong, and getting to 96% right - and if the 4% off target is a problem, refine again…

              Where it excels for me is generating long, detailed (mind-numbing) point-by-point descriptions of things - the kind of documents you can skim to see where they’re right and wrong, but would fall asleep or get a case of terminal ADHD before finishing if you wrote them on your own.