• vermaterc@lemmy.ml (OP) · 2 days ago

      Yes, it sounds ridiculous, but how does that ratio change once you take into account the cost of hiring a programmer and implementing a niche feature, versus getting what this experiment provides at the cost of LLM inference?

      Also: we can cache and reuse endpoint implementations.

      • wischi@programming.dev · 2 days ago

        Play tic-tac-toe a few times against ChatGPT. I wouldn’t trust production code to an LLM that can’t beat a four-year-old at tic-tac-toe 🤣

      • 4am@lemmy.zip · edited · 2 days ago

        The cost of an HTTP request with a normal web server is fractions of a penny, perhaps even less.

        $50 for 1,000 requests is $0.05 per request. Per request. One page load on Lemmy can be 100 requests, so that’s $5 per page load.

        Your company is bankrupt in 24 hours.
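
        A quick sanity check on those numbers (note that $50 for 1,000 requests works out to $0.05 per request; the fraction-of-a-penny cost for a conventional request is an assumed illustrative figure):

```python
# Back-of-the-envelope comparison of LLM-generated vs conventional responses.
llm_cost_per_request = 50 / 1000    # $0.05 per request, from the $50-per-1000 figure
conventional_cost = 0.00001         # assumed "fraction of a penny" per request
requests_per_page = 100             # one Lemmy page load, per the comment

page_cost_llm = llm_cost_per_request * requests_per_page            # about $5 per page load
page_cost_conventional = conventional_cost * requests_per_page      # about $0.001 per page load

ratio = page_cost_llm / page_cost_conventional  # roughly 5000x more expensive
```

        Even granting a generous conventional-request cost, the LLM path is several thousand times more expensive per page load.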

        Yes, it’s much cheaper to hire a guy to build a feature than it is to _have an LLM hallucinate a new HTTP response in real time_ each time a browser sends a packet to your web server.

        And from a .ml user too, I’d like to think you’d see through this LLM horseshit, brother. It’s a capitalist mind trap, they’re creating a religion around it to allow magical thinking to drive profits.

      • pelya@lemmy.world · 2 days ago

        Considering that most techbro startups will be dead within a year, I’d say AI wins.
        Plus, most competent programmers already have a high resistance to technobabble bullshit, and will simply refuse to work on something like an online contacts app (are you copying Facebook or what?)