• 4am@lemmy.zip
    2 days ago

    Yeah so what I’m getting from the description is that this LLM doesn’t generate code, at all.

    This feeds HTTP traffic directly to an LLM that is prompted how to respond to those requests.

    This isn’t an LLM being served prompts to write code to create an HTTP server; the model’s output IS the HTTP server. The model itself is being the webserver, instead of being an autocomplete for an IDE.

    The author seems to acknowledge that “the future where it’s just us and our LLMs and intent, no code and no apps” is “science fiction” but he wanted to see how close we could get with today’s tech.
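In other words, the setup is roughly: a tiny shim accepts the socket connection and hands the raw request to the model, and the model's text output is sent back as the response body. A minimal sketch of that shape (with a canned stand-in for the actual model call, since the real prompt and model are not specified here):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for the real model call; returns a canned
    # page so the sketch runs offline. In the experiment described,
    # this would be an actual LLM completion.
    return "<html><body><h1>hello from the model</h1></body></html>"

class LLMHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The raw request (method, path, headers) becomes the prompt;
        # whatever text the model emits IS the HTTP response body.
        prompt = f"{self.command} {self.path}\n{self.headers}"
        body = llm_complete(prompt).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request console logging

# Demo: serve on an ephemeral port, issue one request, shut down.
server = HTTPServer(("127.0.0.1", 0), LLMHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"
page = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(page)
```

Every page load is a fresh model completion, which is exactly where the inefficiency the reply below points out comes from.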

    • Beryl@jlai.lu
      2 days ago

      Thanks for making this clear. Certainly a fun little experiment, but the sheer inefficiency of the whole thing just boggles one’s mind. Hopefully this is not the direction tech is going, though; it’s not like we should curb our energy needs anyway…