• Zacryon@feddit.org · 2 points · 19 hours ago

    Depending on the application case and benchmark, being 0.1 to 0.3% better than other SOTA approaches can still be statistically highly significant. Even though such a number doesn't look like much, it can mean a large leap forward in practice.

    Anyway, I would add a machine-learning paper type: one that was written by an LLM, and nobody cared to call that out in peer review.
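    A rough illustration with made-up numbers (nothing from the thread): on a large shared test set, even a 0.2-point accuracy gap clears conventional significance thresholds easily. The sketch below uses a simple two-proportion z-test; a paired test such as McNemar's would be the stronger choice here, since both models score the same examples.

    ```python
    import math

    # Hypothetical numbers: two models evaluated on the same
    # 1,000,000-example test set, accuracies 95.0% vs 95.2%.
    n = 1_000_000
    p_a, p_b = 0.950, 0.952

    # Two-proportion z-test with a pooled standard error.
    p_pool = (p_a + p_b) / 2
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    z = (p_b - p_a) / se
    p_value = math.erfc(z / math.sqrt(2))  # two-sided normal tail

    print(f"z = {z:.2f}, p = {p_value:.1e}")  # z ≈ 6.55, p ≈ 5.8e-11
    ```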

  • CheeseNoodle@lemmy.world · 19 points · 1 day ago

    “Our model has no sense of permanence or real understanding of what words even mean, and we re-interpreted this as the ability to lie.”

  • Fiery@lemmy.dbzer0.com · 26 points · 2 days ago

    To be fair, an argument can be made for the Lego-block one: using a novel combination of existing technologies to get better results is how nearly all innovation in machine learning happens.

    • addie@feddit.uk · 8 points · 1 day ago

      Proving a thing that’s only known empirically is extremely valuable, too. We’ve an enormous amount of evidence that the Riemann hypothesis is correct - we can compute as many zeros on the critical line as we like, in fact - but proving it is a different matter.
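      For a sense of what “known empirically” means here: the mpmath library can locate nontrivial zeros numerically, and every zero found so far sits on the critical line Re(s) = 1/2. A minimal sketch of that check (my example, not the commenter’s) - evidence, not proof:

      ```python
      # Empirically checking zeros of the Riemann zeta function with mpmath.
      from mpmath import mp, zeta, zetazero

      mp.dps = 25  # working precision in decimal digits
      for n in range(1, 4):
          rho = zetazero(n)              # n-th nontrivial zero, real part 1/2
          print(n, rho, abs(zeta(rho)))  # |zeta(rho)| comes out ~0
      ```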

      • Septimaeus@infosec.pub · 3 points · 1 day ago

        And for the kid challenging the 0.1% result: that’s about as close to the pure scientific method as you can get.

    • foo@feddit.uk · 3 points · 1 day ago

      Especially in ML. It’s currently easier to integrate multiple small specialised models than to train one big model for every use case. If I understand correctly, that was one of the main motivations behind Anthropic’s development of the Model Context Protocol, including letting front-end clients interact with LLMs.
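      For what that integration looks like in practice, here’s a minimal sketch of an MCP tool server, assuming the official `mcp` Python SDK’s documented FastMCP quickstart API; the one-liner “model” is a toy stand-in for a small specialised model.

      ```python
      # Minimal MCP tool server sketch (assumes the official `mcp` Python SDK).
      from mcp.server.fastmcp import FastMCP

      mcp = FastMCP("specialist-demo")  # hypothetical server name

      @mcp.tool()
      def classify_sentiment(text: str) -> str:
          """Toy stand-in for a small specialised sentiment model."""
          return "positive" if "good" in text.lower() else "negative"

      if __name__ == "__main__":
          mcp.run()  # stdio transport by default, so any MCP client can attach
      ```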

  • Fuckfuckmyfuckingass@lemmy.world · 29 points · 2 days ago

    I fucking loathe the term “compute”. Every time one of these mealy-mouthed motherfuckers lets it slide through sphincter-like lips I want to kick some teeth in.

  • ILikeTraaaains@lemmy.world · 9 points · 2 days ago

    “We repeat the experiment with a newer dataset and act as if we were the first to run this kind of experiment.”

    “We speculate about possible future applications, written like your run-of-the-mill generalist newspaper.”

    “Another article summarizing other articles.”