• melsaskca@lemmy.ca · 13 hours ago

    That means in about 6 months or so the AI content quality will be about an 8/10. The processors spread machine “learning” incredibly fast. Some might even say exponentially fast. Pretty soon it’ll be like that old song “If you wonder why your letters never get a reply, when you tell me that you love me, I want to see you write it”. “Letters” is an old version of one-on-one tweeting, but with no character limit.

    • xthexder@l.sw0.com · edited · 10 hours ago

      What improvements have there been in the previous 6 months? From what I’ve seen, the AI is still spewing the same 3/10 slop it has been since 2021, with maybe one or two improvements bringing it up from 2/10. I’ve heard several people say some newer/bigger models actually got worse at certain tasks, and the clean training data has pretty much dried up, leaving little to train further models on.

      I just don’t see any world where scaling up the compute and power usage is suddenly going to improve the quality by orders of magnitude. By design, LLMs output the most statistically likely response, and that is almost by definition the most average, bland response possible.
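      That “most statistically likely” point can be sketched with a toy next-token distribution. The words and probabilities below are made up for illustration, not taken from any real model:

```python
# Toy sketch with made-up probabilities, not any real model's output.
# Greedy decoding picks the argmax token every time, so the model
# converges on the blandest, highest-frequency continuation.
next_token_probs = {
    "the": 0.41,        # safe, high-frequency filler
    "a": 0.27,
    "luminous": 0.02,   # rarer, more distinctive word choices
    "gossamer": 0.01,
}

def greedy_pick(probs):
    # Deterministic argmax: the same context always yields the same word
    return max(probs, key=probs.get)

print(greedy_pick(next_token_probs))  # prints "the"
```

      Sampling with a higher temperature would occasionally surface the rarer words, but the default decoding strategies are tuned toward exactly this kind of safe, average output.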

    • Arkthos@pawb.social · 8 hours ago

      I doubt that. A lot of the poor writing quality comes down to choice. All the most powerful models are inherently trained to be bland, seek harmony with the user, and generally come across as kind of slimy in a typically corporate sort of way. This bleeds into the writing style pretty heavily.

      A model trained specifically for creative writing without such a focus would probably do better. We’ll see.

    • Skua@kbin.earth · 12 hours ago

    Only if you assume that its performance will continue improving for a good while and (at least) linearly. The companies are really struggling to give their models more compute or more training data now, and frankly it doesn’t seem like there have been any big strides for a while.

      • kibiz0r@midwest.social · 12 hours ago

        Yeah… Linear increases in performance appear to require exponentially more data, hardware, and energy.
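        The cost asymmetry can be illustrated with a toy model. The quality/compute relationship below is invented for illustration, not fitted to any published scaling result:

```python
# Toy model with made-up numbers: suppose quality only improves with the
# *logarithm* of compute, Q = log10(C). Then each +1 step in quality
# needs 10x the compute -- linear gains, exponential cost.
def compute_for_quality(q):
    # Invert Q = log10(C)  ->  C = 10**Q
    return 10 ** q

for q in (6, 7, 8):
    print(f"quality {q} -> compute {compute_for_quality(q):,}")
# quality 6 -> compute 1,000,000
# quality 7 -> compute 10,000,000
# quality 8 -> compute 100,000,000
```

        Under an assumption like this, every additional point of quality multiplies the compute bill by a constant factor, which is why “just scale it up” gets expensive so quickly.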

        Meanwhile, the big companies are passing around the same $100bn IOU, amortizing GPUs on 6-year schedules while burning them out in months, using those same GPUs as collateral on massive loans, and committing spend against an ever-accelerating number of data centers that are not guaranteed to get built or to receive sufficient power.