• TheFogan@programming.dev · 5 hours ago

    I mean, slow storage is probably far better suited to archiving/backup use cases, since the LLM companies are really only interested in the fastest storage.

    • tal@lemmy.today · 3 hours ago (edited)

      I would bet that a lot of the storage that AI companies are picking up isn’t for the model itself, but for storing the huge amount of information that they want to use as their training corpus.

      I’d bet that what they do is something like this:

      1. Download data and store in original form, non-destructively. This is probably not used incredibly frequently. When you see bots sucking down the whole Web, this is the sort of thing that is involved.

      2. Have some kind of filtered training corpus. This throws out a lot of stuff that is useless for training. This is generated from #1 by filtering software. It’s probably smaller than #1. Probably a lot smaller.

      3. Probably some sort of scored index is generated at this stage to put an estimate on how useful or reliable the data in step #2 should be considered; I’d assume that this is an input into the training.

      4. The generated model, generated via training.
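
      The staged pipeline guessed at above could be sketched roughly like this. To be clear, this is a toy illustration: the function names, data shapes, and filtering/scoring heuristics are all invented here, not anything an AI company is known to actually run.

```python
# Hypothetical sketch of stages #1-#3 above: raw crawl -> filtered corpus
# -> scored corpus ready for training. All heuristics are invented.

def filter_corpus(raw_docs):
    """Stage #2: throw out stuff that is useless for training
    (here: near-empty pages and exact duplicates)."""
    seen = set()
    kept = []
    for doc in raw_docs:
        text = doc["text"].strip()
        if len(text) < 20:   # too short to be useful
            continue
        if text in seen:     # exact-duplicate removal
            continue
        seen.add(text)
        kept.append(doc)
    return kept

def score_corpus(docs):
    """Stage #3: attach a crude usefulness estimate that
    training could weight each document by."""
    scored = []
    for doc in docs:
        words = doc["text"].split()
        # Toy heuristic: varied vocabulary relative to length scores higher.
        score = min(1.0, len(set(words)) / max(len(words), 1))
        scored.append({**doc, "score": score})
    return scored

# Stand-in for stage #1: a tiny "raw crawl" stored as-is.
raw = [
    {"url": "a", "text": "the quick brown fox jumps over the lazy dog"},
    {"url": "b", "text": "the quick brown fox jumps over the lazy dog"},  # duplicate
    {"url": "c", "text": "buy now!!"},                                    # too short
    {"url": "d", "text": "tape libraries trade access latency for cost per terabyte"},
]

filtered = filter_corpus(raw)        # the duplicate and the junk page are dropped
training_set = score_corpus(filtered)
```

      The point of separating the stages is the same one made above: stage #1 is written once and reread only when the filtering software changes, while the later stages get hit constantly during training, which is why their storage-speed requirements differ so much.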

      For the data in stage #1, I’d guess that AI companies might be able to use tapes. That being said, it might make sense to use faster storage if it accelerates the time to iterate on improving the filtering software.

      But, yeah, for the later stages, tapes probably aren’t gonna work.