Have they looked into middle-out lossless data compression developed by Richard Hendricks?
They made technology necessary to function in society and then they began gatekeeping it.
This is like stealing all the water.
Get out of our cyberspace. These people should be jailed.
Why do I feel a new dark age incoming
Can’t have hardware for us to archive. That’s only for the AI overlords to archive the internet for their own personal AI models.
Technology gentrification go brrr
Just tell all the other computer related businesses they aren’t going to have any new customers.
“We gather over 100 terabytes of new materials each day, […]”
First I was like “What the hell? How can that much be worth saving?” But then I remembered it doesn’t only save web pages, but video, audio, and software as well. Sheesh, tough job.
There’s still a little time to get in at reasonable prices before they truly blow up and go insane. I’ve definitely started seeing increases such as a WD 8TB that went from $204.99 to $239.99 overnight after a short period of being out of stock. Depending on brand and size, some are still available at retail prices, but they’re quickly running out.
I mean, we’re already at 200-500% even for those low capacity drives, unfortunately. I bought a 26TB last fall for $240. I bought a 22TB in January for the same.
You will remember nothing and be happy.
Use tape libraries for the moment, with hard drives acting as a cache for them? Doesn’t need to mean moving the whole backing storage to tape, just predicting what won’t likely be used soon and letting the storage format indicate “go look on tape for this item”. Obviously, that can result in much higher cold storage retrieval latency, but as long as you are (a) doing predictive fetching with a reasonably good algorithm and (b) have a lot of hard drives, which I’m sure that The Internet Archive does, I’d think that tape should be workable.
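The cache-in-front-of-tape idea above can be sketched in a few lines. Everything here is made up for illustration (a real HSM system would use a proper prediction model, batching, and tape-robot scheduling); the "prediction" below is just least-recently-used demotion, standing in for "predict what won’t likely be used soon":

```python
import time

# Toy sketch of the HDD-cache-over-tape idea: hot items live on the "hdd"
# tier, cold items get demoted to the slow "tape" tier, and a lookup tells
# you which tier it had to go to. All names are hypothetical.
class TieredStore:
    def __init__(self, cache_capacity):
        self.cache = {}  # item_id -> (data, last_access), the hard-drive tier
        self.tape = {}   # item_id -> data, the tape-library tier (high latency)
        self.cache_capacity = cache_capacity

    def put(self, item_id, data):
        self.cache[item_id] = (data, time.monotonic())
        self._demote_cold_items()

    def get(self, item_id):
        if item_id in self.cache:
            data, _ = self.cache[item_id]
            self.cache[item_id] = (data, time.monotonic())  # refresh recency
            return data, "hdd"
        if item_id in self.tape:
            # Simulated tape retrieval; promote the item back into the cache.
            data = self.tape.pop(item_id)
            self.put(item_id, data)
            return data, "tape"
        raise KeyError(item_id)

    def _demote_cold_items(self):
        # Crude stand-in for a predictive policy: once the HDD tier is full,
        # push the least-recently-used items out to tape.
        while len(self.cache) > self.cache_capacity:
            coldest = min(self.cache, key=lambda k: self.cache[k][1])
            data, _ = self.cache.pop(coldest)
            self.tape[coldest] = data
```

The interesting design choice is that `get` reports which tier it hit, so callers (or a prefetcher) can see when they’re paying the cold-storage latency penalty.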
https://en.wikipedia.org/wiki/Tape_library
In computer storage, a tape library is a physical area that holds magnetic data tapes. In an earlier era, tape libraries were maintained by people known as tape librarians and computer operators and the proper operation of the library was crucial to the running of batch processing jobs. Although tape libraries of this era were not automated, the use of tape management system software could assist in running them.
Subsequently, tape libraries became physically automated, and as such are sometimes called a tape silo, tape robot, or tape jukebox. These are storage devices that contain one or more tape drives, a number of slots to hold tape cartridges, a barcode reader to identify tape cartridges, and an automated method for loading tapes (a robot). Such solutions are mostly used for backups and for digital archiving. Additionally, the area where tapes that are not currently in a silo are stored is also called a tape library. One of the earliest examples was the IBM 3850 Mass Storage System (MSS), announced in 1974.
In either era, tape libraries can contain millions of tapes.
Physically automated tape library devices can store immense amounts of data, ranging from 20 terabytes[13] up to 2.1 exabytes of data[14] as of 2016.
For large data-storage, they are a cost-effective solution, with cost per gigabyte as low as 2 cents USD.
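Combining that quoted ~2 cents/GB figure with the ~100 TB/day ingest rate quoted earlier in the thread gives a rough sense of the media cost (media only; this ignores drives, robots, redundancy, and labor):

```python
# Back-of-the-envelope: daily tape media cost for ~100 TB/day of new material
# at ~$0.02 per GB, the two figures quoted in this thread.
TB_IN_GB = 1024  # binary units; using 1000 changes the answer only slightly

daily_ingest_tb = 100
cost_per_gb_usd = 0.02

daily_media_cost = daily_ingest_tb * TB_IN_GB * cost_per_gb_usd
yearly_media_cost = daily_media_cost * 365

print(f"${daily_media_cost:,.0f} per day")    # ≈ $2,048 per day
print(f"${yearly_media_cost:,.0f} per year")  # ≈ $747,520 per year
```

So on media cost alone, tape for that ingest rate is well under a million dollars a year.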
I’d also guess — though I don’t know for sure — that it’s probably a lot easier to scale up manufacturing of tapes than it is hard drives.
EDIT: Does kind of make me wonder what the open-source options for tiered storage like that are. I’ve never really gone hunting, but it seems like there’d be a lot of commonality from place to place, and for a lot of places that do it, it’s not really their core competency (that is, they just want to do something that involves storing and processing lots of data, not something where data storage is the main point).
I mean there’s probably a pretty large aspect of slow storage that’s far more ideal for archiving/backup options, as LLMs are really only interested in the fastest of things.
I would bet that a lot of the storage that AI companies are picking up isn’t for the model itself, but for storing the huge amount of information that they want to use as their training corpus.
I’d bet that what they do is something like this:
1. Download data and store it in original form, non-destructively. This is probably not used incredibly frequently. When you see bots sucking down the whole Web, this is the sort of thing that is involved.
2. Have some kind of filtered training corpus. This throws out a lot of stuff that is useless for training. This is generated from #1 by filtering software. It’s probably smaller than #1. Probably a lot smaller.
3. Probably some sort of scored index is generated at this stage to put an estimate on how useful or reliable the data in step #2 should be considered; I’d assume that this is an input into the training.
4. The generated model, generated via training.
For the data in stage #1, I’d guess that AI companies might be able to use tapes. That being said, it might make sense to use faster storage if it accelerates the time to iterate on improving the filtering software.
But, yeah, for the later stages, tapes probably aren’t gonna work.
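The staged pipeline guessed at above can be sketched like this. To be clear, every function, field, and threshold here is invented for the example; it only illustrates the shape of stages #2 and #3 (filter, then score), with #1 being the raw crawl dump and #4 the training run itself:

```python
# Illustrative sketch of the guessed pipeline: raw crawl -> filter -> score.
# All criteria here are toy stand-ins for real quality filters.

def filter_corpus(raw_docs):
    """Stage 2: throw out documents that are useless for training."""
    return [d for d in raw_docs if len(d["text"]) >= 20 and d["lang"] == "en"]

def score_corpus(filtered_docs):
    """Stage 3: attach a crude usefulness/reliability score, fed into training."""
    return [{**d, "score": min(1.0, len(d["text"]) / 1000)} for d in filtered_docs]

# Stage 1 is the raw, non-destructive dump (the tape-friendly part);
# stage 4, the training run over the scored corpus, is omitted.
raw = [
    {"text": "short", "lang": "en"},
    {"text": "a" * 500, "lang": "en"},
    {"text": "b" * 500, "lang": "fr"},
]
corpus = score_corpus(filter_corpus(raw))
```

The point of the structure is that each stage is regenerable from the previous one, which is why only stage #1 really needs durable (tape-friendly) storage, and why faster storage mainly pays off when you’re iterating on the filter.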