- cross-posted to:
- technology@beehaw.org
cross-posted from: https://beehaw.org/post/24650125
Because nothing says “fun” quite like having to restore a RAID that just saw 140TB fail.
Western Digital this week outlined its near-term and mid-term plans to increase hard drive capacities to around 60TB and beyond with optimizations that significantly increase HDD performance for the AI and cloud era. In addition, the company outlined its longer-term vision for hard disk drives’ evolution that includes a new laser technology for heat-assisted magnetic recording (HAMR), new platters with higher areal density, and HDD assemblies with up to 14 platters. As a result, WD will be able to offer drives beyond 140 TB in the 2030s.
Western Digital plans to volume-produce its first commercial HAMR hard drives next year, with capacities starting at 40TB (CMR) or 44TB (SMR) in late 2026 and production ramping in 2027. These drives will use the company's proven 11-platter platform with high-density media, plus HAMR heads whose edge-emitting lasers heat an iron-platinum alloy (FePt) layer on top of the platters to its Curie temperature (the point at which its magnetic properties change), reducing its magnetic coercivity before data is written.
I just hope smaller drives become cheaper. The word "hope" is doing a lot of heavy lifting here.
Ten years from now…
Amazon search: “hard drive”
Result: 4TB $198
BARGAIN!
Probably still with only 1 year warranty…
Doesn't this sound awfully similar to MiniDisc technology? The discs were only writable when heated by a laser. They were pretty impressive for the time… but not very fast, especially when writing.
I wonder why current consumer HDDs don't have NVMe connectors on them. I know speeding up the bus isn't going to make the spinning rust access any faster, but the cache RAM would probably benefit from not being capped at ~550MB/s.
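For reference, a rough sketch of where that cap comes from (the NVMe figure assumes a PCIe 3.0 x4 link, a common configuration; the platter speeds are ballpark numbers):

```python
# Where the ~550 MB/s cap comes from: SATA III runs at 6 Gb/s with
# 8b/10b encoding, so only 80% of the line rate carries data.
sata_mb_s = 6e9 * 0.8 / 8 / 1e6
print(f"SATA III ceiling: ~{sata_mb_s:.0f} MB/s")  # ~600 MB/s raw, ~550 in practice

# A common NVMe link, PCIe 3.0 x4: ~985 MB/s per lane after 128b/130b encoding.
print(f"PCIe 3.0 x4: ~{4 * 985} MB/s")  # ~3940 MB/s

# The platters themselves top out around 250-300 MB/s sequential, so
# only cache bursts would actually use the wider pipe.
```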
Holy fuck, can you imagine how long it would take to re-stripe a failed drive in a RAID-Z2 array 😭
Not a clue. Care to eli5?
When you are running a server just to store files (a NAS), you generally set it up so multiple physical hard disks are joined together into an array, so that if one fails, none of the data is lost. You can replace a failed drive by taking it out and putting in a new working drive, and then the system has to copy all of the data over from the other drives. This process can take many hours even with the 10-20TB drives you get today, so doing the same thing with a 140TB drive would take days.
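A back-of-the-envelope sketch of that, assuming a sustained rebuild rate of ~250 MB/s (a guess for a modern CMR drive; real rebuilds on a busy array run slower):

```python
# Back-of-the-envelope rebuild time: capacity / sustained write rate.
# Assumes the rebuild runs at full sequential speed, which real arrays
# rarely sustain while still serving other I/O.

def rebuild_days(capacity_tb: float, rate_mb_s: float = 250.0) -> float:
    capacity_mb = capacity_tb * 1_000_000    # TB -> MB, decimal units
    return capacity_mb / rate_mb_s / 86_400  # 86,400 seconds per day

for tb in (20, 140):
    print(f"{tb:>4} TB: ~{rebuild_days(tb):.1f} days")
# 20 TB: ~0.9 days; 140 TB: ~6.5 days of running degraded.
```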
@SmoothLiquidation @Telorand They also claim up to 8x speed improvements with HAMR. Obviously that remains to be seen, but if they could roughly match capacity improvements, that would keep restriping in the same ballpark.
with optimizations that significantly increase HDD performance for the AI and cloud era
Can somebody do anything with a normal consumer in mind these days? 😭
No, and it’s by design.
You’re gonna lease a tablet and use cloud-based storage services and like it.
The dystopia is here.
Not until somebody shuts off the investor money faucet for AI. Then they’ll come crawling back — although inevitably not until after they go whining to all the world’s governments about wanting a bailout.
But hey, look at the bright side. We’ve already had the cryptocurrency mining boom and bust, and “AI” boom and soon to be bust. There’s still time for some idiot to invent the next tech scam fad which will conveniently require a shitload of hardware for no recognizably useful purpose.
Does data take up less room when it’s being used by AI?
This would be a bitch to have to rebuild in a RAID array. At some point a drive can get TOO big, and this is looking to cross that line.
At some point a drive can get TOO big
I was thinking the same. I would hate to toast a 140 TB drive. I think I’d just sit right down and cry. I’ll stick with my 10 TB drives.
This is not meant for human beings. A creature that needs over 140 TB of storage in a single device can definitely afford to run them in some distributed redundancy scheme with hot swaps and just shred failed units. We know they’re not worried about being wasteful.
Rebuild time is the big problem with this in a RAID array. The interface is too slow, and you risk losing more drives in the array before the rebuild completes.
Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.
True, but that's going to really push your network links just to recover. Realistically, something like ZFS or RAID-6 with extra hot spares would help reduce the risk, but it's still a non-trivial amount of time. Not to mention the impact on normal usage during that period.
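A toy model of that second-failure risk, assuming independent failures and a made-up 2% annualized failure rate:

```python
import math

# Chance that at least one surviving drive fails during the rebuild
# window, assuming independent exponential failures. The 2% annualized
# failure rate (AFR) is an assumption for illustration, not a measured figure.
def p_second_failure(n_drives: int, window_days: float, afr: float = 0.02) -> float:
    p_survive_one = math.exp(-afr * window_days / 365)
    return 1 - p_survive_one ** n_drives

for days in (1, 7):
    print(f"{days}-day rebuild, 11 survivors: {p_second_failure(11, days):.2%}")
# Roughly 0.06% vs 0.42% -- a week-long window is ~7x riskier, which is
# why RAID-6/RAID-Z2 tolerate a second loss and hot spares shrink the window.
```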
Yeah I’m running 16s and that’s pushing it imo
It doesn't really matter; the current limitation is not so much data density at rest as getting the data in and out at a useful speed. We breached the capacity barrier long ago with disk arrays.
SATA will no longer be improved; we now need U.2 designs for data transport that are built for storage. This exists, but it needs to filter down through industrial applications to reach us plebs.
640K ought to be enough for anybody.
I don't get how a single person would have that much data. I fit my whole life, from the first shot I took on a digital camera in 2001, onto a 4TB drive.
…and even then, two thirds of it is just pirated movies.
Amateur 😀
But seriously, I probably have close to 100 TB of music, TV shows, movies, books, audiobooks, pictures, 3D models, magazines, etc.
I need a home for my orphaned podman containers /s
I think this is better targeted at small and medium businesses. If you run this as a NAS, you could easily have all your business's files in one place without needing complex networking.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
| --- | --- |
| NAS | Network-Attached Storage |
| RAID | Redundant Array of Independent Disks for mass storage |
| SATA | Serial AT Attachment interface for mass storage |
| ZFS | Solaris/Linux filesystem focusing on data integrity |
4 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.
[Thread #72 for this comm, first seen 8th Feb 2026, 00:30] [FAQ] [Full list] [Contact] [Source code]
This ONLY works at an insane scale. This will never hit the consumer market.
Also, what current consumer-level application could require 140TB of storage? That would be some advanced-level data hoarding or something.
The failure rate is going to be absolutely INSANE as well.
@Korkki @just_another_person I see 4K HDR Blu-ray movie rips these days on the order of 50GB (edit: e.g., Eddington.2025.MULTi.VFF.2160p.DV.HDR.BluRay.REMUX.HEVC-[BATGirl]: 77.73G).
Which is too rich for my blood (I’m still watching on 1080p screens over here), but for someone with the right kind of home theater… that’s only ~280 movies on a 14TB drive. Lots of movie collections, even in the olden days of physical VHS and DVDs, span 1,000+ movies.
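A quick sanity check on that math, using the ~50GB-per-rip figure quoted above:

```python
# Movies per drive at ~50 GB per 4K remux (decimal units throughout).
RIP_GB = 50
for drive_tb in (14, 140):
    print(f"{drive_tb:>4} TB drive: ~{drive_tb * 1000 // RIP_GB} movies")
# 14 TB -> ~280 movies; 140 TB -> ~2,800, enough for a 1,000+ title collection.
```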
As a result, WD will be able to offer drives beyond 140 TB in the 2030s.
Um thanks but tell us about 2026?
Shrimp platters.
Whoops, sorry, the oceans are hostile to life now. No more shrimp platters. Try again next time.