They’re horizontal stabilizers. They serve a crucial aerodynamic role.

https://en.wikipedia.org/wiki/1983_Negev_mid-air_collision
In May 1983, two Israeli Air Force aircraft, an F-15 Eagle and an A-4 Skyhawk, collided in mid-air during a training exercise over the Negev region of Israel. Notably, the F-15 (with a crew of two) managed to land safely at a nearby airbase, despite having its right wing almost completely sheared off in the collision. The lifting-body properties of the F-15, together with its abundant engine thrust, allowed the pilot to achieve this unique feat.[1]
The F-15 started rolling uncontrollably after the collision and the instructor ordered an ejection. Nedivi, who outranked the instructor, decided not to eject and attempted recovery by engaging the afterburner, and eventually regained control of the aircraft. He was able to maintain control because of the lift generated by the large areas of the fuselage, stabilators, and remaining wing. Diverting to Ramon Airbase,[2] the F-15 landed at twice the normal speed to maintain the necessary lift, and its tailhook was torn off completely during the landing. Nedivi managed to bring his F-15 to a complete stop approximately 20 ft (6 m) from the end of the runway. He later told The History Channel, “it’s highly likely that if I had seen it clearly I would have ejected, because it was obvious you couldn’t really fly an airplane like that.”[4] He added, “Only when McDonnell Douglas later went to analyze it, they said, OK, the F-15 has a very wide [lifting] body; you fly fast enough and you’re like a rocket. You don’t need wings.”[3][4][5]
Sometimes things aren’t as crucial as they might seem!

I would bet that a lot of the storage that AI companies are picking up isn’t for the model itself, but for storing the huge amount of information that they want to use as their training corpus.
I’d bet that what they do is something like this:
1. Download data and store it in its original form, non-destructively. This archive probably isn't accessed very frequently. When you see bots sucking down the whole Web, this is the sort of thing that is involved.
2. Have some kind of filtered training corpus, generated from #1 by filtering software. This throws out a lot of stuff that is useless for training, so it's probably smaller than #1. Probably a lot smaller.
3. Probably some sort of scored index generated at this stage, estimating how useful or reliable the data from #2 should be considered; I'd assume that this is an input into the training.
4. The model itself, generated via training.
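The filter-and-score stages could be sketched like this. Everything here is hypothetical: the thresholds, the word-diversity heuristic, and the scoring formula are stand-ins for whatever proprietary filtering software an AI company actually runs, just to show the shape of the pipeline (raw corpus in, filtered docs plus quality scores out).

```python
# Hypothetical sketch of the filter (#2) and scoring (#3) stages.
# Heuristics and thresholds are made up for illustration.

def keep(doc: str) -> bool:
    """Stage-2 filter: drop documents unlikely to help training."""
    words = doc.split()
    if len(words) < 20:                         # too short to be useful
        return False
    if len(set(words)) / len(words) < 0.3:      # highly repetitive spam/boilerplate
        return False
    return True

def quality_score(doc: str) -> float:
    """Stage-3 score: crude usefulness/reliability estimate in [0, 1]."""
    words = doc.split()
    diversity = len(set(words)) / len(words)            # vocabulary richness
    alpha = sum(w.isalpha() for w in words) / len(words)  # fraction of plain words
    return round(0.5 * diversity + 0.5 * alpha, 3)

raw_corpus = [
    "buy buy buy buy buy buy buy now",  # spammy page: filtered out
    "The F-15 is a twin-engine, all-weather tactical fighter developed "
    "by McDonnell Douglas for the United States Air Force. It first flew "
    "in July 1972 and entered service in 1976.",
]

# Filtered corpus with scores attached, ready to feed into training.
filtered = [(doc, quality_score(doc)) for doc in raw_corpus if keep(doc)]
for doc, score in filtered:
    print(score, doc[:40])
```

In a real pipeline the scores would live in a separate index keyed by document ID rather than inline, but the flow is the same: stage #1 is only re-read when the filter code changes, which is why fast storage there mainly buys you faster iteration.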
For the data in stage #1, I’d guess that AI companies might be able to use tapes. That being said, it might make sense to use faster storage if it accelerates the time to iterate on improving the filtering software.
But, yeah, for the later stages, tapes probably aren’t gonna work.