• tal@lemmy.today · 10 hours ago

    And, should the GenAI market deflate, it will be because all of the big players in the market – the hyperscalers, the cloud builders, the model builders, and other large service providers – believed their own market projections with enough fervor that TSMC will shell out an entire year’s worth of net profits to build out its chip etching and packaging plants.

    The thing is that with some of these guys, the capacity isn’t general.

    So, say you’re OpenAI and you buy a metric shit-ton of Nvidia hardware.

    You are taking on some very real risks here. What you are buying is an extremely large amount of parallel compute hardware with specific performance characteristics. There are scenarios where the value of that hardware could radically change.

    • Say generative AI — even a substantial part of generative AI — shifts hard to something like MoEs (mixture-of-experts models), and it becomes desirable to have a higher ratio of memory to compute capacity (see the rough arithmetic sketch after this list). Suddenly, the hardware that OpenAI has purchased isn’t optimal for the task at hand.

    • Say it turns out that some researchers discover that we can run expert neural nets that are only lightly connected at a higher level. Then maybe we’re just fine using a bank of consumer GPUs to do computation, rather than one beefy Nvidia chip that excels at dense models.

    • Say models get really large and someone starts putting far-cheaper-than-DRAM NVMe on the parallel compute device to store offloaded expert network model weights (see the offload sketch after this list). Again, maybe current Nvidia hardware becomes a lot less interesting.

    • Say there’s demand, but not enough to make a return in a couple of years, and everyone else is buying the next generation of Nvidia hardware. That is, the head start that OpenAI bought just isn’t worth what they paid for it.

    • Say it turns out that a researcher figures out a new, highly-effective technique for identifying the relevant information about the world, and suddenly the amount of computation required falls way, way off, and doing a lot of generative AI on CPUs becomes a lot more viable. I am very confident that we are nowhere near the ideal here today.

    In all of those cases, OpenAI is left with a lot of expensive hardware that may be much less valuable than one might have expected.
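    To make the memory-versus-compute point concrete, here is a rough back-of-envelope sketch in Python. Every number in it (parameter counts, bytes per weight, FLOPs per parameter) is made up for illustration; none of it is a real model or GPU spec.

    ```python
    # Back-of-envelope only: illustrative numbers, not real model specs.
    BYTES_PER_PARAM = 2  # fp16/bf16 weights

    def memory_and_compute(total_params_b, active_params_b):
        """Weight memory (GB) and rough FLOPs per generated token.

        Rule of thumb: ~2 FLOPs per *active* parameter per token, but
        every parameter has to be resident somewhere, active or not.
        """
        mem_gb = total_params_b * 1e9 * BYTES_PER_PARAM / 1e9
        flops_per_token = 2 * active_params_b * 1e9
        return mem_gb, flops_per_token

    # Dense model: every parameter is active on every token.
    dense_mem, dense_flops = memory_and_compute(total_params_b=70, active_params_b=70)

    # Sparse MoE: far more total parameters, but only a couple of experts
    # fire per token, so active parameters (and FLOPs) stay small.
    moe_mem, moe_flops = memory_and_compute(total_params_b=640, active_params_b=40)

    print(f"dense: {dense_mem:.0f} GB of weights, {dense_flops:.1e} FLOPs/token")
    print(f"MoE:   {moe_mem:.0f} GB of weights, {moe_flops:.1e} FLOPs/token")
    print(f"memory needed per unit of compute, MoE vs dense: "
          f"{(moe_mem / moe_flops) / (dense_mem / dense_flops):.0f}x")
    ```

    The point isn’t the specific numbers; it’s that a sparse-MoE-heavy world wants a lot more memory per unit of compute than a dense-model world, and that is exactly the kind of shift that can strand hardware bought for the old ratio.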
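    And here is a minimal sketch of the NVMe-offload idea, again with made-up, toy-sized shapes. numpy.memmap stands in for whatever a real serving stack would actually use; the point is just that only the experts the router selects ever get pulled off disk.

    ```python
    # Hypothetical sketch: expert weights live in one big file on cheap
    # NVMe; only router-selected experts are paged in per token.
    import numpy as np

    N_EXPERTS, D_IN, D_OUT = 16, 512, 512  # toy sizes for the sketch

    # Build a dummy weight file so the sketch is runnable end to end.
    weights_path = "experts.fp16.bin"
    np.zeros((N_EXPERTS, D_IN, D_OUT), dtype=np.float16).tofile(weights_path)

    # memmap gives array-style access without loading the whole file into
    # RAM; pages are faulted in from disk only for the experts we touch.
    experts = np.memmap(weights_path, dtype=np.float16, mode="r",
                        shape=(N_EXPERTS, D_IN, D_OUT))

    def run_selected_experts(x, selected_ids):
        """Apply only the router-selected experts to activation x."""
        outputs = []
        for e in selected_ids:
            w = np.asarray(experts[e], dtype=np.float32)  # pulls just this expert off NVMe
            outputs.append(x.astype(np.float32) @ w)
        return np.mean(outputs, axis=0)

    x = np.random.randn(D_IN).astype(np.float16)
    y = run_selected_experts(x, selected_ids=[3, 7])  # e.g. top-2 routing
    print(y.shape)  # (512,)
    ```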

    But…if you’re TSMC, what you’re buying is generalized. You fabricate chips. Yeah, okay, very high-resolution, high-speed chips at a premium price over lower-resolution stuff. But while the current AI boom may generate a lot of demand, all of that capacity can also be used to make other sorts of chips. If generative AI demand suddenly falls way off, you might not have made an optimal investment; maybe you spent more than makes sense on expanding production capacity. But there are probably a lot of people outside the generative AI world who can do things with a high-resolution chip fab.

    • Alphane Moon@lemmy.world (OP) · 6 hours ago

      Good point. TSMC is not just the pickaxe seller in the gold rush; they are a generalized “best in class” tools seller.

      To some degree, I don’t think it matters for them what they are fabbing; they’ll always have demand as long as they are the clear leader.