• Alphane Moon@lemmy.world (OP)

    That has become less of an issue now because of a recently announced feature called Windows ML 2.0, which does not distinguish between different NPUs, CPUs, GPUs, and AI chips, O’Donnell said. Microsoft earlier this year also added the Phi and Mu small language models (SLMs), which let AI applications run directly on PCs.
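
    To make the "doesn't distinguish" point concrete: as I understand it, Windows ML builds on ONNX Runtime, which exposes the same inference call no matter which execution provider (NPU, GPU, or CPU) ends up running the model. Here's a minimal sketch of that idea using the ONNX Runtime Python API; the model path, input shape, and provider priority order are placeholder assumptions on my part, not Microsoft's actual defaults.

```python
# Minimal sketch: hardware-agnostic inference via ONNX Runtime, the engine
# Windows ML builds on. "model.onnx" and the input shape are hypothetical
# placeholders; the provider priority order below is my assumption.
import numpy as np
import onnxruntime as ort

# Preferred providers in priority order: Qualcomm NPU (QNN), then GPU via
# DirectML, then CPU. Filtering against get_available_providers() means the
# same script runs unchanged on whatever silicon is actually present.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
available = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=available)

# The inference call is identical regardless of which device was selected.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print("Executed with:", session.get_providers()[0])
```

    The point of the design is that application code never branches on hardware; the runtime picks the best available backend and falls back gracefully, which is presumably why a hard NPU requirement matters less now.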

    While the new AI features announced at Ignite also take advantage of NPUs, Microsoft’s hard requirement for a performant NPU has been softening, analysts said. There are signs Intel may be deprioritizing NPUs and returning to GPUs as the minimum compute standard for AI PCs.

    While my DIY (i.e. non-cloud) ML use cases are fairly limited in scope (mostly video upscaling; I honestly gave up on local LLMs and local image generation), the NPU concept seems like a bad fit for a market that’s still in its early stages (and one that likely needs a massive corrective decline before we understand the truly valuable use cases).