themachinestops@lemmy.dbzer0.com to Technology@lemmy.world · English · edited, 1 day ago
Dell admits consumers don't care about AI PCs; Dell is now shifting its focus this year away from being 'all about the AI PC.' (www.theverge.com)
19 comments
RobotToaster@mander.xyz · 1 day ago
Do NPUs/TPUs even work with ComfyUI? That's the only "AI PC" I'm interested in.
SuspciousCarrot78@lemmy.world · edited, 18 hours ago
NPUs yes, TPUs no (or not yet). Rumour has it that Hailo is meant to be releasing a plug-in NPU "soon" that accelerates LLMs.
L_Acacia@lemmy.ml · 23 hours ago
Support for custom nodes is poor, and NPUs are fairly slow compared to GPUs (expect 5x to 10x longer generation times than 30xx+ GPUs in best-case scenarios). NPUs are good at running small models efficiently, not large LLM/image models.
Fermiverse@gehirneimer.de · 1 day ago
https://github.com/patientx/ComfyUI-Zluda
Works with the 395+.
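ZLUDA translates CUDA calls to AMD's HIP/ROCm stack, so a working ComfyUI-Zluda install should present the AMD GPU to PyTorch as an ordinary CUDA device. A minimal sanity-check sketch you can run before launching ComfyUI, using only the standard torch.cuda API (nothing here is specific to the fork, and it assumes PyTorch is already installed in the same environment):

```python
# Hedged sanity check: under ZLUDA, PyTorch's CUDA path should report
# the AMD GPU as if it were a CUDA device. This uses only the standard
# torch.cuda API; it does not verify anything ComfyUI-Zluda-specific.
import torch

if torch.cuda.is_available():
    print("CUDA-compatible device:", torch.cuda.get_device_name(0))
    # Run a tiny op on the GPU to confirm the backend actually executes.
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul OK:", (x @ x).shape)
else:
    print("No CUDA device visible; ZLUDA/driver setup likely incomplete.")
```

If the device name prints as the Radeon/Ryzen GPU and the matmul runs, the ZLUDA path is wired up, and ComfyUI should be able to pick up the same device when launched.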