inari@piefed.zip to Technology@lemmy.world · 17 hours ago
DeepSeek ditches Nvidia for Huawei chips in V4 launch (cybernews.com)
221 points · 44 comments
[object Object]@lemmy.ca · 16 hours ago (8 points)
Even with a bitnet, it's almost definitely better to train on high-precision floats and then refine down to bits. I would expect bitnet to require more layers for equivalent quality, too.
brucethemoose@lemmy.world · 16 hours ago (3 points)
I just meant for mass inference serving. Yeah, I haven't seen much in the way of bitnet training savings yet; it's still like regular old QAT. It does appear that DeepSeek is fine-tuning their MoEs in a 4-bit format now, though.
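For anyone curious, the "train in float, refine down to bits" idea above can be sketched roughly like this: BitNet-b1.58-style ternary quantization keeps full-precision latent weights during training and quantizes them to {-1, 0, +1} with a per-tensor absmean scale in the forward pass (a straight-through estimator handles the backward pass). This is an illustrative sketch only, not DeepSeek's or anyone's actual training code:

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Quantize weights to {-1, 0, +1} with a per-tensor absmean scale.

    During QAT-style training, the full-precision 'latent' weights are
    kept and this function is applied on the forward pass; gradients
    flow through as if it were the identity (straight-through estimator).
    """
    scale = np.mean(np.abs(w)) + 1e-8          # absmean scale, eps for safety
    q = np.clip(np.round(w / scale), -1, 1)    # round, then clamp to ternary
    return q, scale

# latent full-precision weights (toy example)
w = np.array([0.9, -0.05, 0.4, -1.2])
q, s = ternary_quantize(w)
w_hat = q * s  # dequantized approximation used in the forward pass
```

Small near-zero weights collapse to 0 while large ones saturate at ±1, which is why refining down from a well-trained float model tends to work better than training in bits from scratch.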