I got Qwen 3.5 running on a Steam Deck.
It ain’t exactly blazing fast, but it does actually work.
(Reasonably fast if you go down to the 2B param model; I can get the 9B param variant working too, though that leaves Steam Decky very hot and bothered.)
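(For anyone wanting to try something similar, here's a rough sketch of what the setup could look like, assuming llama-cpp-python and a quantized GGUF file on the Deck's CPU. The post doesn't say which runtime was actually used, and the model filename below is just a placeholder.)

```python
# Rough sketch: running a small quantized GGUF model on the Steam Deck's CPU.
# Assumes llama-cpp-python is installed (`pip install llama-cpp-python`) and a
# quantized model file has already been downloaded; the filename below is a
# placeholder, not necessarily the model from the post.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen-2b-instruct-q4_k_m.gguf",  # placeholder path
    n_ctx=2048,      # modest context window to keep memory use down
    n_threads=8,     # the Deck's Zen 2 APU has 4 cores / 8 threads
    n_gpu_layers=0,  # pure CPU; Vulkan offload is possible but optional
)

out = llm(
    "Explain in one sentence why a handheld can run a small LLM.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```

A 2B-class model at 4-bit quantization fits easily in the Deck's 16 GB of shared RAM; a 9B-class one fits too, which is presumably where the "hot and bothered" part comes in.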
Yeah, you absolutely do not need Nvidia hardware to run an LLM, but we get blasted with their propaganda suggesting otherwise all the time in the English-speaking West.
Because if you don’t need Nvidia, well, then, this whole AI bubble looks a lot more bubbly.
And have to vest consumer trafic card to run llm on the market.
Sorry, I’m not entirely sure what you mean.
Did you mean to say:
“And need to have the best consumer GPU on the market, to run an LLM.”
… likely alluding to an RTX 5090?
So you're basically saying it's bullshit, this idea that everyone needs extremely expensive hardware to run an LLM?