Not sure if this goes here or if this post will be hated on, but I want to host AI like LLMs and ComfyUI's newer models locally, and I'm not sure what type of setup or parts would work best on a fairly slim budget. I'm also not sure whether now is the right time to buy, with inflation and such.

I don't have a price in mind yet, but I'm wondering how much it would cost and what parts I might need.

If you have any questions or concerns, please leave a comment.

  • sj_zero@lotide.fbxl.net · 2 hours ago

    I'm running a 4B model on one of my machines, an old Surface Book 1.

    It's a brutal machine: heat issues, and the GPU doesn't work in Linux. But pick a minimal enough model and it's good enough to give me LLM access in my Nextcloud if for some reason I want it.
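    If you want to try something similar, here's a minimal sketch using llama-cpp-python (the Python binding for llama.cpp); the model path is a placeholder, and any small quantized GGUF model should work:

        # Minimal local inference sketch with llama-cpp-python.
        # Assumes you've already downloaded a small quantized GGUF
        # model; the path below is a placeholder.
        from llama_cpp import Llama

        llm = Llama(model_path="models/my-4b-model-q4.gguf", n_ctx=2048)
        out = llm("Q: What is self-hosting? A:", max_tokens=64)
        print(out["choices"][0]["text"])

    On a CPU-only machine like mine, a 4-bit quantized 4B model keeps both the memory footprint and the per-token latency tolerable.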

    The biggest thing really seems to be memory: most cheaper GPUs don't have enough VRAM to run a big model, and CPUs are dreadfully slow on larger models even if you can put enough RAM in one of them.
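    For a rough sense of how much memory the weights alone need, here's a back-of-the-envelope sketch; the bytes-per-parameter figures are approximations (FP16 is 2 bytes, typical 4-bit quants land around 0.55 bytes), and real runs add KV cache and runtime overhead on top:

        # Back-of-the-envelope weight-memory estimate, in GB.
        # bytes_per_param: ~2.0 for FP16, ~0.55 for typical 4-bit quants.
        def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
            # 1e9 params * bytes each, divided by 1e9 bytes per GB
            return params_billion * bytes_per_param

        for b in (4, 7, 13, 70):
            print(f"{b}B params: FP16 ~{weight_memory_gb(b, 2.0):.0f} GB, "
                  f"4-bit ~{weight_memory_gb(b, 0.55):.1f} GB")

    That's why a 4B model fits on almost anything, while a 70B model needs either a 48 GB+ GPU setup or a lot of patience on CPU.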