• lmuel@sopuli.xyz · 3 points · 1 day ago

    Tbf, I'm not sure how much it helps them if you're using the LLM without an account.

          • A relatively recent gaming-type setup with local-ai or llama.cpp is what I'd recommend.

            I do most of my AI stuff on an RTX 3070, but I also have a Ryzen 7 3800X with 64 GB of RAM for heavy models where I don't care so much how long it takes but need the high parameter count for whatever reason, for example MoE and agentic behavior.
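            For reference, a llama.cpp run split between a small GPU and lots of system RAM might look like this. This is a sketch, not the commenter's exact setup: the GGUF file name is a placeholder, and the flag values are illustrative.

            ```shell
            # Sketch: run a quantized model with llama.cpp, mostly in system RAM.
            # The model file name is a placeholder. -ngl sets how many layers are
            # offloaded to the GPU (a few fit on an 8 GB RTX 3070; the rest stay
            # on the CPU), -c sets the context size, and -t sets the number of
            # CPU threads (e.g. 8 on a Ryzen 7 3800X).
            ./llama-cli -m mixtral-8x7b-instruct-Q4_K_M.gguf \
                -ngl 8 -c 4096 -t 8 \
                -p "Write a MySQL query listing all customers."
            ```

            Generation is slow this way, but the high parameter count fits because most weights live in RAM rather than VRAM.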

          • GeneralDingus@lemmy.cafe · 1 point · 10 hours ago

            I'm not sure what you mean by ideal. Like, run any model you ever wanted? Probably the latest NVIDIA AI chips.

            But you can get away with a lot less for smaller models. I have a mid-range AMD card from four years ago (I forget the model off the top of my head) and can run 8B-sized text models without issue.

            • ptu@sopuli.xyz · 1 point · 6 hours ago

              I'm sorry, I use ChatGPT for writing MySQL queries and DAX formulas, so that would be the use case.