• GeneralDingus@lemmy.cafe
    10 hours ago

    I’m not sure what you mean by ideal. Like, run any model you ever wanted? Probably the latest Nvidia AI chips.

    But you can get away with a lot less for smaller models. I have the mid-range AMD card from 4 years ago (I forget the model off the top of my head) and can run 8B-sized text models without issue.

    • ptu@sopuli.xyz
      6 hours ago

      I’m sorry, I use ChatGPT for writing MySQL queries and DAX formulas, so that would be the use case.
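
      For context, the kind of query in question would be a simple aggregate like the sketch below. The schema is hypothetical, and Python's built-in sqlite3 stands in for MySQL so the example runs self-contained; the SQL itself is the portable part.

      ```python
      import sqlite3

      # Hypothetical sales table standing in for a MySQL schema;
      # sqlite3 keeps the example self-contained.
      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
      con.executemany("INSERT INTO sales VALUES (?, ?)",
                      [("north", 100.0), ("north", 50.0), ("south", 75.0)])

      # The sort of aggregate query one might ask a model to draft:
      rows = con.execute(
          "SELECT region, SUM(amount) AS total "
          "FROM sales GROUP BY region ORDER BY total DESC"
      ).fetchall()
      print(rows)  # [('north', 150.0), ('south', 75.0)]
      ```

      An 8B local model handles query drafting of this complexity fine, so the mid-range card mentioned above would cover it.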