• Avid Amoeba@lemmy.ca · edited · 3 hours ago

So for peasants running Chairman Xi’s LLMs on local GPUs, we could load the largest model we can fit and have it generate scripts that do the bulk-data processing, instead of feeding the bulk data through the model itself — that way we get more out of limited hardware.
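A minimal sketch of that pattern: the model is asked once for a script, and the script (not the model) then chews through the bulk data. Here the "generated" script is hardcoded as a stand-in for a real response from a local model endpoint (e.g. an OpenAI-compatible server), so the sketch is self-contained and the endpoint details are an assumption.

```python
import subprocess
import sys
import tempfile
import textwrap

# Stand-in for what a local model would return when prompted with
# something like "write a Python script that sums the second column
# of a CSV read from stdin". In practice you'd get this text from
# your local inference server instead of hardcoding it.
generated = textwrap.dedent("""\
    import sys, csv
    total = sum(float(row[1]) for row in csv.reader(sys.stdin))
    print(total)
""")

# 100k rows of bulk data -- far cheaper to pipe through a script
# than to push through the model token by token.
bulk_data = "\n".join(f"row{i},{i}" for i in range(100000))

# Write the generated script to a temp file and run it over the data.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated)
    path = f.name

result = subprocess.run(
    [sys.executable, path],
    input=bulk_data,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```

The model does one small generation job; the 100k rows never touch the GPU. You'd obviously want to review (or sandbox) generated scripts before running them.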