• brucethemoose@lemmy.world
    1 day ago

    It’s possible a member of Blackburn’s staff or a supporter went looking for a libelous hallucination in Google’s models.

    Good to see Ars with some common sense here.

    FYI, Gemma 3 is Google’s open-weights release, meant for local running and finetuning. It’s pretty neat (especially the QAT version), but it’s also old and small; there’s no reason anyone would pick it over Gemini 2.5 in Google’s dev web app, except for esoteric dev testing. It’s not fast, it doesn’t know much, and it’s not great with tooling (like web referencing); its literal purpose is squeezing onto desktop PCs or cheap GPUs.

    …Hence this basically impacts no one.

    The worst risk is that Google may flinch and neuter future Gemma/Gemini releases over this, lest some other MAGA figure scream bloody murder over nothing.

    • filister@lemmy.world
      23 hours ago

      The future is very small models trained to work in a certain domain and able to run on devices.

      Huge foundational models are nice and everything, but they are simply too heavy and expensive to run.

      • brucethemoose@lemmy.world
        14 hours ago

        Yeah. You are preaching to the choir here.

        …Still though, I just meant there’s no reason to use Gemma 3 27B (or 12B? Whatever they used) unaugmented in AI Studio. Even the smallest Flash model seems better optimized for TPUs (hence it runs faster).