As Snowden told us, the video and audio recording capabilities of your devices are NSA spying vectors. OSS/Linux is a safeguard against such capabilities. The massive datacenter investments in the US will be used to sort us all into a patriotic (for Israel)/oligarchist social credit score, and every mega tech company can increase profits through NSA cooperation and is legally obligated to comply with all government orders.

Speech-to-text and speech automation are useful tech, though always-listening devices give state-sponsored terrorists a path, even without targeted NSA attention, to sweeping future social-credit classification of your past life.

Some small open-source models that can be used for speech-to-text: https://modal.com/blog/open-source-stt
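
For a concrete example, here is a minimal local-transcription sketch using faster-whisper, one of the open-source Whisper-family models covered in write-ups like the one linked above. The model size, compute settings, and audio file name are placeholders; once the weights are downloaded, nothing leaves the machine.

```python
# Local speech-to-text with faster-whisper; runs fully offline after the
# model weights are cached. "small" and "meeting.wav" are illustrative.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")
segments, info = model.transcribe("meeting.wav")

print(f"Detected language: {info.language}")
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```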

  • brucethemoose@lemmy.world · 21 days ago

    The iGPU is more powerful than the NPU on these things anyway. The NPU is more for ‘background’ tasks, like Teams audio processing or whatever it’s used for on Windows.

    Yeah, in hindsight, AMD should have assigned (and still should assign) a few engineers to popular projects (and pushed NPU support harder), but GGML support is good these days. It’s gonna be pretty close to RAM-bandwidth-bound for text generation.
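
    Rough sketch of that bandwidth ceiling (numbers are illustrative, not measurements of any particular APU): each generated token has to read roughly the whole set of weights once, so bandwidth divided by model size gives an upper bound on tokens per second.

    ```python
    # Back-of-the-envelope token-rate ceiling for memory-bandwidth-bound decoding.
    weights_gb = 4.0            # e.g. a ~7B model quantized to ~4 bits (GGML/GGUF)
    mem_bandwidth_gbps = 100.0  # rough dual-channel DDR5 / iGPU-class figure
    print(f"~{mem_bandwidth_gbps / weights_gb:.0f} tokens/s upper bound")  # ~25 tok/s
    ```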

    • fonix232@fedia.io · 21 days ago

      Aye, I was actually hoping to use the NPU for TTS/STT while keeping the LLM systems GPU-bound.

      • brucethemoose@lemmy.world · 21 days ago

        It still uses memory bandwidth, unfortunately. There’s no way around that, though NPU TTS would still be neat.

        …Also, generally, STT responses can’t be streamed, so you might as well use the iGPU anyway. TTS can be chunked, I guess, but do the major implementations do that?

        • fonix232@fedia.io · 21 days ago

          Piper does chunking for TTS, and could utilise the NPU with the right drivers.

          And the idea of running them on the NPU is not about memory usage but hardware capacity/parallelism. Although I guess it would also mean I don’t have to constantly load/unload GPU models.
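
          A conceptual sketch of that kind of chunked TTS (synthesize_to_wav is a placeholder for whatever backend, Piper on the NPU or otherwise, actually does the synthesis): split the text into sentences and hand them to the engine one at a time, so playback of the first chunk can start while later chunks are still rendering.

          ```python
          import re

          def chunk_text(text: str) -> list[str]:
              # Naive sentence splitter; a real pipeline would use something smarter.
              return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

          def speak(text: str, synthesize_to_wav) -> None:
              for i, sentence in enumerate(chunk_text(text)):
                  wav_path = f"chunk_{i:03d}.wav"
                  synthesize_to_wav(sentence, wav_path)  # placeholder TTS call
                  print(f"queued {wav_path}")            # hand off to an audio player here
          ```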

            • fonix232@fedia.io · 21 days ago

              I’ve actually been eyeing lemonade, but the lack of Dockerisation is still an issue… guess I’ll just DIY it at some point.

              • brucethemoose@lemmy.world · 21 days ago

                It’s all C++ now, so it doesn’t really need docker! I don’t use docker for any ML stuff, just pip/uv venvs.

                You might consider Arch (dockerless) ROCm soon; it looks like 7.1 is in the staging repo right now.
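
                A quick sanity check for that kind of dockerless setup, assuming a ROCm build of PyTorch has been installed into a plain pip/uv venv (which wheel you pick is up to you):

                ```python
                # On ROCm builds, PyTorch reuses the CUDA API surface and reports
                # the HIP version via torch.version.hip (None on CUDA-only builds).
                import torch

                print("GPU visible:", torch.cuda.is_available())
                print("HIP version:", torch.version.hip)
                if torch.cuda.is_available():
                    print("Device:", torch.cuda.get_device_name(0))
                ```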