• IHave69XiBucks@lemmygrad.ml · 17 points · 1 day ago

    Idk why people don’t read the article before commenting.

    Newelle supports interfacing with the Google Gemini API, the OpenAI API, Groq, and also local large language models (LLMs) or ollama instances for powering this AI assistant.

    So you configure it with your preferred model, which can include a locally run one. And it seems to be its own package, not something built into GNOME itself, so you can easily uninstall it if you won't use it.

    Seems fine to me. I probably won’t be using it, but it’s an interesting idea. Being able to run terminal commands seems risky, though. What if the AI bricks my system? Hopefully they make you confirm every command before anything actually runs.
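    A confirmation gate like that could be pretty simple in principle. Here’s a minimal sketch of the idea in Python (not Newelle’s actual implementation, just an illustration of prompting before every AI-proposed command):

    ```python
    import subprocess

    def run_with_confirmation(command: str) -> None:
        """Show the AI-proposed command and only execute it if the user agrees."""
        answer = input(f"AI wants to run: {command!r}. Proceed? [y/N] ")
        if answer.strip().lower() == "y":
            subprocess.run(command, shell=True, check=False)
        else:
            print("Skipped.")

    # Example: a harmless command still gets gated behind the prompt.
    run_with_confirmation("ls -la ~")
    ```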

    • IHave69XiBucks@lemmygrad.ml · 3 points · 1 day ago

      What I’d like to see, though it’s unclear whether it would be supported, is a LAN model. I’ve run ollama models on a desktop before and interfaced with them remotely over ssh from another computer on the same network. This would be ideal: you keep your own local model on your own network, put it on a powerful but energy-efficient home server, and let it serve every device on your network, rather than having each device run its own local model or rely on a corporate one.
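      For what it’s worth, the ollama Python client already handles this; you just point it at the server’s address instead of localhost. A quick sketch, where the IP and model name are made-up placeholders:

      ```python
      from ollama import Client

      # Point the client at an Ollama server elsewhere on the LAN
      # (hypothetical address; 11434 is Ollama's default port).
      client = Client(host="http://192.168.1.50:11434")

      response = client.chat(
          model="llama3",  # whatever model the server has pulled
          messages=[{"role": "user", "content": "Hello from another machine on the LAN!"}],
      )
      print(response["message"]["content"])
      ```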

      • felsiq@piefed.zip · 2 points · 1 day ago

        Yep, the OpenAI API and/or the ollama one work for this no problem in most projects. You just give it the address and port you want to connect to, and that address can be localhost, a machine on your LAN, a server on another network, whatever.
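        For example, with the openai Python client you can aim base_url at an Ollama server’s OpenAI-compatible endpoint (the address and model name here are placeholders):

        ```python
        from openai import OpenAI

        # Ollama exposes an OpenAI-compatible API under /v1; the api_key
        # is required by the client but ignored by the Ollama server.
        client = OpenAI(base_url="http://192.168.1.50:11434/v1", api_key="ollama")

        reply = client.chat.completions.create(
            model="llama3",
            messages=[{"role": "user", "content": "Hi from across the LAN!"}],
        )
        print(reply.choices[0].message.content)
        ```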