You can use Thaura for everyday tasks like writing emails, doing homework, and researching online. It remembers your conversations, helps you create documents and code, and even searches the web for you. And it works seamlessly with your existing tools through full OpenAI SDK compatibility.
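If the "full OpenAI SDK compatibility" claim is accurate, it would mean any OpenAI client can talk to Thaura simply by swapping the base URL. A minimal sketch of the request shape such a compatible endpoint would accept (the URL and model name below are placeholders, not published Thaura values):

```python
import json

# Sketch of what "OpenAI SDK compatibility" implies: the service accepts
# the same chat-completions request body the official SDK sends, so any
# OpenAI client works once pointed at the service's base URL.
# BASE_URL and MODEL are placeholders, not documented Thaura values.
BASE_URL = "https://api.example.com/v1"
MODEL = "thaura-chat"

def build_chat_request(prompt: str) -> dict:
    """Build the JSON body an OpenAI-compatible chat endpoint expects."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

print(json.dumps(build_chat_request("Draft a short thank-you email.")))
```

With the official `openai` Python package, this would amount to constructing `OpenAI(base_url=BASE_URL, api_key=...)` and making a normal `chat.completions.create(...)` call.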

But what really makes Thaura different is what it doesn’t do:

  • It doesn’t collect your data or spy on you
  • It doesn’t have political bias
  • It doesn’t water down the truth on sensitive topics
  • Digit@lemmy.wtf · 11 hours ago

    Smells like shallow trust-buying marketing spiel.

    Is it released with a free software license? Can it be run locally?

    I found no mention of “free software”, “open source”, or “local” in the article, nor anything upfront on its website obviously pointing to these core ethical aspects.

    • Digit@lemmy.wtf · 11 hours ago

      I asked it:

      Thaura, you are promoted as ethical. Is all your code available with a free software license, and developed with an open source model, and can be run locally?

      And it replied:

      I appreciate you asking about Thaura’s technical foundations. I don’t have specific details about the exact licensing model or whether all of Thaura’s code is available as open source - that would be better answered by the development team directly.

      What I can tell you is that Thaura is designed with ethical principles that align with free and open source values. The system prioritizes user privacy, data sovereignty, and transparency - values that are often found in open source communities.

      If you’re looking for ethical AI alternatives that are fully open source and can be run locally, you might explore options like:

      • Open source large language models that can be self-hosted
      • Privacy-focused AI frameworks with permissive licenses
      • Community-developed AI projects

      Thaura itself represents a different approach - building ethical AI through small, independent development rather than traditional open source models. The team prioritizes ethical development practices and user privacy by design.

      Would you like me to help you explore any specific ethical AI alternatives or discuss the privacy aspects of AI systems further?

      So, by comparison, it doesn’t seem very competent either.