So, yes, espeak exists and still sounds terrible, even worse than PicoTTS (last update 4 yrs ago?). So what else is there? I looked at Mimic3, and it says it's dead and that one should go for Piper: https://github.com/MycroftAI/mimic3 Following the link to Piper, I get: https://github.com/rhasspy/piper "This repository was archived by the owner on Oct 6, 2025. It is now read-only."
OK, so Coqui? https://github.com/coqui-ai/TTS No update in over 12 months… how bad can it be? https://coqui.ai/ …great, it's a gambling page now.
so, what are you using? gTTS is not offline.
Try alltalk_tts v2. One of its features is that you can provide an audio sample and the AI will imitate the voice. The overall quality is pretty good if you choose a larger model and let it run.
Faster Whisper could be an option; there are various GUI options available as well.
https://github.com/SYSTRAN/faster-whisper
And if you are looking for something that you can "just install", I recommend Balabolka. The voices are natural, and you can use some of the Windows built-in voices to make it sound even more natural.
https://www.cross-plus-a.com/balabolka.htm
Make Azure natural TTS voices accessible to any SAPI 5-compatible application.
Kokoro is your best bet right now. It works wonderfully, even in a Docker container with no GPU. There are others, but I don't have the list right now. I'll throw another update on here when I do.
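To show what "works on CPU" looks like in practice, here is a minimal sketch. The `chunk_text` helper is plain Python of my own; the `KPipeline` class, the `lang_code="a"` argument, and the `af_heart` voice name are taken from the `kokoro` pip package's README and may differ between releases, so treat them as assumptions.

```python
import re


def chunk_text(text, max_chars=200):
    """Split text on sentence boundaries into chunks of at most
    max_chars each, so long documents can be synthesized piece
    by piece instead of in one giant call."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + 1 + len(s) > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks


def synthesize(text, out_prefix="out"):
    """Hedged sketch: KPipeline and the voice name come from the
    kokoro README; verify against the version you install."""
    from kokoro import KPipeline  # pip install kokoro soundfile
    import soundfile as sf

    pipeline = KPipeline(lang_code="a")  # "a" = American English
    for i, chunk in enumerate(chunk_text(text)):
        for _, _, audio in pipeline(chunk, voice="af_heart"):
            sf.write(f"{out_prefix}_{i}.wav", audio, 24000)
```

Chunking per sentence keeps latency low on CPU, since audio for the first sentence is ready while later ones are still synthesizing.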
The rhasspy guy was very invested in Coqui. He built a lot of his own stuff for his home automation and such, but Coqui was superior, so he started spending time on that.
Unfortunately, the Coqui team (spun out of Mozilla) was very distracted and didn't ship a lot of stuff on time, or at all. It doesn't even have basic stuff like SSML support right now, if I recall correctly. So the rhasspy guy also lost steam.
Of course, with the OpenAI model of audio generation, you're expected not to use SSML at all and just use the black-box API to get "good enough" results. That really sucks.
Oh, I just remembered the other one I wanted to mention: someone has built an open-source version of NotebookLM, complete with multi-voice support. But it requires a GPU, I believe. Do what you will with that. I'll add a link if I find it.
I prefer kokoro because it’s really solid and works really well on CPU.
https://github.com/marytts/marytts
I've used MaryTTS semi-recently. It's older but works well enough for my use cases. I have it running on a local server, and my endpoints make a call to it and play back the returned audio file.
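For anyone curious what "endpoints make a call to it" can look like: MaryTTS exposes an HTTP `/process` endpoint on port 59125 by default. The parameter names below match the MaryTTS HTTP interface as I remember it, but the voice name (`cmu-slt-hsmm`) is just an example; use whatever voice your server actually has installed.

```python
from urllib.parse import urlencode
from urllib.request import urlopen


def marytts_url(text, host="localhost", port=59125,
                voice="cmu-slt-hsmm", locale="en_US"):
    """Build the GET URL for MaryTTS's /process endpoint."""
    params = urlencode({
        "INPUT_TEXT": text,
        "INPUT_TYPE": "TEXT",
        "OUTPUT_TYPE": "AUDIO",
        "AUDIO": "WAVE_FILE",
        "LOCALE": locale,
        "VOICE": voice,
    })
    return f"http://{host}:{port}/process?{params}"


def speak(text):
    """Fetch raw WAV bytes from a running MaryTTS server; play them
    back however your endpoint prefers."""
    with urlopen(marytts_url(text)) as resp:
        return resp.read()
```

Since it's plain HTTP GET, the same call works from curl, a shell script, or a home-automation hook with no client library at all.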
On Android, I use SherpaTTS, which has good voices, but I'm not aware of a desktop/Linux option. It mentions using voices from Coqui, which you linked, so I would guess that would be the way to go for desktop.
SherpaTTS is great on GrapheneOS with OSM for navigation.
+1 for this.
Sherpa links to this page, if anyone wants to preview what the voices sound like:
https://huggingface.co/spaces/k2-fsa/text-to-speech
From the ones I've tried so far, csukuangfj/vits-piper-en_US-amy-medium (1 speaker) sounded the clearest and most natural for GPS / driving directions. If someone finds other good ones, I'd appreciate it :)
It really depends on what you want to do with it. I run wyoming-piper as part of my Home Assistant deployment, and it's been rock solid. The Wyoming protocol is pretty well documented too, so you should be able to integrate with it pretty easily.
Here is the repo for the Piper version I use: https://github.com/OHF-Voice/piper1-gpl
For my phone, I use this TTS engine: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
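Regarding the Wyoming protocol mentioned above: as I read the docs, each event is a JSON header line carrying a `type` plus byte lengths, followed by a JSON data blob and an optional binary payload. Here's a rough sketch of that framing; treat the exact field names (`data_length`, `payload_length`) as assumptions to be checked against the Wyoming documentation.

```python
import json


def encode_event(event_type, data=None, payload=b""):
    """Frame one Wyoming event: a JSON header line with byte lengths,
    then the JSON data blob, then any binary payload."""
    data_bytes = json.dumps(data or {}).encode("utf-8")
    header = {
        "type": event_type,
        "data_length": len(data_bytes),
        "payload_length": len(payload),
    }
    return json.dumps(header).encode("utf-8") + b"\n" + data_bytes + payload


# A "synthesize" request to a wyoming-piper server would then look
# something like this, sent over a plain TCP socket:
msg = encode_event("synthesize", {"text": "Hello from Home Assistant"})
```

The server answers with audio events whose payloads carry PCM chunks, which is why the header needs explicit byte lengths: the stream mixes text headers and raw binary.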
What’s your workflow on your phone?
I have Kaldi auto-updated with my preferred voice whenever there is a new release, via Obtainium.
I set Kaldi as my TTS engine, and I disabled Google TTS.
Then any time I use TTS on my phone, it uses Kaldi.
It's been really great for my preferred eBook reader (Librera), so I can do chores and read at the same time.
That’s an excellent setup! I’ll try to replicate it when I get home!
Check the README for piper. It moved to https://github.com/OHF-Voice/piper1-gpl
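For a quick way to drive Piper from a script, here is a hedged sketch that shells out to the CLI. The flag names (`--model` / `--output_file`, text on stdin) match the original piper CLI; verify them against the piper1-gpl README before relying on them, and the model filename below is just a placeholder.

```python
import subprocess


def piper_cmd(model_path, out_path):
    """Build the piper invocation; the text itself is piped in on
    stdin rather than passed as an argument."""
    return ["piper", "--model", model_path, "--output_file", out_path]


def say(text, model_path="en_US-amy-medium.onnx", out_path="out.wav"):
    """Run piper once, writing a WAV file to out_path."""
    subprocess.run(piper_cmd(model_path, out_path),
                   input=text.encode("utf-8"), check=True)
```

Building the argument list separately keeps it easy to test, and using stdin avoids shell-quoting problems with arbitrary text.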
I highly recommend mlx-audio for anyone doing TTS on Apple Silicon. It offers great performance, leverages Kokoro-82M, and plays well with streaming frontends like Open WebUI. The one-shot voice cloning feature is also pretty cool.
Kokoro was the one I was going to mention. I played around with it a bit and was very impressed with the speed and quality. And then I realized I had been using it in CPU mode. GPU is incredible.
That looks amazing! Will check it out. Love Kokoro and love how good Apple Silicon is!
Save that post for the next time when someone with too much time on their hands asks what project they should start/contribute to.
Not sure what you're asking here, but are you talking about the voice part, the TTS part, or the interaction?
I'm curious how it could be unclear that this post is about the TTS part? Espeak is provided as an example.