- cross-posted to:
- privacy@programming.dev
- In your Gmail app, go to Settings.
- Select your Gmail address.
- Clear the Smart features checkbox.
- Go to Google Workspace smart features.
- Clear the checkboxes for:
  - Smart features in Google Workspace
  - Smart features in other Google products
- If you have more Gmail accounts, repeat these steps for each one.
- Turning off Gemini in Gmail also disables basic, long-standing features like spellchecking, which predate AI assistants. This design choice discourages opting out and shows how valuable your AI-processed data is to Google.
This has finally gotten me to take steps to deGoogle my email, Fastmail trial underway.


Yeah, but Lumo is basically just a side gimmick that isn’t integrated with the rest of their suite.
It’s basically the equivalent of a self-hosted small LLM that you don’t have to fuck around with setting up.
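For reference, here’s roughly what the self-hosted version looks like, as a minimal sketch assuming an Ollama server is already running locally and the model below has been pulled (the model name is illustrative):

```python
# Minimal sketch: querying a locally hosted small LLM through
# Ollama's REST API. Assumes `ollama serve` is running on localhost
# and the model has been pulled; model name is illustrative.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "mistral:7b") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Summarize what a VPN does in one sentence."))
```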
There’s nothing inherently wrong with LLMs as a tool. The problem is the misuse, misapplication, and over-scaling of them.
If they were all just one-off tools like Lumo, basically slightly more advanced digital assistants, they would be fine. LLMs are fantastic for quickly searching shit with crap discoverability, for example. They’re routinely more effective at finding random useful results in, say, Reddit or Stack Overflow, or even some weird forum on the 12th page of Google.
I mean, I get that, but why is Proton offering one? What value do I get from Proton’s LLM that I wouldn’t get from any other company’s LLM? It’s not privacy, because it’s not end-to-end encrypted. It’s not features, because it’s just a fine-tuned version of the free Mistral model (from what I can tell). It’s not integration (thank goodness), because they don’t have access to your data to integrate it with (according to their privacy policy).
I kind of just hate the idea that every tech company is offering an LLM service now. Proton is an email and VPN company. Those things make sense. The calendar and drive stuff too. They have actual selling points that differentiate them from other offerings. But investing engineering time and talent into yet another LLM, especially one that’s worse than the competition, just seems like a waste to me, particularly since it doesn’t fit into their other product offerings.
It truly seems like they just wanted to have something AI-related so they wouldn’t be “left behind” in case the hype wasn’t a bubble. I don’t like it when companies do that. It makes me think they don’t really have a clear direction.
Edit: it looks like they use several models, not just one:
- https://proton.me/support/lumo-privacy
I have a laptop with 48GB of VRAM (a Framework with integrated Radeon graphics) that can run all of those models locally, so Proton offers even less value for someone in my position.
Ah; as I recall, it’s because they polled users and got an overwhelming “yes please”, based on Proton’s privacy stance.
Given that Proton is hosted in the EU, they’re likely quite serious about GDPR and zero data retention.
Lumo is interesting. Architecturally, I mean, as an LLM enjoyer. I played around with it a bit and stole a few ideas from them when I jury-rigged my own system. Having said that, you could get a ton more with $10 on OpenRouter. Hell, the free models on there are better than Lumo, and you can choose to only use privacy-respecting providers.
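As a sketch of what that provider filtering looks like: this assumes OpenRouter’s OpenAI-compatible chat endpoint and its “data_collection” provider-routing option, both per their docs at the time of writing, and the model slug is illustrative:

```python
# Minimal sketch: calling OpenRouter with a provider preference that
# excludes providers that retain or train on prompts. The
# "data_collection" routing option and model slug reflect OpenRouter's
# docs at the time of writing; treat both as assumptions to verify.
import json
import os
import urllib.request

payload = {
    "model": "mistralai/mistral-7b-instruct",  # illustrative model slug
    "messages": [
        {"role": "user", "content": "Explain GDPR in one paragraph."}
    ],
    "provider": {"data_collection": "deny"},  # skip data-collecting providers
}
req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```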
I played around with it a lot yesterday, giving it documentation and asking it to write some code based on the API documentation. Just like every single other LLM I’ve ever tried, it just bungled the entire thing. It made up a bunch of functions and syntax that just doesn’t exist. After I told it the code was wrong and gave it the right way to do it, it told me that I got it wrong and converted it back to the incorrect syntax. LLMs are interesting toys, but shouldn’t be used for real work.
???
The AMD GPU in some Frameworks has 8GB of VRAM.
It’s integrated graphics, so it can use up to half of the system RAM. I have 96GB of system RAM, so 48GB of VRAM. I bought it last year before the insane price hikes, when it was within reach for normal people like me.
I’ve tried it and it works. I can load huge models, bigger than 48GB even. The ones bigger than 48GB run really slowly, though, like one token per second. But the ones that fit in the 48GB are pretty decent: around 6 tokens per second for the big models, if I’m remembering correctly. Obviously, something like an 8B-parameter model would be way faster, but I don’t really have a use for those kinds of models.
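Those numbers line up with a rough memory-bandwidth estimate: on an iGPU, decoding is roughly bandwidth-bound, since every generated token streams most of the weights through RAM. A back-of-envelope sketch, assuming ~90 GB/s effective bandwidth for dual-channel LPDDR5 (an assumption, not a measured figure):

```python
# Back-of-envelope sketch of why token rate drops with model size:
# each decoded token streams (roughly) the whole model's weights
# through memory, so decode speed is ~ bandwidth / model size.
# The 90 GB/s figure is an assumed value for dual-channel LPDDR5;
# real-world throughput is lower due to overheads.
MEMORY_BANDWIDTH_GBPS = 90  # assumed effective bandwidth, GB/s

for model_size_gb in (8, 15, 48):
    tokens_per_sec = MEMORY_BANDWIDTH_GBPS / model_size_gb
    print(f"{model_size_gb:>3} GB model: ~{tokens_per_sec:.1f} tokens/s upper bound")
```

That gives roughly 6 tokens/s for a ~15GB model and under 2 tokens/s for one filling the whole 48GB, which matches the speeds described above.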