

Why would they ask consumers what they want when they can tell consumers what they want? What are you gonna do, move to Linux?


It’s integrated graphics, so it uses up to half of the system RAM. I have 96GB of system RAM, so 48GB of VRAM. I bought it last year before the insane price hikes, when it was within reach for normal people like me.
I’ve tried it and it works. I can load huge models, bigger than 48GB even. The ones bigger than 48GB run really slow, though. Like one token per second. But the ones that can fit in the 48GB are pretty decent, like 6 tokens per second for the big models, if I’m remembering correctly. Obviously, something like an 8B parameter model would be way faster, but I don’t really have a use for those kinds of models.


I wouldn’t trust Microsoft with pictures of my cat, and this is a perfect example of why.


I mean, I get that, but why is Proton offering one? What value do I get from Proton’s LLM that I wouldn’t get from any other company’s LLM? It’s not privacy, because it’s not end-to-end encrypted. It’s not features, because it’s just a fine-tuned version of the free Mistral model (from what I can tell). It’s not integration (thank goodness), because they don’t have access to your data to integrate it with (according to their privacy policy).
I kind of just hate the idea that every tech company is offering an LLM service now. Proton is an email and VPN company. Those things make sense. The calendar and drive stuff too. They have actual selling points that differentiate them from other offerings. But investing engineering time and talent into yet another LLM, especially one that’s worse than the competition, just seems like a waste to me, especially since it’s not something that fits into their other product offerings.
It truly seems like they just wanted to have something AI related so they wouldn’t be “left behind” in case the hype wasn’t a bubble. I don’t like it when companies do that. It makes me think they don’t really have a clear direction.
Edit: it looks like they use several models, not just one:
Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, GPT-OSS 120B, Qwen, Ernie 4.5 VL 28B, Apertus, and Kimi K2.
- https://proton.me/support/lumo-privacy
I have a laptop with 48GB of VRAM (a Framework with integrated Radeon graphics) that can run all of those models locally, so Proton offers even less value for someone in my position.


Oh, I completely agree. Using Gmail is the problem here, and no amount of settings fiddling will solve that.


Oh ok, so Van Gogh only has one ear, Beethoven is deaf, why does Scarlet Witch only have audio? Was she blinded?
Edit: another user explained it: she has no Vision.


Alright, somebody’s gonna have to explain it to me, cause I don’t get it.


Yeah, I really should sit down and create a Matrix server for it.


I don’t think they meant Gmail used to be private, but email in general. Yes, Gmail has never been private. But that’s why it’s free.


I meant I run it.
But to answer your question, it leans heavily on subaddressing. When you give your email address to a company, you add a label to the address just for that company, and then all of their emails go in that label. You can easily toggle things like notifications, mark as read, and show in aggbox (our version of the inbox, since there isn’t really an inbox when everything is sorted already). Then if that company leaks your email, you can block that label.
You can also set up screening labels that are meant for real people, then any new senders get screened to make sure they’re human before you get their mail.
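For anyone unfamiliar with subaddressing, the basic idea looks roughly like this. This is just a sketch assuming the common `user+label@domain` convention (RFC 5233 style); Port87’s actual separator, routing, and storage may differ.

```javascript
// Split a subaddressed email into its parts, assuming a "+" separator.
// This is a generic illustration, not Port87's actual implementation.
function parseSubaddress(address) {
  const [local, domain] = address.split("@");
  const sep = local.indexOf("+");
  if (sep === -1) return { user: local, label: null, domain };
  return { user: local.slice(0, sep), label: local.slice(sep + 1), domain };
}

console.log(parseSubaddress("alice+acme@example.com"));
// → { user: 'alice', label: 'acme', domain: 'example.com' }
```

The label part is what a mail service can key filtering, sorting, and blocking on, since each company you hand an address to gets its own label.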


I’d like to invite you to try https://port87.com/
It’s proudly AI free.


Proton has their own AI bullshit: Lumo.
At least it’s not rummaging around in your email, though.
And just so you know, unlike their email when you’re writing to another Proton user, it is not end-to-end encrypted: https://lumo.proton.me/legal/privacy
The only way to have actually private AI is to run it on your own hardware.


If that’s all you use it for, then that’s all that will be in there. Email is as useful as you make it.


I’m rewriting how my ORM, Nymph.js, handles access controls. Right now, it stores the access control vars (user, group, permissions) in the same table as all of the other data, which makes full-text search slow because it has to join the tables multiple times. I’m moving those access controls into the entity tables, where all the joins start from, so a simple index can handle that check before it even joins the FTS tokens table.
The hard part is going to be migrating existing data in my email service that uses Nymph. It’ll be multiple steps: create the new columns, make sure new entities add that data to both the new columns and the old way, migrate all entities to have the data in both, update the queries to use the new columns and stop storing data the old way, then delete all the old data. It’ll be the opposite of fun, but hopefully once I’m done it’ll be way faster.
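The steps above are basically the expand/contract migration pattern, and could be sketched roughly like this. All of the table, column, and helper names here (entities, data, db.exec) are my own placeholders, not Nymph’s actual schema or API.

```javascript
// Hypothetical sketch of a multi-step expand/contract migration:
// add new columns, dual-write, backfill, switch reads, then drop
// the old storage. Names are illustrative only.
const steps = [
  // 1. Expand: add ACL columns on the entity table the joins start
  //    from, with an index so access checks run before the FTS join.
  `ALTER TABLE entities ADD COLUMN acl_user TEXT`,
  `ALTER TABLE entities ADD COLUMN acl_group TEXT`,
  `ALTER TABLE entities ADD COLUMN acl_permissions INTEGER`,
  `CREATE INDEX idx_entities_acl ON entities (acl_user, acl_group)`,
  // 2. (In application code) dual-write: new entities store ACL data
  //    in both the new columns and the old data table.
  // 3. Backfill existing entities from the old storage.
  `UPDATE entities SET acl_user = (
      SELECT value FROM data
      WHERE data.entity_id = entities.id AND data.name = 'user')
    WHERE acl_user IS NULL`,
  // 4. (In application code) switch queries to the new columns and
  //    stop writing ACL data the old way.
  // 5. Contract: delete the old ACL rows once nothing reads them.
  `DELETE FROM data WHERE name IN ('user', 'group', 'permissions')`,
];

// db is any handle with an exec(sql) method; the application deploys
// happen between the numbered phases, not all in one transaction.
async function migrate(db) {
  for (const sql of steps) await db.exec(sql);
}
```

The point of dual-writing before backfilling is that the service never has a window where some entities only have ACL data in a place the queries aren’t looking.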


Hello: The Mister Chef Collective


45s were commonly used in jukeboxes, where the big spindle hole was useful.


I played around with it a lot yesterday, giving it API documentation and asking it to write some code based on it. Just like every single other LLM I’ve ever tried, it bungled the entire thing. It made up a bunch of functions and syntax that just don’t exist. After I told it the code was wrong and gave it the right way to do it, it told me that I got it wrong and converted it back to the incorrect syntax. LLMs are interesting toys, but they shouldn’t be used for real work.