“Telegram is not a private messenger. There’s nothing private about it. It’s the opposite. It’s a cloud messenger where every message you’ve ever sent or received is in plain text in a database that Telegram the organization controls and has access to.”
“It’s like a Russian oligarch starting an unencrypted version of WhatsApp, a pixel for pixel clone of WhatsApp. That should be kind of a difficult brand to operate. Somehow, they’ve done a really amazing job of convincing the whole world that this is an encrypted messaging app and that the founder is some kind of Russian dissident, even though he goes there once a month, the whole team lives in Russia, and their families are there.”
" What happened in France is they just chose not to respond to the subpoena. So that’s in violation of the law. And, he gets arrested in France, right? And everyone’s like, oh, France. But I think the key point is they have the data, like they can respond to the subpoenas where as Signal, for instance, doesn’t have access to the data and couldn’t respond to that same request. To me it’s very obvious that Russia would’ve had a much less polite version of that conversation with Pavel Durov and the telegram team before this moment"


but in that chain what you really care about is the link from your phone number, which identifies you in the real world, to your messages, right?
yes, and the only thing you need to route a message is the receiver, not the sender
the sender is only used to validate the sender’s identity, and for rate limiting
sealed sender solves both of these problems whilst not including any sender information in messages… phone number or user id doesn’t matter: those things are not sent along with any of your messages, and that’s verifiable
your phone number and user ID are only known by signal when you retrieve a temporary token (this solves rate limiting: the retrieval of the token is the rate limit, and each token has a limited number of messages it can send)… the client then derives a different key from it, which can still be verified as having been signed by the server, but does not contain any information that can be tied back to your phone number or user ID
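to sketch what i mean, here’s a toy model of that token flow (all names hypothetical and heavily simplified: the real system uses blinded credentials, not this HMAC stand-in, so treat this as an illustration of the shape of the design, not the actual protocol):

```python
import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(32)  # known only to the server

# --- server side of the toy model ---
token_uses = {}  # nonce -> remaining sends (note: no phone number stored here)

def issue_token(phone_number, max_sends=50):
    """The rate limit lives here: fetching a token identifies you once."""
    nonce = os.urandom(16)
    tag = hmac.new(SERVER_SECRET, nonce, hashlib.sha256).digest()
    token_uses[nonce] = max_sends
    # an honest server discards phone_number after this point;
    # whether it actually does is exactly what you can't verify from outside
    return nonce + tag

def accept_sealed_message(token, recipient_id, ciphertext, mailbox):
    """Validate the token and store the message with no sender identifier."""
    nonce, tag = token[:16], token[16:]
    expected = hmac.new(SERVER_SECRET, nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected) or token_uses.get(nonce, 0) <= 0:
        return False  # forged or exhausted token
    token_uses[nonce] -= 1
    # the stored record carries only the recipient and the ciphertext
    mailbox.setdefault(recipient_id, []).append(ciphertext)
    return True

# --- usage ---
mailbox = {}
tok = issue_token("+15551230000")
assert accept_sealed_message(tok, "recipient-42", b"<ciphertext>", mailbox)
```

the point of the sketch is that the message record itself contains nothing that names the sender; the only sender-identifying event is the token fetch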
It doesn’t matter, what matters is that the server has a unique id for you and the person you’re talking to, and that id can then be mapped to the phone number that was initially collected. That’s all the server needs to establish the real identity of the people you communicate with.
It’s not a question of what the server needs minimally, it’s a question of what the server could be doing if it was set up maliciously. Sealed sender does not solve this problem in any way, shape, or form.
the key point missing in the middle here though is that the IDs aren’t what matters: it’s having the ability to link those IDs
and i agree, being able to link you->your phone number->your user ID->message->recipient ID->recipient phone number->recipient as an individual is a problem
but again, sealed sender breaks that chain: there is cryptographically no way to link your user ID to the message you’ve sent
it’s literally impossible for them to build a social graph from your messages
again, i agree… kinda… i put that there to show that you don’t actually need the sender to achieve the goal of delivering the message. it was part of the explanation of why sealed sender works, rather than a point to be made by itself
would you be able to explain how it doesn’t?
sealed sender divorces your user ID from any message you send… your messages cannot be tied back to your user ID or phone number without having decrypted the message content
so because you can verify your client’s behaviour (that it derives the keys correctly, and that the message payload contains nothing unexpected outside the ciphertext), even a malicious server (without IP information) doesn’t have the information necessary to infer the sender from the sent message
Again, sealed sender has nothing to do with it. If I run a server, I have access to the raw requests coming in. I can do whatever I want with them even outside Signal protocol. You can’t verify that my server is set up to work the way I say it is. You get that right?
You’re confusing what Signal team says their server does, and the open source server implementation they released with what’s actually running. The latter, you have no idea about.
The core issue is trusting the physical infrastructure rather than just the cryptography. The protocol design for sealed sender assumes the server behaves exactly as the published open source code dictates. A malicious operator can simply run modified server software that entirely ignores those privacy protections. Even if the cryptographic payload lacks a sender ID, the server still receives the raw network request and all the metadata attached to it. Your client has to talk to the server and identify itself before any messages are even sent.
When your device connects to send that sealed message, it inevitably reveals your IP address and connection timing to the server. The server also knows your IP address from when you initially registered your phone number or when you requested those temporary rate limiting tokens. By logging the raw incoming requests at the network level, a malicious server can easily correlate the IP address sending the sealed message with the IP address tied to the phone number.
Since the server must know the destination to route the message, it just links your incoming IP address to the recipient ID. Over time this builds a complete social graph of who is talking to whom. The cryptographic token merely proves you are allowed to send a message without explicitly stating who you are inside the payload. It does absolutely nothing to hide the metadata of the network connection itself from the machine receiving the data.
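To make that concrete, here is a toy model of the correlation (every name, number, and IP address here is hypothetical; this just shows the bookkeeping a malicious operator could do at the network layer, regardless of what the published server code says):

```python
from collections import defaultdict

# logs a malicious operator can keep outside the Signal protocol entirely
registration_log = {}  # ip -> phone number, captured at signup / token fetch
send_log = []          # (sender_ip, recipient_id), captured per sealed send

def log_registration(ip, phone):
    """Record the IP seen when a phone number identifies itself."""
    registration_log[ip] = phone

def log_sealed_send(ip, recipient_id):
    """Record the IP and routing destination of a 'sealed' message."""
    send_log.append((ip, recipient_id))

def build_social_graph(id_to_phone):
    """Correlate send-time IPs with registration-time IPs."""
    graph = defaultdict(set)
    for ip, recipient_id in send_log:
        sender = registration_log.get(ip)
        recipient = id_to_phone.get(recipient_id)
        if sender and recipient:
            graph[sender].add(recipient)
    return graph

# one registration and one sealed send from the same address suffice
log_registration("203.0.113.7", "+15550001")
log_sealed_send("203.0.113.7", "user-99")
graph = build_social_graph({"user-99": "+15550002"})
```

Note that the sealed payload is never decrypted and never consulted: the graph is reconstructed purely from connection metadata.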
i do, of course… and the information you have in that raw request is limited to the information that’s in the request (including metadata like IP address and other header information in the packets that make it up)
i’m really not… i’m saying it shouldn’t matter what their server is doing, because that’s the only stance you can actually verify: the client is the only thing you can trust in the chain, so you should always assume the server is compromised, maliciously or not
trusting signal doesn’t even have anything to do with it; they could be compromised and not know it
these are things we both agree on
i agree with that too. what information contained within that request do you take issue with?
as i said earlier, your IP address is problematic, but that can be said about any service: you have no way to validate any server software, open source or not… so you have to take measures to protect that information no matter the service you’re connecting to
this is pretty trivially achieved with a trustworthy VPN these days (again, this is unverifiable, but you have to draw the line somewhere: can we agree that IP address privacy is within the realm of personal responsibility since that applies to any service?)
agree
also agree
okay, i can see where your problem is
i can agree that’s definitely a vector they can use to build a social graph, and then tie that social graph back to real identities, and also that’s far from what you want in a private platform
id say that it comes down to trade-offs… signal (says) they require the phone number in order to combat spam, which i can see as a real issue (i’d be happier if they didn’t store the phone number, or at least didn’t link it to your account, but that comes with a whole load of other issues)
services need to have some way of combatting spam, which either boils down to “expensive accounts” so that blocking is a viable option, or spam filters which can be abused by corporate entities like they have with email
if you really care about privacy with signal, you can get a VPN that allows you to frequently rotate your IP… most users won’t do that, so i can agree it’s a sub-optimal solution
but i do think it’s a reasonable trade-off
Sure, you can absolutely decide that it’s a reasonable trade-off, but your original claim was that sealed sender addressed the problem. Sounds like you’re now acknowledging that’s not actually the case…
i think it’s a very clever partial solution, but when combined with signal’s broader ethos (making privacy simple so that more people use privacy-centric options), that means people aren’t going to change IPs between temp token and message to solve the last part of the puzzle: thanks for explaining your line of reasoning
i also think that there’s a way forward where messages are sent or tokens are retrieved via a 3rd party proxy to hide IPs (i thought i read something about signal contracting a 3rd party to provide some of those services but i can’t find the reference to that, and also it’s not verifiable so limited in usefulness), which is a complete solution to the problem, as long as said proxies aren’t controlled by signal (thinking about it now, you could also simply route signal traffic through a proxy so many people share an IP, and they do provide proxy functionality separate to the system proxy configuration)
i still think that signal has made a pretty reasonable set of trade-offs in order to balance privacy and usability in order to have a large impact on global privacy
*edit: actually, adding to the proxy point, it turns out the EFF runs a public proxy
and there’s a big list of public proxies available (not a big list, to avoid censorship, but still a good resource), and they also have support for tapping a link to configure the proxy, so very quick and easy
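a quick toy illustration of why a shared proxy IP breaks the correlation (hypothetical addresses and numbers, same simplified model as before):

```python
# server-side view when many users reach it through one shared proxy:
# the ip -> phone mapping becomes one-to-many, so a sealed send from
# that ip could be any of the users behind it
PROXY_IP = "198.51.100.1"
users_behind_proxy = ["+15550001", "+15550002", "+15550003"]

seen = {}  # ip -> set of phone numbers observed at token fetch
for phone in users_behind_proxy:
    seen.setdefault(PROXY_IP, set()).add(phone)

# the correlation attack now yields a candidate set, not an individual,
# and the anonymity grows with the number of users sharing the proxy
candidates = seen[PROXY_IP]
assert len(candidates) == len(users_behind_proxy)
```

obviously this only holds if the proxy operator isn’t colluding with the server, which is the “as long as said proxies aren’t controlled by signal” caveat above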
It’s not really a partial solution, it’s just sophistry to obscure the problem. The fact that I’ve had this same discussion with many people now, and that it always takes effort to explain why sealed sender doesn’t actually address the problem, leads me to believe that the actual problem it’s solving is not making the platform more secure. The complete and obvious solution to the problem is to not collect personally identifying information in the first place.
You have a very charitable view of Signal making the base assumption that people running it are good actors. Yet, given that it has direct ties to the US government, that it’s operated in the US on a central server, and the team won’t even release the app outside proprietary platforms, that base assumption does not seem well founded to me. I do not trust the people operating this service, and I think it’s a very dangerous assumption to think that they have your best interests in mind.