but it’s not a 1000000x return!
i took the phrase
You don’t need to understand why they struggle, just accept that they do.
to mean that you shouldn’t assume someone is lying. they just might have different circumstance or needs. that doesn’t invalidate their experience, just that you’re solving different problems (which may not have been well communicated, and also may not even be technical problems).
if you’re trying to solve their problems, then sure, that’s a discussion… but 99% of tech conversations on the internet like this are people berating others for “not understanding” the “simple” way it’s done, because it works fine for them
slight disagree: proud version is actually when you become so disillusioned with your old code that you throw it all out and start again
sell at $100… got it! thanks future me! will do!


you can’t, and shouldn’t… lemmy never claimed to be a private service, nor does it have the architecture to enable it to be one. lemmy instances are run by arbitrary people on the internet, and some of them do run forked versions of the codebase (eg blahaj)… we have no way of verifying what’s running on the server
but interaction on lemmy doesn’t require trust. i don’t think anyone is expecting lemmy to be private


It’s not really a partial solution
disagree, and that’s fine… STEM is full of partial solutions that become complete solutions as additional pieces are added (and as i said, imo the proxy makes this one a complete solution)
The complete and obvious solution to the problem is to not collect personally identifying information in the first place.
but that creates other problems… for example, with spam and usability
it’s all trade-offs, and signal has done a lot of global privacy when compared to alternatives exactly because of the compromises they’ve made
You have a very charitable view of Signal making the base assumption that people running it are good actors
i don’t consider it charity… they’re making a lot of right moves, and are explaining their compromises. they’ve given me no reason not to trust them, and plenty of reasons to say they’re a good compromise that will have the greatest impact to global privacy
are there better privacy solutions? sure… will they ever take off? personally, i doubt it… not letting perfect be the enemy of better (or good enough) is important: a solution that keeps people who don’t care about privacy relatively safe also matters for the privacy of people who do care, because it allows everyone to blend in with the crowd
Yet, given that it has direct ties to the US government, that it’s operated in the US on a central server, and the team won’t even release the app outside proprietary platforms
imo the fact that it’s hosted in the US is pretty irrelevant… as you’ve pointed out, it shouldn’t be a matter of trust: validation of the client is the only thing you can rely on, so even if the NSA hosted the servers you should still, theoretically, be able to “trust” the platform (outside of the fact that you can never verify their encryption doesn’t have a secret back door or something)
I do not trust the people operating this service, and I think it’s a very dangerous assumption to think that they have your best interests in mind.
i trust them as much as i trust anyone running any other privacy service


i think it’s a very clever partial solution, but when combined with signal’s other ethos (making privacy simple so that more people use privacy-centric options), it means people aren’t going to change IPs between fetching the temp token and sending the message to solve the last part of the puzzle: thanks for explaining your line of reasoning
i also think there’s a way forward where messages are sent, or tokens retrieved, via a 3rd party proxy to hide IPs (i thought i read something about signal contracting a 3rd party to provide some of those services, but i can’t find the reference, and it’s also not verifiable so limited in usefulness). that would be a complete solution to the problem, as long as said proxies aren’t controlled by signal. thinking about it now, you could also simply route signal traffic through a proxy so that many people share an IP, and signal does provide proxy functionality separate from the system proxy configuration
i still think that signal has made a pretty reasonable set of trade-offs in order to balance privacy and usability in order to have a large impact on global privacy
*edit: actually, adding to the proxy point, turns out EFF run a public proxy
and there’s a big list of public proxies available (not a big list to avoid censorship, but still a good resource)
and they also have support for tapping a link to configure the proxy, so very quick and easy
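and the reason a shared proxy closes the IP gap is simple - a toy sketch in python (all names and addresses here are made up for illustration, this is not signal code):

```python
# Toy sketch: when many users share one proxy, the IP the server sees
# collapses to a single value and can no longer distinguish senders.
PROXY_IP = "198.51.100.7"  # made-up proxy address

users = ["alice", "bob", "carol"]

# without a proxy, each request carries a distinct, linkable IP
direct = {user: f"203.0.113.{i}" for i, user in enumerate(users)}

# through the shared proxy, every request arrives from the same IP
proxied = {user: PROXY_IP for user in users}

print(len(set(direct.values())))   # 3 distinct IPs -> each user linkable
print(len(set(proxied.values())))  # 1 shared IP -> anonymity set of 3
```

the more people behind the one proxy, the bigger the crowd each individual blends into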


yeah, bad choice of words on my part… and i think the verification doesn’t have to be identity-based… it just has to be some limited resource (which identity is, and it guarantees fairness because it’s n per identity)
it’s all compromises, and i don’t think there’s a perfect solution… what we want is the largest impact on general privacy the world over, plus options that allow verifiable, near-perfect privacy when needed - but understanding that that requires compromise in things like usability, simply because it’s more complex to set up things like trust networks than to … just not
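the “n per identity” idea is basically just a quota keyed on a scarce resource - a minimal sketch (the QUOTA value and identity names are made up):

```python
# Minimal sketch of "n per identity": any scarce, verified resource
# (an identity here) gets a fixed quota, which is what makes abuse
# expensive for spammers while staying fair for everyone else.
from collections import defaultdict

QUOTA = 3  # made-up limit: n actions per identity

counts: defaultdict[str, int] = defaultdict(int)

def allowed(identity: str) -> bool:
    """Permit an action if this identity still has quota left."""
    if counts[identity] >= QUOTA:
        return False
    counts[identity] += 1
    return True

results = [allowed("verified-identity-1") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

the identity itself doesn’t matter to the limiter - only that it’s expensive to get more of them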


they do, but that information is disconnected from your messages by sealed sender: that’s the point… your sender identity is cryptographically shielded from the signal servers
they know who you are, but they have no ability to connect that identity with who you message (which you can verify using only your client)
*edit: i will say, because i’m interested in conversation and understanding, not just winning an internet argument, that my conversation with yogthos here has underscored a place where i think this could still be improved: your IP address across the entire sealed sender process can be used to tie things together if it remains unchanged (though you can change your IP address between receiving your sender token and sending messages)
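to make that concrete, here’s a toy sketch in python of what the server sees under sealed sender (this is nothing like the real protocol - real sealed sender uses sender certificates and actual encryption; the XOR “cipher” and all IDs here are made up for illustration):

```python
import base64
import json

# toy stand-in for encryption: in the real protocol this is the sealed
# sender envelope, encrypted so only the recipient's device can open it
def toy_encrypt(payload: dict, key: int) -> str:
    raw = json.dumps(payload).encode()
    return base64.b64encode(bytes(b ^ key for b in raw)).decode()

def toy_decrypt(blob: str, key: int) -> dict:
    raw = bytes(b ^ key for b in base64.b64decode(blob))
    return json.loads(raw)

SHARED_KEY = 42  # placeholder for the real per-conversation keys

# the sender identity lives INSIDE the ciphertext...
envelope = {
    "recipient_id": "recipient-123",
    "ciphertext": toy_encrypt(
        {"sender_id": "sender-456", "message": "hi"}, SHARED_KEY
    ),
}

# ...so the fields visible to the server contain no sender information
print(sorted(envelope.keys()))  # ['ciphertext', 'recipient_id']

# only the recipient's client can recover who sent it
inner = toy_decrypt(envelope["ciphertext"], SHARED_KEY)
print(inner["sender_id"])  # sender-456
```

the point being: you can inspect the outgoing payload on your own client and confirm the sender field only ever exists inside the ciphertext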


that comes down to a difference in philosophy, i think… signal have detailed their reasons for not making their servers decentralised and self-hostable, and i don’t disagree with some of them… i think everything is a trade-off, and decentralisation has scaling and usability issues
signal has done a pretty good job of creating a platform that’s much much better than alternatives in a package that’s consumable by the general public
i’m not sure that something that’s more like matrix, or xmpp, etc could do that
it might be theoretically and technically not quite as perfect, but its impact on increased privacy across the globe has been far larger because they’ve made some of those compromises


replied to your other msg, so i won’t duplicate it here, and we can continue there if you’d like


the whole point is you don’t need to trust them… you can never trust any server: your client is the only thing you can trust. you can verify from your message payloads that your sender information is never sent to the signal servers along with your messages


that’s reasonable. perhaps the best service is one with both options: you can somehow have a verified account that lets you msg people you haven’t connected with (perhaps they have an “allow from verified” contact option), and join groups without verification, but that you can also have unlimited anonymous accounts that are assumed spammy


If I run a server, I have access to the raw requests coming in. I can do whatever I want with them even outside Signal protocol. You can’t verify that my server is set up to work the way I say it is. You get that right?
i do, of course… and the information you have in that raw request is limited to the information that’s in the request (including metadata like IP address and other header information in the packets that make it up)
You’re confusing what Signal team says their server does, and the open source server implementation they released with what’s actually running. The latter, you have no idea about.
i’m really not… i’m saying it doesn’t matter what their server is doing, because the client is the only thing in the chain you can actually verify: you should always assume the server is compromised, maliciously or not
trusting signal doesn’t even have anything to do with it; they could be compromised and not know it
these are things we both agree on
the server still receives the raw network request and all the metadata attached to it. Your client has to talk to the server and identify itself before any messages are even sent.
i agree with that too. what information contained within that request do you take issue with?
as i said earlier, your IP address is problematic, but that can be said about any service: you have no way to validate any server software, open source or not… so you have to take measures to protect that information no matter the service you’re connecting to
this is pretty trivially achieved with a trustworthy VPN these days (again, this is unverifiable, but you have to draw the line somewhere: can we agree that IP address privacy is within the realm of personal responsibility since that applies to any service?)
When your device connects to send that sealed message, it inevitably reveals your IP address and connection timing to the server.
agree
The server also knows your IP address from when you initially registered your phone number or when you requested those temporary rate limiting tokens.
also agree
By logging the raw incoming requests at the network level, a malicious server can easily correlate the IP address sending the sealed message with the IP address tied to the phone number.
okay, i can see where your problem is
i can agree that’s definitely a vector they can use to build a social graph, and then tie that social graph back to real identities, and also that’s far from what you want in a private platform
i’d say that it comes down to trade-offs… signal (says) they require the phone number in order to combat spam, which i can see as a real issue (i’d be happier if they didn’t store the phone number, or at least didn’t link it to your account, but that comes with a whole load of other issues)
services need to have some way of combatting spam, which either boils down to “expensive accounts” so that blocking is a viable option, or spam filters which can be abused by corporate entities like they have with email
if you really care about privacy with signal, you can get a VPN that allows you to frequently rotate your IP… most users won’t do that, so i can agree it’s a sub-optimal solution
but i do think it’s a reasonable trade-off
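for anyone following along, the correlation attack being described is dead simple - a toy sketch in python with made-up log entries, phone numbers, and addresses:

```python
# Toy illustration of the metadata attack described above: a malicious
# server logs raw requests at the network level, then joins its two
# logs on IP address. Everything here is fabricated for illustration.

# log from registration: phone number <-> IP (no message content)
registration_log = [
    {"ip": "203.0.113.5", "phone": "+15551230001"},
    {"ip": "203.0.113.9", "phone": "+15551230002"},
]

# log from sealed-sender deliveries: no sender ID, but the IP is visible
delivery_log = [
    {"ip": "203.0.113.5", "recipient": "recipient-123"},
]

# correlate: if the IP is unchanged between registering and sending,
# the "sealed" sender is trivially re-identified
ip_to_phone = {entry["ip"]: entry["phone"] for entry in registration_log}
inferred = [
    {"sender_phone": ip_to_phone[d["ip"]], "recipient": d["recipient"]}
    for d in delivery_log
    if d["ip"] in ip_to_phone
]
print(inferred)  # [{'sender_phone': '+15551230001', 'recipient': 'recipient-123'}]
```

which is exactly why rotating your IP (or sharing one via a proxy/VPN) between those two steps breaks the join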


what matters is that the server has a unique id for you and the person you’re talking to, and that id can then be mapped to the phone number that was initially collected. That’s all the server needs to identify the real identity of the people you communicate with
the key point missing in the middle here though is that the IDs aren’t what matters: it’s having the ability to link those IDs
and i agree, being able to link you->your phone number->your user ID->message->recipient ID->recipient phone number->recipient as an individual is a problem
but again, sealed sender breaks that chain: there is cryptographically no way to link your user ID to the message you’ve sent
it’s literally impossible for them to build a social graph from your messages
It’s not a question of what the server needs minimally
again, i agree… kinda… i put that there to show that you don’t actually need the sender to achieve the goal of delivering the message. it was part of the explanation of why sealed sender works, rather than a point to be made by itself
what the server could be doing if it was set up maliciously. The sealed sender does not solve this problem in any way shape or form
would you be able to explain how it doesn’t?
sealed sender divorces your user ID from any message you send… your messages cannot be tied back to your user ID or phone number without decrypting the message content
so because you can verify your client’s behaviour (that it derives keys, and that the message payload contains nothing unexpected outside the ciphertext), even a malicious server (without IP information) doesn’t have the information necessary to infer the sender from the sent message


be that as it may, sending signals is still good. you don’t have to continue for very long, but a flood of support after a moral decision will make it more likely that they, and others, will make similar decisions in the future
the worst thing would be for google for example to see the fallout from this and think “well we don’t want to be them! better start building autonomous weapons”


i’d agree that for privacy alone simplex is probably better, but until it actually scales i’m not sure we can say that it will. i have my doubts, simply because with unlimited anonymous profiles, once it becomes a high-value target spam becomes a real problem, and there are only 2 major solutions i can think of: expensive accounts (so blocking is viable), or spam filters (which can be abused)
ENS runs on Ethereum, which no longer uses proof-of-work and thus doesn’t waste power in the same way as eg bitcoin
also, I2P and onion services don’t have name systems per se, so that doesn’t actually address the problem… AA could just use an IP address or something, but they require (for human usability) well-known domain names
I2P and onion still require systems for discovery. they solve different problems to name systems