ANOM wasn’t, until it was, and then it shut down. I recommend the Darknet Diaries episode to hear the story.
We secure your account against SIM swaps…with modern cryptography protocols.
This just doesn’t make ANY sense. SIM swaps are done via social engineering.
See this for details. Their tech support people do not have the access necessary to move a line, so there’s nobody to social engineer. Only the customer can start the process to move a line, after cryptographic authentication using BIP-39.
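To make the idea concrete, here’s a minimal sketch of what BIP-39-based authentication for a line move could look like. This is not Cape’s published protocol; the enrollment and challenge-response flow and the key-derivation choice are assumptions for illustration (Python, using the `mnemonic` and `cryptography` packages).

```python
# Illustrative sketch only -- not Cape's actual protocol. Assumes the
# `mnemonic` and `cryptography` packages; the enrollment and
# challenge-response flow below is a hypothetical example of BIP-39 auth.
from mnemonic import Mnemonic
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

mnemo = Mnemonic("english")

# Enrollment: the customer generates a 12-word BIP-39 phrase and keeps it offline.
phrase = mnemo.generate(strength=128)

# The phrase deterministically yields a seed (the BIP-39 PBKDF2 step)...
seed = mnemo.to_seed(phrase, passphrase="")

# ...from which a signing key can be derived. The carrier stores only the
# public key, never the phrase or the private key.
signing_key = Ed25519PrivateKey.from_private_bytes(seed[:32])
public_key_held_by_carrier = signing_key.public_key()

# Later, a line move is accepted only if the customer signs the carrier's
# fresh random challenge, proving possession of the original phrase.
challenge = b"line-move-request:nonce-1234"
signature = signing_key.sign(challenge)
public_key_held_by_carrier.verify(signature, challenge)  # raises if invalid
```

The point of a design like this is that the carrier only ever holds a public key, so there is no secret a support rep could be talked into resetting.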
proprietary signaling protection
If they wanted to be private, it would be open source.
I’m really tired of this trope in the privacy community. Open source does not mean private. Nobody is capable of reviewing the massive amount of code used by a modern system as complex as a phone operating system and cellular network. There’s no way to audit the network to know that it’s all running the reviewed open source code either.
Voicemails can hold sensitive information like 2FA codes.
Since when do people send 2FA codes via voicemail? The fuck? Just use Signal.
There are many 2FA systems that offer to call your number and read the code aloud. If you don’t answer, that code can end up in your voicemail.
Where I share your reaction to Cape is on identifying customers. This page goes into detail about those aspects, and Cape does a lot of things that are indeed better than any other carrier out there.
But it’s still a long way short of being private. They’re a “heavy MVNO”, which means their customers’ phones still use other carriers’ cell towers, and those carriers can still collect and log IMSI and device location information. Privacy researchers have demonstrated that it is quite easy to deanonymize someone from very little location data.
On top of that, every call or text terminates on someone else’s device. If it traverses another carrier’s core network, most of the call metadata is still collected, logged, and sold.
If we accept all of Cape’s claims, it’s significantly better than any other carrier I’m aware of, but it’s still far from what most people in this community would consider private.
Part of that is the responsibility of the app developer, since they define the payload that appears in the APNs push message. It’s possible for them to design it such that the push message really just says “time to ping your app server because something changed”. That minimizes the amount of data exposed to Apple, and therefore to law enforcement.
For instance, the MDM protocol uses APNs: it tells the device that it’s time to reach out to the MDM server for new commands, but the body of the message does not contain the commands.
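For illustration, here is roughly the difference between a content-rich push and a minimal “go fetch” push. Only the keys inside `aps` are defined by Apple; the custom field in the first payload is a made-up example of app-defined data.

```python
# Two hypothetical APNs payloads an app developer might choose between.
# Only the "aps" keys are Apple-defined; everything else is up to the app.
import json

# Content-rich push: the message content itself transits Apple's servers
# (and is therefore visible to Apple, and potentially to law enforcement).
verbose_payload = {
    "aps": {"alert": {"title": "New message", "body": "Meet at 6pm?"}},
    "sender": "alice",  # hypothetical app-defined field
}

# Minimal "something changed" push: the device wakes in the background and
# fetches the real content directly from the app's own server.
minimal_payload = {
    "aps": {"content-available": 1}
}

print(json.dumps(minimal_payload))
```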
That still necessarily reveals some metadata, like the fact that a message was sent to a device at a particular time. Often metadata is all that law enforcement wants for fishing expeditions. I think we should be pushing back on law enforcement’s use of broad requests (warrants?) for server data. We can and should minimize the data that servers have, but there are limits. If servers can hold nothing, then we no longer have a functional Internet. Law enforcement shouldn’t feel entitled to all server data.
Side note: Any decent kid tracker thingies that respect privacy?
Apple Watch works well as a kid tracker if they’re old enough to wear it safely, and I think the privacy aspects are very good. It uses the Find My network, and Apple can’t see the location. There’s a bunch of specifics here. Apple Watch used to require an iPhone, but Apple now lets you add a kid’s watch to the family so it pairs with a parent’s iPhone instead.
This article is 10 years old.
This is too techno-utopian. There’s also a place for governments. Comprehensive privacy legislation would also change the world for the better. Ignoring that is exactly what the largest invaders of privacy want.
On iPhones and iPads there are several technologies available for monitoring and filtering network traffic. “Filter network traffic” in the Apple Deployment Guide has an overview of the technologies and their trade-offs.
Also delete your expired certificate if you have one (for example after a year)
This is likely a mistake. Keep the old cert around.
There are two possibilities:
The first possibility is that Actalis uses the same key pair for the new cert. This is not a great approach because it doesn’t defend against a leaked key or key overuse. After all, if the key can be trusted longer than a year, the first cert they issued should be valid for longer.
The second, and much worse, possibility is that renewing the cert generates a different private key. This can cause data loss: deleting the old identity means you lose the ability to decrypt any messages that were encrypted using that key! Even if your mail client stores previously encrypted emails in decrypted form, you may receive a new email from a sender who does not yet have your new cert.
Actalis sends you your private key. This means they have access to your private key, and theoretically could use it to sign and decrypt your emails. A more secure but somewhat more complex system would use a certificate signing request (CSR) instead. In that case, you are the only person who ever has your private key, so only you can sign or decrypt your email.
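As a sketch of the CSR route (Python `cryptography` package; the email address is a placeholder), the private key is generated locally and only the signing request is sent to the CA:

```python
# Sketch of the CSR approach: the private key is generated locally and never
# leaves your machine; only the CSR goes to the CA (e.g., Actalis) for signing.
# The email address below is a placeholder.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the key pair locally.
key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Build a CSR containing the public key and your email address.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.EMAIL_ADDRESS, "you@example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# Submit the CSR to the CA; keep the private key (and every old key!) so
# mail encrypted to those keys stays readable.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
key_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
```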
Stingray phone trackers and similar IMSI catchers are a kind of honeypot.