LEOs using what amount to phishing attacks to grab folks looking for CSAM has a long and storied history behind it.
They remember what happened when they migrated Hotmail to Microsoft Exchange.
That’s pretty well answered here: http://vger.kernel.org/lkml/#s15-3
Ew.
for compliance we’d have to get everything re-vetted yearly
Huge pain in the ass to set up, but from the user’s end of things it was pretty easy to do.
Some years ago, I had a client with a really fucked up set of requirements:
This was during the days when booting into a LUKS encrypted Gentoo install involved copy-and-pasting a shell script out of the Gentoo wiki and adding it to the initrd. I want to say late 2006 or early 2007.
I remember creating a /boot partition, a tiny little LUKS partition (512 megs, at most) after it, and the rest of the drive was the LUKS encrypted root partition. The encrypted root partition had a randomly generated keyfile as its unlocker; it was symmetrically encrypted using gnupg and a passphrase before being stored in the tiny partition. The tiny partition had a passphrase to unlock it. gnupg was in the initrd. I think the workflow went something like this:
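The description above suggests an unlock sequence roughly like the following. This is a sketch only; the device names, mount points, and file names are my assumptions, not details from the original setup:

```shell
# Hypothetical initrd unlock sequence; /dev/sda2 (tiny key partition),
# /dev/sda3 (encrypted root), and the file names are assumptions.
cryptsetup luksOpen /dev/sda2 keystore           # passphrase #1: the tiny partition
mount -o ro /dev/mapper/keystore /mnt/keystore
gpg --decrypt /mnt/keystore/root.key.gpg > /tmp/root.key   # passphrase #2: the keyfile
cryptsetup luksOpen --key-file /tmp/root.key /dev/sda3 cryptroot
shred -u /tmp/root.key                           # don't leave the plaintext key lying around
umount /mnt/keystore
cryptsetup luksClose keystore
mount /dev/mapper/cryptroot /newroot             # continue booting from here
```

Two passphrases, two LUKS volumes, one GnuPG layer in between, all stitched together by hand inside the initrd.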
I don’t miss those days.
Syncthing could do it.
So, it’ll cost them an hour’s worth of revenue in fines.
It would probably be more reliable to partition and format the new drive manually and use rsync to copy everything over. Updating /etc/fstab with the new UUIDs isn’t a big deal, though you can also specify the partition UUIDs at format time with mkfs.btrfs --uuid ... (You didn’t say what file system your /boot partition was using, so I don’t want to guess.)
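Something along these lines; a sketch only, assuming btrfs on the new root and made-up device names (/dev/sda2 old, /dev/sdb2 new):

```shell
# Manual migration sketch; device names and mount points are assumptions.
mkfs.btrfs --uuid "$(blkid -s UUID -o value /dev/sda2)" /dev/sdb2  # reuse the old UUID
mount /dev/sdb2 /mnt/newroot
rsync -aHAXx --info=progress2 / /mnt/newroot/   # -x: don't cross file system boundaries
# If you let mkfs generate a fresh UUID instead, fix up fstab on the copy:
blkid /dev/sdb2
${EDITOR:-vi} /mnt/newroot/etc/fstab
```

Reusing the old UUID means /etc/fstab and the bootloader config keep working unmodified, which is the lower-effort path.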
It really depends on the company. When I was working for that company a few jobs back, we crunched the numbers, and the cost of C&A and IV&V (Certification and Accreditation; Independent Verification and Validation) for an in-house TOTP had one more zero to the left of the decimal point than the Twilio bill (added up for the year). Plus, for compliance we’d have to get everything re-vetted yearly.
That’s kind of the definition of government contracting. :) I think the only US government org that has actual govvies doing anything other than management is NASA.
I was starting college (comp.sci, natch) and a hard req for the program was “Your own personal computer, with an Ethernet card and an OS that had a TCP/IP stack for remotely accessing classwork.” I didn’t have a great deal of money (most of it was tied up in tuition and housing) and Ethernet cards were expensive (I think I paid $140 US at the time). I couldn’t afford Windows and didn’t have a warez hookup for '95. A BBS I used to call had Slackware disk images for download.
The rest, as they say, is history.
In case anybody’s curious about what those are:
The biggest reason they use phone calls or SMS, however, is that they don’t want the hassle of getting an in-house MFA service (a TOTP backend, in other words) approved, pen tested, analyzed, and verified. All things considered, it’s faster and easier to go with a service like Twilio that has already done that legwork. A couple of years back I worked for a company in just that position, and after all the legwork, research, and consultation with independent third-party specialists, running our own TOTP would have easily doubled the yearly cost because of all the compliance requirements.
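For the curious: the TOTP algorithm itself is tiny; the expense is in the compliance machinery around it, not the code. A minimal sketch in Python (RFC 4226 HOTP plus the RFC 6238 time step; the base32 secret here stands in for whatever your enrollment QR code carries):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP where the counter is the number of time steps since the epoch.
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period, digits)
```

That’s the whole thing; everything else a vetted provider sells you is key storage, rate limiting, audit trails, and the paperwork proving all of the above.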
That implies that they pass parameters in URLs… FFS.
You joke, but…
(No, I will never forgive the college I went to for undergrad for forcing us to take two semesters of COBOL. Why do you ask?)
Which raises the question: how often do people really change their passwords unless they’re forced to? This feels like the sort of thing that somebody should have studied.
Huh - they increased it!
Hence why they call folks who actually want to make government do stuff “rubes” back home.
Let’s see here…
Potato Chat - This is the first I’ve heard of it so I can’t speak to it one way or another. A cursory glance suggests that it’s had no security reviews.
Enigma - Same. The privacy policy talks about cloud storage, so there’s that. The following is also in their privacy policy:
So, plaintext abounds. Definite OPSEC problem.
nandbox - No idea, but the service offers a webapp client as a first-class citizen to users. This makes me wonder about their security profile.
Telegram - Lol. And I really wish they hadn’t mentioned that hidden API…
Tor - No reason to re-litigate this argument, which has happened once a year, every year, since the very beginning. Suffice it to say that it has a threat model that defines what it can and cannot defend against, and attacks that deanonymize users are well known, well documented, and used by law enforcement.
mega.nz - I don’t use it, I haven’t looked into it, so I’m not going to run my mouth (fingers? keyboard?) about it.
Web-based generative AI tools/chatbots - Depending on which ones, there might be checks and traps for stuff like this that could have twigged him.
This bit is doing a lot of heavy lifting in the article: “…created his own public Telegram group to store his CSAM.”
Stop and think about that for a second.