I’ve gotten a new phone and have been setting it up for the past few days - a Fairphone 5 with Android installed. So obviously, this means I can’t escape Google’s clutches. Sure, whatever.
I have been VERY adamant about pressing “No” on every prompt that tries to get me to try something out or use some dumb service. I do not want any AI tool or similar to go through my files.
Yet, while perusing the depths of my system settings, I realized Google Photos was using a suspicious amount of storage. Somehow, it had “synchronized” ALL my locally saved pictures - this included pictures of my vacations, my driver’s license, private pictures I would rather not have shared, and so on…
And when I checked the Google Photos app to assess the damage, it had obviously already automatically generated “previews” and “albums” for me, neatly organized.
IT HAD AUTOMATICALLY ANALYSED MY DRIVER’S LICENSE AND SAVED IT INTO AN ALBUM CALLED “Identity-related”
How the fuck is this legal? I am so mad at myself right now. I’m usually so fuckin cautious about denying any sort of pop-up and setting all settings as strictly as possible.
So obviously I just had to spend 2 hours figuring out how to turn this “synchronization” off and how to delete all photos in Google Photos - spoiler alert: there is no “Delete All” button. You have to manually select every single fucking image.
Sorry for the rant, I hope it’s not too off-topic. I’m just so mad right now.
Yes, this happens. Even if you turn off all the syncing etc., they’ll push an update and all your settings will revert to the defaults. This has happened with my father’s phone a lot.
And even if you keep all these settings off, they are still scanning all photos to check for CSAM.
I highly recommend deGoogling your phone. If you cannot install a custom ROM, check out Universal Android Debloater. There are many resources for degoogling your life. Check out c/degoogle on Lemmy (I forgot the instance name, just search for it). Or if you want, we have a small group on Signal for deGoogling-related talks; DM me and I can share the link to join. (Signal does require a phone number to register, but since usernames are a thing, your phone number will not be visible publicly.)
could you DM me this signal group link, please?
I appreciate your comment; replying here for reference, since I might like to join that Signal group. Being staunchly anti-Google, I feel I’m on top of things, but my Gmail is used across many of my logins. On the CSAM-scanning issue, many people don’t realize that Google installs a hidden app called SafetyCore; for me it gets reinstalled on every update.
https://allthings.how/what-is-android-system-safetycore-and-why-did-it-appear-on-your-phone-2/
They’re not even checking for CSAM.
That would be near impossible considering the tech. Even in a normal portrait it’s hard to judge age, let alone photos with more complex perspectives and only some body parts visible.
What they are actually doing is matching hashes of specific real pictures that the police know are commonly shared.
Theoretically it could catch some careless content-consuming offenders. The worst offenders, those who produce new material, are beyond its scope.
But also, obviously what Google gets is just the hashes and not the actual pics. If the police gave Google a hash targeting pics of Vance’s bald head or (trans-positive) memes, who would know?
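To make the point concrete, here’s a rough sketch of the matching principle described above. This is purely illustrative: the hash database and image bytes are made up, and real systems use fuzzy perceptual hashes (e.g. PhotoDNA-style) that survive re-encoding, not plain cryptographic hashes like the SHA-256 used here.

```python
import hashlib

# Hypothetical database of hashes of known flagged images.
# In a real deployment, the scanner only ever holds these opaque
# hash values - it never needs the original pictures themselves.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def is_flagged(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-hash database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# A byte-identical copy matches:
print(is_flagged(b"known-bad-image-bytes"))   # True
# But changing even one byte defeats a cryptographic hash entirely,
# which is why real systems use perceptual hashes instead:
print(is_flagged(b"known-bad-image-byteX"))   # False
```

Note that whoever controls `KNOWN_HASHES` decides what gets flagged, and the scanner operator can’t tell from the hash alone what the target image depicts - which is exactly the concern raised above.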
There was a news story some time ago about a man who was arrested for taking nude pictures of children; it later came out that he had been sending pictures of his own child to a doctor for a diagnosis. How did that happen?
I’ll link the source if I find it.
Update:
NYTimes - https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
Paywall removed - https://removepaywalls.com/https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
That must be some other system indeed.
They don’t really provide much information about how the images were actually flagged, though.
Maybe there is a machine-learning algorithm trained to detect specific features in an arbitrary photo, but I can’t imagine it being accurate without frequent false positives.
Could it be that once you have a certain number of “plausible” hits, a Google employee has to review them manually, and they quickly judged it wrongly?
Though that technically implies your private medical picture is now seen, and possibly covertly copied, by a (rogue) employee.
I’ve heard about this too.