The European parliament has blocked the extension of a law that permits big tech firms to scan for child sexual exploitation on their platforms, creating a legal gap that child safety experts say will lead to crimes going undetected.
The regulatory gap has created uncertainty for big tech companies: while scanning their platforms for such material is now illegal, they remain liable under a different law, the Digital Services Act, for removing any illegal content they host. In a joint statement posted on a Google blog, Google, Meta, Snap and Microsoft said they would continue to voluntarily scan their platforms for child sexual abuse material (CSAM).
Privacy advocates argue that big tech scanning messages for child abuse threatens fundamental privacy rights and data security for EU citizens, equating these measures to “chat control” that could lead to mass surveillance and false positives.
“There are claims of surveillance or infringement of privacy,” Swirsky said. “Blocking CSAM is not an invasion of privacy. Free speech does not include sexual abuse of children.”


I think this is a difficult dilemma. My immediate instinct is that blocking illegal material is obviously an invasion of privacy. It is impossible to block one type of message without first reading all messages and classifying them.
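To make that concrete, here is a minimal sketch of how server-side blocking of known material works in principle, assuming the platform matches uploads against a database of fingerprints of already-identified images. All names in it are mine, and the plain SHA-256 hash stands in for the perceptual hashes (such as Microsoft's PhotoDNA) that real deployments use. The point it illustrates is that deciding to block anything requires hashing, and therefore reading, everything.

```python
import hashlib

# Hypothetical fingerprint database of already-identified illegal images.
# Real systems use perceptual hashes (e.g. PhotoDNA) so that re-encoded or
# slightly altered copies still match; SHA-256 is a simplified stand-in.
KNOWN_BAD_FINGERPRINTS: set[str] = {
    "placeholder-fingerprint-1",
    "placeholder-fingerprint-2",
}

def should_block(payload: bytes) -> bool:
    """Return True if the uploaded payload matches a known fingerprint.

    The server has to read and hash *every* payload to make this decision,
    even though only the rare match is ever acted on.
    """
    fingerprint = hashlib.sha256(payload).hexdigest()
    return fingerprint in KNOWN_BAD_FINGERPRINTS

def handle_upload(payload: bytes) -> str:
    if should_block(payload):
        return "blocked"      # flagged for review or reporting
    return "delivered"        # the overwhelming majority of traffic
```

Two things fall out of this sketch: exact hashes are trivially defeated by re-encoding, which is why production systems use fuzzy perceptual hashing instead; and if the sender encrypts the payload before upload, the fingerprint is computed over ciphertext and will never match anything, which is exactly the loophole I come back to below.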
But on the other hand, we’re talking about other people’s servers here. They shouldn’t have to host illegal material. In fact, it is illegal for them to do so. So it is their right to know what they’re hosting and clean it out.
Should we really have any expectation of privacy on big tech platforms? If you’re sending obviously illegal material as plain, unencrypted bytes to a Microsoft server, what do you think is going to happen?
At the same time, can’t bad actors just encrypt whatever they want to share, so that it can’t be scanned?
In the end, regular users get the short end of the stick.