The European Union is considering controversial proposals to mass-scan private communications on encrypted messaging apps for child sex abuse material.
Under the proposed legislation, photos, videos, and URLs sent on popular apps such as WhatsApp and Signal would be scanned by an artificial intelligence-powered algorithm against a government database of known abuse material.
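In broad terms, such matching works by computing a compact fingerprint (a perceptual hash) of each image and comparing it against a list of fingerprints of known material, so that near copies match even after resizing or recompression. The Python sketch below illustrates that matching step only; the hash values and threshold are hypothetical, and real deployments rely on proprietary perceptual-hashing tools such as Microsoft’s PhotoDNA rather than the open-source imagehash library used here.

```python
# Purely illustrative sketch of hash-based matching against a database of
# known material. The hash list and threshold are hypothetical; this is
# the general technique, not the design of any proposed system.
import imagehash
from PIL import Image

# Hypothetical list of perceptual hashes of known material that an
# authority would distribute to devices.
KNOWN_HASHES = [imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")]
MATCH_THRESHOLD = 8  # maximum Hamming distance still counted as a match

def is_flagged(image_path: str) -> bool:
    """Return True if the image's fingerprint is close to a known hash."""
    fingerprint = imagehash.phash(Image.open(image_path))
    return any(fingerprint - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)
```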
The Council of the EU, one of the bloc’s two legislative bodies, is due to vote on the legislation, popularly known as Chat Control 2.0, on Thursday.
If passed by the council, which represents the governments of the bloc’s 27 member states, the proposals will move forward to the next legislative phase and negotiations on the exact terms of the law.
While EU officials have argued that Chat Control 2.0 will help prevent child sex exploitation, encrypted messaging platforms and privacy advocates have fiercely opposed the proposals, likening them to the mass surveillance of George Orwell’s 1984.
Why are the EU’s plans so controversial?
Critics argue that Chat Control 2.0 is incompatible with end-to-end encryption, which ensures that messages can be read only by the sender and the intended recipient.
While the proposed “upload moderation” regime would scan messages before they are sent, critics have slammed the measures as a “backdoor” by another name that would leave everyone’s communications vulnerable to potential hacking or interference by third parties.
“We can call it a backdoor, a front door, or ‘upload moderation.’ But whatever we call it, each one of these approaches creates a vulnerability that can be exploited by hackers and hostile nation states, removing the protection of unbreakable math and putting in its place a high-value vulnerability,” Meredith Whittaker, the president of Signal, said this week in a statement.
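The crux of that objection is the order of operations: under “upload moderation”, content is inspected in plaintext on the user’s device before the encryption step, so the encryption itself is never technically broken. The sketch below illustrates that flow in simplified form; every function name and body is a stand-in stub, not any real app’s code.

```python
# Purely illustrative sketch of the order of operations under "upload
# moderation". All functions are stand-in stubs; the point is where the
# scan sits relative to encryption.

def scan_against_database(plaintext: bytes) -> bool:
    """Hypothetical client-side scanner; matching logic omitted."""
    return False

def e2e_encrypt(plaintext: bytes, recipient_public_key: bytes) -> bytes:
    """Stand-in for a real end-to-end encryption step (e.g. the Signal protocol)."""
    return plaintext  # placeholder only; performs no real cryptography

def send_attachment(plaintext: bytes, recipient_public_key: bytes) -> None:
    # The scan reads the plaintext BEFORE encryption, so whoever controls
    # or compromises this step sees content that end-to-end encryption
    # was meant to keep between sender and recipient.
    if scan_against_database(plaintext):
        return  # flagged content would be reported rather than sent
    ciphertext = e2e_encrypt(plaintext, recipient_public_key)
    # ...the ciphertext, not the plaintext, is what the server relays...
```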
Opponents also say the proposals would hand enormous power to private companies, many of them based in the United States, to engage in the mass surveillance of European citizens.
Once a backdoor exists, it could be used to scan for more than just child sex abuse material, according to Matthew Green, an expert on applied cryptography at Johns Hopkins University.
“People think Chat Control is about specific crimes. No, that’s not what’s at stake. What’s being made is an architecture decision for how private messaging systems work: if it passes, by law these systems will be wired for mass surveillance. This can be used for any purpose,” Green said in a post on X.
Member of European Parliament Patrick Breyer, from the Pirate Party Germany, has likened the proposals to adding government spyware to every device in the EU.
“We’re on the brink of a surveillance regime as extreme as we witness nowhere else in the free world. Not even Russia and China have managed to implement bugs in our pocket the way the EU is intending to,” Breyer said in a statement.
Who supports the law?
Proposals to scan private communications en masse for child sex abuse material were first introduced in 2022 by Ylva Johansson, the EU’s Swedish commissioner for home affairs.
Belgium, which currently holds the council’s rotating presidency, proposed the latest version of the legislation as a compromise after more invasive proposals drew pushback from the European Parliament.
Under the latest iteration, scans would be limited to photos, videos, and URLs, and users would have to consent to the scanning.
Anyone who declined would be blocked from uploading or sharing photos and videos.
Supporters say the proposals are necessary to fight the scourge of child exploitation, which officials say is being facilitated by encrypted platforms and the emergence of AI-powered image generation software.
In 2022, the US National Center for Missing & Exploited Children said 68 percent of the record 32 million cases of child exploitation material reported by service providers were from “chats, messaging, or email services” within the EU.
The United Kingdom-based Internet Watch Foundation reported similar findings, identifying the EU as the source of two-thirds of abuse material.
Law enforcement and intelligence agencies have frequently expressed concern about criminals using encrypted messaging apps to avoid detection.
Telegram and Signal have both been used by armed groups ranging from ISIL (ISIS) to the Oath Keepers.
Intelligence agencies, militaries, police, and some EU ministries would be exempt from the measures, according to leaked documents obtained by French media organisation Contexte.
Who opposes the law?
Among EU member states, only Germany, Luxembourg, the Netherlands, Austria, and Poland have taken a clear stance against the proposals, according to Breyer, while Italy, Finland, Sweden, Greece, and Portugal, among others, have yet to make their positions clear.
Individual MEPs from countries including Germany, Luxembourg, the Netherlands, and Austria have also expressed concerns, some arguing that surveillance should be directed only at specific individuals, based on probable cause as determined by a judge.
In November, the European Parliament, which must approve most EU laws, voted to oppose “indiscriminate chat control” in favour of targeted surveillance.
Tech companies and digital rights groups opposed to the proposals include Mozilla, Signal, Proton, the Electronic Frontier Foundation, European Digital Rights, the Internet Freedom Foundation, and the Irish Council for Civil Liberties.
US National Security Agency (NSA) whistleblower Edward Snowden on Wednesday described the proposals as a “terrifying mass surveillance measure”.
How would Chat Control 2.0 work in practice?
Even if Chat Control 2.0 moves forward, experts say the current version of the law supported by Belgium would be very difficult, if not impossible, to enforce with end-to-end encryption.
In the UK, which passed the similarly themed Online Safety Act, the government has conceded that the technology to scan encrypted messages without broadly compromising security does not yet exist.
Tech platforms such as Signal and WhatsApp, which had threatened to pull out of the UK, considered the admission a partial victory.
Critics also say targeting messaging apps will be ineffective at stopping child abuse material given the existence of private networks and the dark web.
AI-powered detection algorithms have also proven prone to mistakes, raising the prospect of innocent people being reported to law enforcement.
The New York Times reported in 2022 that Google’s AI tool for detecting child abuse material wrongly flagged a stay-at-home father in San Francisco as an abuser after he photographed his toddler’s groin at a doctor’s request to diagnose an infection.
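The scale of mass scanning compounds such errors. As a back-of-the-envelope illustration, using assumed figures rather than measured ones, even a scanner with a seemingly tiny error rate would wrongly flag enormous volumes of content when applied to billions of messages a day:

```python
# Back-of-the-envelope illustration of the base-rate problem with mass
# scanning. Both inputs are assumptions chosen for illustration only.
daily_images_scanned = 1_000_000_000  # assumed: 1 billion images scanned per day
false_positive_rate = 0.001           # assumed: 0.1 percent of scans wrongly flagged

false_flags_per_day = daily_images_scanned * false_positive_rate
print(f"Wrongly flagged images per day: {false_flags_per_day:,.0f}")
# Output: Wrongly flagged images per day: 1,000,000
```

Under these assumptions, a scanner that is 99.9 percent accurate still produces a million false flags every day, each one potentially an innocent person’s private photo forwarded for review.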