Instagram can now read all users’ private messages. Will this make kids safer or just boost ad targeting?

As of May 8, end-to-end encryption is no longer available for direct messages on Instagram.
Announcing the policy reversal, Meta said it removed the feature because few people used it. But the change has raised questions about its impact on user privacy and whether it will improve child safety on the platform.
Instagram has long been a focal point for discussion about online safety – whether in relation to body image concerns, cyberbullying or sexual extortion. This policy change by Meta directly affects how safety and moderation are implemented in private messages.
This is significant: research has found perpetrators first contacted roughly 23% of Australian sexual extortion victims on Instagram, making it the second most common platform for first contact, behind Snapchat (at 50%).
What is end-to-end encryption?
End-to-end encryption is a way of scrambling a message so only the sender’s and recipient’s devices can read it. The platform carrying the message, in this case Instagram, can’t access it.
This same technology is present by default on WhatsApp, Signal, iMessage, and (since late 2023) Facebook Messenger.
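To make the idea concrete, below is a minimal sketch of the principle using the open-source PyNaCl library. It is an illustration only, not the protocol any of these apps actually use, and the names are invented for the example. The key point is that the server relaying the message never holds a key that can decrypt it.

```python
# A minimal sketch of the end-to-end encryption idea using PyNaCl
# (pip install pynacl). Illustrative only; real messaging apps layer
# far more machinery on top of primitives like these.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
# Private keys never leave the device; only public keys are shared.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_secret, bob_secret.public_key).encrypt(b"See you at 7?")

# The platform relays `ciphertext` but holds no key that can open it.
# Only Bob, combining his private key with Alice's public key, can decrypt.
plaintext = Box(bob_secret, alice_secret.public_key).decrypt(ciphertext)
assert plaintext == b"See you at 7?"
```

Removing end-to-end encryption changes exactly one thing in this picture: the server gains a readable copy of the message.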
Meta’s CEO Mark Zuckerberg first promised to bring end-to-end encryption to all of Meta’s messaging products back in 2019, under the slogan “the future is private”.
Instagram tested encrypted direct messages in 2021. It rolled them out as an opt-in feature in 2023.
End-to-end encrypted direct messages never became the default, and this low uptake is Meta’s justification for removing the feature. As a spokesperson told The Guardian:
Very few people were opting in to end-to-end encrypted messaging in DMs, so we’re removing this option from Instagram.
There is a circular logic to this: Meta has killed off a feature it buried so deep that most users never knew it existed, then cited low usage as the reason for its removal.
What does this mean for Instagram users?
In practical terms, every message you send on Instagram now travels in a form Meta can read.
Meta’s privacy policy lists the content of messages users send and receive among the data it collects. In principle, the company can use this data to personalise features, train artificial intelligence (AI) models, and deliver targeted advertising.
While Meta has publicly committed not to train its AI models on private messages unless users actively share them with Meta AI, it has made no equivalent public commitment about advertising.
That leaves open the possibility that Meta could use unencrypted Instagram direct messages for ad targeting. And without encryption, Meta’s AI commitment is now backed by policy alone, not by the technology itself.
A clear reversal
This reads as a clear reversal of the privacy-first posture Zuckerberg announced seven years ago.
Meta has been under sustained pressure from law enforcement, regulators and child protection organisations who argue end-to-end encryption creates spaces where platforms can’t detect child sexual exploitation and grooming. Australia’s eSafety Commissioner has been clear that the deployment of end-to-end encryption “does not absolve services of responsibility for hosting or facilitating online abuse or the sharing of illegal content”.
This argument deserves to be taken seriously. The harms are real and disproportionately fall on young people.
However, sexual extortion research shows perpetrators don’t tend to stay on the platform where they make first contact: more than 50% of victims say perpetrators asked them to switch platforms.
Meta still uses end-to-end encryption on its other platforms, such as WhatsApp and Facebook Messenger. Because predators routinely move victims between platforms, the company’s approach to child safety needs to work consistently across Instagram and its end-to-end encrypted services.
A false choice
Meta and privacy advocates often frame this as a choice between end-to-end encryption and child safety. But that’s a false choice: it isn’t an either-or situation, even if both sides make it sound like one.
The technology already exists to detect harmful content while keeping messages encrypted in transit. It just has to run in the right place: on the user’s device, before the device encrypts and sends the message, or after it receives and decrypts it.
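As a rough illustration of where such a check would sit, here is a deliberately simplified sketch. Everything in it is hypothetical: classify_risk stands in for a real on-device machine-learning model, and the encryption step is a stub rather than a real cipher.

```python
# A conceptual sketch of on-device screening before encryption.
# All names and logic here are hypothetical placeholders.

def classify_risk(text: str) -> bool:
    """Stand-in for an on-device ML classifier (e.g. grooming detection)."""
    return "harmful" in text.lower()  # toy rule, not a real model

def encrypt(text: str) -> bytes:
    """Stub for the app's end-to-end encryption step."""
    return text.encode()  # a real app would encrypt here

def send_message(text: str) -> None:
    # Screening runs on the sender's device, against the plaintext,
    # before encryption: the platform's servers never see the content.
    if classify_risk(text):
        print("Warning shown to sender; message not sent.")
        return
    ciphertext = encrypt(text)
    print(f"Relayed to server (unreadable to it): {ciphertext!r}")

send_message("See you at 7?")
```

The same check can run on the recipient's device after decryption. Either way, detection happens where the plaintext legitimately exists, not on the server.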
On-device approaches have a contested history, and any deployment must be genuinely privacy-preserving by design. But technology companies must weigh those objections against the harms that continue to occur. A safety by design approach is needed.
On-device safety measures have been demonstrated at scale with Apple’s on-device nudity detection for images sent or received via Messages, AirDrop and FaceTime. A 2025 study demonstrated high-accuracy grooming detection using a Meta AI model designed specifically for on-device deployment on mobile phones.
Recently, both Apple and Google have begun moving towards app store–based age verification in some jurisdictions. The highest-profile real-world deployment so far is Apple enabling device-level, privacy-preserving age verification in the UK.
Social media and private messaging companies, along with operating system vendors (Microsoft, Apple, and Google), all have a role to play in ensuring harmful content is detected, whether or not end-to-end encryption is used. Progress has been slow. But we, as a community, need to demand more from these companies.
Joel Scanlan is the academic co-lead of the CSAM Deterrence Centre, which is a partnership between the University of Tasmania and Jesuit Social Services, who operate Stop It Now (Australia), a therapeutic service providing support to people who are concerned with their own, or someone else's, feelings towards children. He has received funding from the Australian Research Council, Australian Institute of Criminology, the eSafety Commissioner, Lucy Faithfull Foundation and the Internet Watch Foundation.