LORRAINE FINLAY: Social media ban fails to address the real data breach issue that is waiting to happen

Safety bans sound straightforward: if something is causing harm, we can simply prohibit it and stop people from accessing what is hurting them.
That’s the logic behind the social media ban. We all want children to be safe online, so we restrict their access to social media platforms.
But while this seems simple, the reality is far more complex. The ban doesn’t only impact under-16s. It requires big tech companies to subject all of us to increased surveillance and data collection.
To comply with the law, social media platforms are going to have to take “reasonable steps” to figure out who’s under 16. That means some level of age assurance will have to be applied across all social media accounts.
Some of us will be asked to upload ID or submit a selfie to allow a facial scan. Others will be verified by way of “age signals” which infer your age from things like when you log in, your location, who you follow and what you post.
But regardless of what method applies to you, your account is necessarily going to be assessed.
For decades, online spaces have allowed us a degree of anonymity. This ban flips that on its head.
It normalises broad-based age checks — whether through ID verification or invisible profiling — and creates vast new datasets about how we live and interact, all just to prove we’re old enough to be on social media.
Yes, tech companies have already been collecting our data for years, mostly to sell ads. But this goes one step further. The Federal Government is now mandating a level of surveillance as the price of holding a social media account.
It’s a shift from data for advertising to data for permission, a move that risks normalising surveillance as the price of entry.
And if you think handing over ID is risk-free, think again. Just last month, a data breach at Discord exposed copies of government identification documents collected for age verification. More than 68,000 Australians were affected.

That’s the reality of expanding sensitive data collection — every new database becomes a target. Under this ban, large numbers of Australians will be asked to upload ID or biometric data, creating high-value troves of personal information.
One breach is all it takes for that data to fall into the wrong hands, and once it’s out, you can’t get it back. The eSafety Commissioner’s guidance tries to reassure us: “No, not every account holder will go through an age check process if the platform has other accurate data.” But that doesn’t actually mean you escape scrutiny.
It just means that platforms will use what they already know about you to make the call. That’s the real shift that is happening here. We’re moving to a world where the law requires you to be profiled in order to participate.
This isn’t a one-off check, either. The rules require platforms to take proactive steps to detect accounts held by age-restricted users on an ongoing basis, not just at sign-up.
So even after you’ve proved your age, systems will continue checking for signs that an account might belong to someone under 16. Behavioural signals like when you log in or who you follow will feed systems designed to keep checking.
Still figuring things out
And it’s all happening at breakneck speed. The ban was rushed through the Parliament and the details are still being worked out.
The list of platforms captured by the ban continues to expand, and at the National Press Club last week the Minister for Communications suggested that she may consider extending the ban to older children down the track.
The ban is about to start taking effect, and both kids and parents are still trying to figure out how this is going to work and what it means for them.
One thing that it doesn’t mean is that your kids will, necessarily, be safer online.
Take YouTube: under-16s will no longer be able to hold a standard YouTube account, but they will still be able to watch YouTube without signing in.
This means that any content settings or additional parental controls that you may have attached to your child’s YouTube account will no longer apply when they access the platform.
Ironically, the social media ban could make it harder for parents to guide their children’s online experience on YouTube. YouTube itself has expressed concern that the new law “will, in fact, make Australian kids less safe on YouTube”.
How online harm should be addressed
None of this means we should ignore the real harms kids face online: cyberbullying, predatory behaviour, addictive design.
But a blanket ban on holding social media accounts enforced through mass data collection risks undermining privacy for everyone while failing to tackle the root problem: unsafe platforms and products.
A “digital duty of care” is one way to create safer spaces for children online by shifting responsibility onto platforms to design out risks and build accountability into their systems. Education matters too.
We need to be teaching kids how to navigate online spaces safely and giving parents the tools to support them.
Keeping children safe online is a goal we all share. But we shouldn’t have to give up our privacy to achieve it.
If we let this ban set the precedent, we risk creating a digital world where surveillance is the norm and we trade privacy for participation. That’s not the digital world that I want for my kids — or for any of us.
Lorraine Finlay is the Human Rights Commissioner at the Australian Human Rights Commission
