Is Australia ready for a teen social media ban? UNSW experts weigh in

2025-08-13T09:50:00+10:00


Children's safety is at risk, but does the law address the real issue?

Kate Newton

The government will soon enforce a controversial policy to prevent children under 16 from accessing social media in Australia. 

World-first social media legislation, due to come into effect in December, has drawn both praise and criticism from experts. This week, one of the world's leading authorities on child sexual exploitation and abuse, Professor Michael Salter from UNSW Arts, Design and Architecture, will chair a panel discussion on the ban in Sydney.

“Social media was made by adults, for adults, and aggressively marketed to children. A social media ‘ban’ is no different to the age ‘bans’ that we apply to alcohol, cigarettes or driving a car,” he says.

Associate Professor Katharine Kemp, a data privacy and consumer protection law expert from UNSW Law & Justice, is critical of the legislation’s inadequate expert consultation on key factors, such as psychology, suicide prevention and children’s rights.

“The law goes too far regarding its excessive impact on beneficial uses of some social media by children and the privacy of all Australian internet users,” she says.

“But it also fails to get to the heart of the matter, which is the deeply unsafe design of online spaces, whether kids get around the ban, look to worse sites online, or simply turn sixteen.”

A fatal risk for children online

Prof. Salter says that given the statistics on child sexual exploitation and the rapid rise of online sexual extortion, he doesn’t see how we can justify continuing to let children onto social media sites.   

Childlight, a global child safety group, found that 300 million children experience some form of online child sexual abuse each year. Prof. Salter, who leads Childlight’s Australasian chapter, says much of the sexual abuse takes place on social media.

“Overseas organised crime networks use glaring safety loopholes in social media platforms, pretending to be a teenage girl online to elicit compromising images from teenage boys. They then blackmail the boys for money,” says Prof. Salter.

“We have suicides in Australia and overseas associated with online sexual extortion. It’s an online crime that has been rapidly increasing for years, but the response of social media companies has been extraordinarily lacklustre.”

Tech company ethics and private data security

The new legislation also states that platforms can't require government-issued identification, including digital IDs, as the sole means of age verification. A/Prof. Kemp says some verification alternatives can be wildly inaccurate or present security concerns.

“For example, an algorithm that sifts through all our content or messages, or uses biometric information such as face scans, creates a major privacy concern for all internet users, including children,” says A/Prof. Kemp.

“We have seen tech companies essentially get a slap on the wrist for serious privacy infringements—if they even face litigation. We know very little about what they are doing with the data collected on all of us through constant surveillance.”

A/Prof. Kemp highlights one example from 2017, when investigative journalists revealed a leaked Facebook document addressed to advertisers. It boasted that Facebook could monitor the mood shifts of millions of young Australians, including high-school students, detecting moments when they felt “useless”, “stupid” or “a failure”. Facebook claimed the document was an “internal error”.

No such thing as a completely safe internet

A/Prof. Kemp says that Australia’s privacy laws are outdated and inadequate in protecting both children and adults, and Prof. Salter says perpetrators can easily target children when social media companies pursue profit above safety. While governments and tech giants address these risks, what can people do to help themselves?

Dr Jake Renzella from the UNSW School of Computer Science and Engineering is an expert in computer science education, AI and software engineering. He says that just as society broadly accepts the risk-benefit trade-off of vehicles on roads, we must accept an inherent risk when using the internet.

“There are many tools, behaviours, and training that people can use to make the internet safer. Technology, internet, mental health, and AI literacy programs are some of the best tools we have to ensure society can access the internet safely,” he says.

For example, the eSafety Commissioner has produced an online guide, ‘Keeping children safe online in communities’. It includes guidance on recognising when someone is experiencing online abuse and what to do if you suspect that it is happening.

When it comes to protecting privacy online, people in the United Kingdom are reportedly turning to Virtual Private Networks (VPNs) in response to a new age verification law for adult websites. VPNs bypass geographical restrictions by routing a user’s internet traffic through a remote server, making it appear to websites that the user is in a different country.

Dr Renzella says that although VPNs are theoretically safe, not all are equal.

“Risks do exist, for example, when using poorly configured VPNs, or VPNs provided by bad actors who can then capture all traffic that a user may route through the VPN. It is always important to know that your VPN provider can view or log any data that flows through the VPN,” he says.

If the content mentioned in this article has caused distress, don't hesitate to get in touch with Lifeline (tel: 13 11 14) or the Kids Helpline (tel: 1800 55 1800). 

Media enquiries

Kate Newton
UNSW Law & Justice
News & Content Coordinator
Tel: +61 2 9348 3173
Email: k.newton@unsw.edu.au