Ask an Expert: Why the social media ban for kids is an ethical turning point
UNSW Canberra sheds light on how Australia’s decision sets a global benchmark for online child safety.
The Wild West era of social platforms is coming to an end, with the ban of under-16s from social media set to come into effect on Wednesday 10 December.
What may seem like a sudden decision has been in play since 2021 when the House Select Committee consultations on Social Media and Online Safety had children’s safety front of mind.
Dr Sabrina Caldwell, an expert in ethics in technology from UNSW Canberra and an active participant in the consultation, says the Australian Government’s decision sets a clear standard for what’s acceptable online for children.
Most of us would agree that it is ethical for a society to exert an effort to look after its children. This entails ensuring they are restricted from engaging with some behaviours and activities until they are old enough. These determinations must also be balanced with other needs, such as freedom of expression for individuals and the right to conduct business by industry.
At present, social media is simply not safe enough for young children. These platforms are rife with mis- and disinformation, with seemingly real contacts who might not be who they say, or might not even be real people at all. With the advent of image manipulation tools and, more recently, artificial intelligence, we can’t trust the images we see, and even videos can now be ‘deep fakes’.
We hear stories regularly of the impacts of the dark side of social media, ranging from cyberbullying to radicalisation, financial scamming, and exploitation, often leading to social problems, mental health problems, and sometimes even self-harm. Social media is not a wholly safe landscape for anyone, and especially not for young people, who have less experience in dealing with the kinds of dangers social media can pose.
It is interesting to consider that the onus is on social media platforms and individuals to mutually ascertain that the individual is 16 years or older, rather than to prove that they are under 16. This means that while the ban only directly affects under-16s, it has the side effect of making it necessary for the rest of us to prove we are not under 16. This is a daunting problem, though perhaps less impactful than it would have been in previous decades, because, unfortunately, some information, like how old we are, is already severely compromised. Even if our exact age isn’t in the data files of any number of tracking companies or hacked information databases, our online behaviour and applied AI algorithms can infer how old we are – the ads served to us by the platforms should tell us that. But being forced to prove our identity and age through data entry or facial recognition techniques is also a step too far for most of us.
Families are the first line of defence for children, faced with the challenge of encouraging their children to grow and explore their world, but to balance that with keeping them safe. This is not always easy, and some children do not have that robust nurturing environment. That is why we need to ensure that the systems of society help, rather than hinder.
It is likely that in the case of social media platforms this line will have to be drawn not just by the Government with decisions such as the under-16s ban, but by the marketplace under pressure from the ban. These platforms will be negatively impacted by the loss of under-16 users, so it is reasonable to assume they will seek to develop child-safe platforms. If, for example, Meta created a spin-off Facebook platform designed for under-16s, it could monitor that platform rigorously and create the safe environment needed for this age range, occupied only by young people up to the age of 15. It could then provide a pathway for those users to transition to the mainstream version of Facebook when they reach the age of 16.
The dynamics of industry, government and family are certainly set to change. A step like this has implications for all three. For tech companies, losing the under-16 sector represents not only a loss of current customers but also the pipeline for the future, and they will need to engineer new ways to attract children as they reach the age of 16. Companies who advertise directly to children (toys, video games, music, etc.) will have to change their ads to target parents instead, or pull their advertising dollars from the social media platforms. Governments may put more accountability onto social media platforms to ensure they achieve an effective lockout of under-16s. Parents will have to contend with unhappy children who don’t understand why they can no longer do the things they used to do on social media. New laws may be needed to enforce the ban. All of these changes are non-trivial, and we’ll have to see how they can be accommodated.
It won’t work perfectly, but it can work imperfectly. Some young people will find ways to circumvent the restrictions. However, even if they find a way to sneak online, they will not find most of their peers there, and this will detract significantly from the social media experience.
It is worthwhile to note that young people who do access these banned platforms will know that they are circumventing laws meant to protect them. This is quite different from being given carte blanche by society to wander freely through the light and dark of social media and possibly be placed in harm’s way.
Australia’s ground-breaking initiatives may lead the way for other countries; Malaysia announced in November that it will be implementing a similar ban to take effect in 2026.
In the meantime, the rest of the world is watching.