
Australia’s social media ban for kids under 16 just became law. How it will work remains a mystery

Photo: Kampus Production/Pexels

Yoopya with The Conversation

The federal parliament has passed legislation to ban people under 16 from having an account with some social media platforms.

In doing so, it has ignored advice from a chorus of experts – and from the Australian Human Rights Commission, which said the government rushed the legislation through parliament “without taking the time to get the details right, or even knowing how the ban will work in practice”.

The ban is, however, backed by 77% of Australians, according to a new poll. It won’t take effect for at least 12 months.

So what will happen before then?

What’s in the final bill?

The legislation amends the Online Safety Act 2021 and defines an “age-restricted user” as a person under 16. However, it does not name specific platforms that will be subject to the ban.

Instead, the legislation defines an “age-restricted social media platform” as including services where:

  1. the “sole purpose, or a significant purpose” is to enable “online social interaction” between people
  2. people can “link to, or interact with” others on the service
  3. people can “post material”, or
  4. it falls under other conditions as set out in the legislation.

The legislation does note that some services are “excluded”, but does not name specific platforms. For example, while services providing “online social interaction” would be included in the ban, this would not include “online business interaction”.

While it remains unclear exactly which social media platforms will be subject to the ban, those that are will face fines of up to A$50 million if they don’t take “reasonable steps” to stop under-16s from having accounts.

While there are reports YouTube will be exempt, the government has not explicitly confirmed this. What is clear at the moment is that people under 16 will still be able to view the content of many platforms online – just without an account.

The legislation does not specifically mention messaging apps (such as WhatsApp and Messenger) or gaming platforms (such as Minecraft). However, news reports have quoted the government as saying these would be excluded, along with “services with the primary purpose of supporting the health and education of end-users”. It remains unclear which platforms would fall under these exclusions.

In passing the final legislation, the government included additional amendments to its original proposal. For example, tech companies cannot collect government-issued identification, such as passports and driver’s licences, “as the only means” of confirming someone’s age. They can, however, collect government-issued identification “if other alternative age assurance methods have been provided to users”.

There must also be an “independent review” after two years to consider the “adequacy” of privacy protections and other issues.

What now for the tech companies?

As well as having to verify the age of people wanting to create an account, tech companies will also need to verify the age of existing account holders – regardless of their age. This will be a significant logistical challenge. Will there be a single day when every Australian with a social media account has to sign in and prove their age?

An even bigger concern is how tech companies will be able to verify a user’s age. The legislation provides little clarity about this.

There are a few options social media platforms might pursue.

One option might be to use a credit card linked to a person’s app store account as a proxy for age. Communications Minister Michelle Rowland has previously said this strategy would be included in the age verification trials currently underway. YouTube, for example, has previously allowed users to access age-restricted content by verifying with a credit card.

However, this approach would exclude people who meet the age requirement of being over 16 but do not hold a credit card.

Another option is to use facial recognition technology. It is among the strategies being trialled for the government to verify age for both social media platforms (for users under 16) and online pornography (for users under 18). The trial is being run by a consortium led by the UK-based Age Check Certification Scheme, and the results won’t be known until mid-2025.

However, there is already evidence that facial recognition systems contain significant biases and inaccuracies.

For example, commercially available facial recognition systems have an error rate of 0.8% for light-skinned men, compared with nearly 35% for dark-skinned women. Even one of the best-performing systems currently in use, Yoti (which Meta currently offers to Australian users ahead of a global rollout), has an average error of almost two years for people aged 13 to 16.
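To make the stakes of that error margin concrete, here is a minimal sketch, in Python, of how a platform might act on a facial age estimate. The function names, the two-year margin and the three-way decision are illustrative assumptions – nothing here is specified by the legislation or by any vendor. The point is only that an estimate falling within the error band cannot settle the question on its own, pushing users to a fallback method.

```python
from dataclasses import dataclass

# Illustrative sketch only: the names, the 2-year margin and the three-way
# decision below are assumptions, not anything specified by the Act or by
# any age-assurance vendor.

AGE_LIMIT = 16           # minimum age for an account under the legislation
ESTIMATION_MARGIN = 2.0  # assumed error margin in years, mirroring the
                         # ~2-year average error reported for ages 13-16

@dataclass
class AgeCheckResult:
    allowed: bool
    needs_secondary_check: bool
    reason: str

def assess_estimated_age(estimated_age: float) -> AgeCheckResult:
    """Decide what to do with a facial age estimate, given its error margin.

    If the estimate minus the margin still clears the limit, the user is
    clearly old enough; if the estimate plus the margin falls short, they
    are clearly too young. Anything in between lands in an uncertainty
    band the estimate alone cannot resolve, so a fallback method would be
    needed (recall the legislation requires alternatives to government ID,
    not ID alone).
    """
    if estimated_age - ESTIMATION_MARGIN >= AGE_LIMIT:
        return AgeCheckResult(True, False, "clears the limit even at worst case")
    if estimated_age + ESTIMATION_MARGIN < AGE_LIMIT:
        return AgeCheckResult(False, False, "below the limit even at best case")
    return AgeCheckResult(False, True, "inside the error band; fallback needed")

if __name__ == "__main__":
    for age in (13.0, 15.0, 16.5, 18.5):
        result = assess_estimated_age(age)
        print(f"estimate {age:>4}: allowed={result.allowed}, "
              f"secondary check={result.needs_secondary_check} ({result.reason})")
```

Under these assumptions, every estimate between 14 and 18 falls into the fallback band – which is precisely the 13-to-16 cohort the ban targets, and where the reported two-year average error sits.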

What about the digital duty of care?

Earlier this month the government promised to impose a “digital duty of care” on tech companies.

This would require companies to regularly conduct thorough risk assessments of the content on their platforms. Companies would also need to respond to consumer complaints and remove potentially harmful content.

This duty of care is backed by experts – including myself – and by the Human Rights Law Centre. A parliamentary inquiry into the social media ban legislation also recommended the government legislate this.

It remains unclear exactly when the government will fulfil its promise to do just that.

But even if the duty of care is legislated, that doesn’t preclude the need for more investment in digital literacy. Parents, teachers and children need support to understand how to navigate social media platforms safely.

In the end, social media platforms should be safe spaces for all users. They provide valuable information and community engagement opportunities to people of all ages. The onus is now on the tech companies to restrict access for youth under 16.

However, the work needed to keep all of us safe, and to hold the tech companies accountable for the content they provide, is only just beginning.

Author:

Lisa M. Given | Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University
