November 22, 2024

Australia’s Online Safety Amendment Bill: Safety or Overreach?


The Australian Government’s Online Safety Amendment (Social Media Minimum Age) Bill 2024 is a landmark piece of legislation aiming to protect children and teenagers from the harms of social media. By establishing a minimum age for social media accounts and introducing stringent privacy protections, it seeks to make platforms safer for young users. However, the Bill’s broad measures, lack of clarity on implementation, and the disparity in how children are treated across different areas of policy raise questions about its practicality and fairness.

Key Features of the Bill

The Bill amends the Online Safety Act 2021 and introduces several key provisions:

  1. A Minimum Age for Social Media
    Platforms must take “reasonable steps” to ensure users under 16 cannot create accounts. This applies to “age-restricted social media platforms”, defined broadly as services where enabling social interaction is a significant purpose. The goal is to reduce children’s exposure to harmful content and addictive features.
  2. Age Verification Systems
    Platforms will need to implement age assurance technologies. However, specifics on what constitutes “reasonable steps” are not prescribed, leaving platforms to navigate how to comply.
  3. Delayed Implementation
    The age verification requirements will not come into effect until at least 12 months after the Bill receives Royal Assent. This delay reflects the reality that no effective, privacy-preserving method of verifying age exists yet.
  4. Privacy Protections
    Data collected for age verification can only be used for this purpose unless explicit consent is obtained. Information must be securely destroyed once it is no longer needed.
  5. Severe Penalties for Non-Compliance
    Platforms face penalties of up to $49.5 million for failing to comply with these measures.

Protecting Kids or Oversimplifying Childhood?

The Bill treats all individuals under 16 as equally vulnerable, grouping 15-year-olds with 5-year-olds. While the intention is to safeguard children, this approach risks oversimplifying the complexities of adolescence. A teenager on the cusp of adulthood engages with social media differently from a young child, yet both are subject to identical restrictions under this law.

This contrasts starkly with Victoria’s recent debate over the minimum age of criminal responsibility. In August 2024, the Victorian Government failed to pass the Youth Justice Bill, which sought to raise the minimum age to 12. As it stands, children as young as 10 can still be detained and face criminal charges. The inconsistency is glaring: while a 10-year-old can be jailed for their actions, a 15-year-old is considered incapable of safely managing a social media account.

Such contradictions highlight the uneven treatment of children across Australian law. Critics argue that these policies reflect a lack of coherent thinking about what it means to protect children and how society can best support their development.

What Do We Lose When Kids Are Excluded?

Social media is often associated with risks, but it also provides a platform for young people to innovate, inform, and advocate. The Bill risks stifling the voices of young individuals who have used social media to drive meaningful change and create opportunities:

  • 6 News Australia: Leonardo Puglisi founded this independent news platform at just 11 years old in 2019. His reporting gained national attention in 2022 during the federal election campaign, when he interviewed then-Prime Minister Scott Morrison and Opposition Leader Anthony Albanese. Social media was pivotal in amplifying his work and connecting him with a broad audience.
  • Greta Thunberg: At just 15, Thunberg began her global climate strikes, leveraging platforms like Twitter and Instagram to mobilise millions worldwide. Her efforts have made her one of the most recognisable activists of our time, sparking debates, protests, and policy changes.

Young people are not just passive consumers of social media; they are active contributors. By restricting access, the Bill risks silencing the next generation of leaders, innovators, and changemakers.

The Honeypot Problem of Age Verification

One of the Bill’s most significant challenges lies in its reliance on age verification systems. While the requirement to verify age is intended to protect children, it inadvertently creates a significant privacy risk for all users, including adults.

To comply, platforms must verify the ages of every user, not just those under 16. This means collecting proof of age data – such as government-issued IDs or other sensitive personal information – from millions of Australians. The consequences of such a system are far-reaching:

  1. Massive Data Collection
    Platforms would amass vast repositories of Personally Identifiable Information (PII), including names, birthdates, and even images or scans of IDs. This creates a centralised honeypot of valuable data ripe for hackers to target.
  2. Increased Privacy Risks
    While the Bill mandates the destruction of data once it is no longer needed, even temporary storage increases the risk of breaches. Cybercriminals are highly incentivised to target such repositories for identity theft, financial fraud, and social engineering attacks.
  3. A Universal Burden
Adults who are not the focus of the Bill’s protective measures must also undergo verification, adding friction to the user experience and raising questions about proportionality. Vulnerable populations, such as those without government-issued ID, may find themselves excluded from online platforms entirely.

Ironically, the very mechanisms designed to protect children’s privacy could undermine the privacy and security of the broader population. The Bill’s focus on strict verification measures risks creating a system that could cause more harm than it prevents.
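One way to shrink the honeypot is data minimisation: verify once, keep only the yes/no outcome, and discard the sensitive inputs immediately. The sketch below illustrates that pattern; the `AgeAttestation` record, the `verify_and_discard` function, and the crude age calculation are all hypothetical illustrations, not anything the Bill prescribes or any platform’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AgeAttestation:
    """The only record retained after verification: no name, no birthdate, no ID scan."""
    user_id: str
    over_16: bool
    verified_at: datetime
    method: str  # e.g. "government_id" or "facial_estimation"


def verify_and_discard(user_id: str, id_document: bytes,
                       date_of_birth: datetime) -> AgeAttestation:
    """Check the user's age, then drop the sensitive inputs.

    Only the boolean outcome and a timestamp survive the call; the raw
    ID scan and the birthdate never reach long-term storage.
    """
    now = datetime.now(timezone.utc)
    age_years = (now - date_of_birth).days // 365  # rough, ignores leap days
    attestation = AgeAttestation(
        user_id=user_id,
        over_16=age_years >= 16,
        verified_at=now,
        method="government_id",
    )
    # "Destroy" the sensitive inputs: release all references so nothing
    # persists beyond this function (a real system would also wipe any
    # copies written to disk or held by the verification provider).
    del id_document, date_of_birth
    return attestation
```

Even under this design, the data exists transiently during verification, which is why the Bill’s mandated destruction only reduces, rather than eliminates, the breach risk described above.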

Ambiguity Around Data Retention

The Bill requires platforms to destroy proof of age data “when it is no longer required,” but this phrase is left undefined. Platforms are left to interpret when such data ceases to be necessary, which raises several concerns:

  1. Retention Periods:
    The Bill does not specify whether platforms must retain proof of age data for a minimum period to respond to potential audits or complaints. Without clear guidance, platforms may feel pressured to over-retain sensitive data to avoid regulatory scrutiny.
  2. Government Oversight:
    The eSafety Commissioner has broad powers to request information from platforms to verify compliance. For example, if a parent complains that their under-16 child has an account, the platform may be required to provide proof of age verification. If the data has already been destroyed, the platform could struggle to demonstrate compliance.
  3. Audit Challenges:
    Regulatory audits may demand evidence of age verification processes, requiring platforms to maintain some form of logs or metadata. However, retaining such records could conflict with the Privacy Act’s requirement to destroy unnecessary data.
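One conceivable way to reconcile audits with destruction is to retain metadata-only log entries: a record that a check occurred, with no PII attached. The sketch below assumes a hashed account identifier and a JSON log line; both the `compliance_log_entry` function and the field names are invented for illustration, not drawn from the Bill or any regulator’s guidance.

```python
import hashlib
import json
from datetime import datetime, timezone


def compliance_log_entry(user_id: str, over_16: bool) -> str:
    """Build an audit-log line proving a check happened, without keeping PII.

    The account identifier is hashed, so the log alone cannot be joined
    back to a person; but if a regulator queries a specific account, the
    platform can re-hash that account's ID and look up the entry.
    """
    entry = {
        "user_hash": hashlib.sha256(user_id.encode("utf-8")).hexdigest(),
        "over_16": over_16,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry, sort_keys=True)
```

An unsalted hash of a guessable identifier is only weak pseudonymisation, which illustrates the underlying tension: the more auditable the record, the more re-identifiable it becomes.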

Broader Implications: Lessons from the Digital ID Debate

The challenges surrounding age verification closely mirror criticisms of digital identity systems in Australia. Centralised systems designed to verify identity or age raise similar issues of:

  1. Exclusion of Vulnerable Populations:
    Just as digital ID policies risk excluding those without access to technology or government-issued credentials, age verification systems could disproportionately impact economically disadvantaged users, teenagers without ID, and those from marginalized communities.
  2. Erosion of Trust:
    The Bill’s lack of clarity around data retention and enforcement mechanisms could undermine public trust in the government’s ability to implement protective measures without creating additional risks.
  3. Implementation Complexity:
    Both digital ID systems and age verification frameworks face criticism for introducing unnecessary friction into user experiences while failing to guarantee that they will work as intended.

These parallels highlight a broader policy issue: Australia’s tendency to address complex digital challenges with narrowly focused measures that fail to account for their wider implications. Without clear safeguards, age verification risks repeating the failures of the digital ID debate in a new domain.

The Online Safety Amendment (Social Media Minimum Age) Bill 2024 represents a bold attempt to enhance online safety and privacy. However, its reliance on untested age verification technologies and the risks posed by mass data collection could undermine its stated goals. The Bill’s vague requirements around data retention and lack of clarity on enforcement mirror broader criticisms of digital identity policies, raising significant questions about its practicality and potential unintended consequences.

If Australia hopes to lead in online safety, it must reconcile these contradictions, ensuring that efforts to protect children do not come at the cost of broader privacy and security. Without careful adjustment, the Bill risks becoming yet another example of well-intentioned policy that solves one problem while creating another.

Richard Taylor

(He/Him)

Managing Director & Strategy Lead
