November 27, 2024

Unfair and impractical: Online safety bill to cause more harm than it prevents

After applying his data and privacy lens to the Online Safety Amendment Bill, Richard Taylor, managing director at Digital Balance, concludes it is unfair, impractical and yet another example of a well-intentioned policy that solves one problem while creating a myriad of others.

The Australian Government’s Online Safety Amendment (Social Media Minimum Age) Bill 2024 is a landmark piece of legislation aiming to protect children and teenagers from the harms of social media.

However, its broad measures, lack of clarity on implementation and the disparity in how children are treated compared to other government policies make it unfair and impractical.

For starters, the Bill treats anyone under the age of 16 as equally vulnerable, grouping 15-year-olds with 5-year-olds. A teenager on the cusp of adulthood engages with social media differently from a young child, yet both are subject to identical restrictions.

More alarmingly, this contrasts starkly with Victoria’s recent debate over the minimum age of criminal responsibility. Earlier this year the Victorian Government failed to pass the Youth Justice Bill which sought to raise the minimum age of criminal culpability to 12. As it stands, children as young as 10 can be detained and face criminal charges. While a 10-year-old can be jailed for their actions, a 15-year-old is considered incapable of safely managing a social media account. The inconsistency is glaring.

But children are not the only ones this Bill puts at risk. It leaves a far wider group of people exposed online.

The honeypot problem of age verification

One of the Bill’s most significant challenges lies in its reliance on age verification systems. While age verification is intended to protect children, it inadvertently creates a major privacy risk for all users, including adults.

To comply, platforms must verify the ages of every user, not just those under 16. This means collecting proof of age data – such as government-issued IDs or other sensitive personal information – from millions of Australians. The consequences are far-reaching.

Platforms would amass vast repositories of Personally Identifiable Information (PII) – names, birthdates, and even images or scans of IDs – creating a centralised honeypot of valuable data ripe for hackers to target.

While the Bill mandates the destruction of data once it is no longer needed, even temporary storage increases the risk of breaches. Cybercriminals are highly incentivised to target repositories like this for identity theft, financial fraud and social engineering attacks.
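To make that trade-off concrete, here is a hypothetical "verify then discard" flow, sketched in Python, in which a platform retains only a pass/fail flag and a check date rather than the ID itself. All names and structures below are illustrative assumptions, not any real platform's API. Even under this minimal design, the raw document must still exist in memory or in transit at verification time, and the surviving flag alone cannot later show a regulator how the check was performed.

```python
from dataclasses import dataclass
from datetime import date

MINIMUM_AGE = 16  # the threshold set by the Bill


@dataclass
class VerificationRecord:
    """What the platform retains: a flag and a check date, no ID data."""
    user_id: str
    over_minimum_age: bool
    checked_on: date


def verify_and_discard(user_id: str, date_of_birth: date,
                       today: date) -> VerificationRecord:
    """Hypothetical check: the date of birth (extracted from a submitted
    ID) is used once and never stored; only the outcome survives.

    Note what is lost: no evidence remains of *how* the age was
    established, which is exactly the audit problem discussed below.
    """
    # Compute age, subtracting one year if the birthday hasn't occurred yet.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return VerificationRecord(user_id, age >= MINIMUM_AGE, today)


# A 15-year-old fails the check; the surviving record holds no birthdate.
record = verify_and_discard("u123", date(2009, 6, 1), date(2024, 11, 27))
print(record.over_minimum_age)  # False
```

The design choice illustrated here – data minimisation at the cost of auditability – is precisely the tension the Bill leaves unresolved.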

Vulnerable populations, such as people without government-issued ID, may find themselves excluded from online platforms entirely.

Ambiguous data retention

The Bill requires platforms to destroy proof of age data “when it is no longer required,” a phrase left undefined. Platforms are left to interpret when this data ceases to be necessary, which raises several concerns.

Without clear guidance, platforms may feel pressured to over-retain sensitive data so they can answer potential audits or complaints, rather than risk regulatory scrutiny for destroying it too soon.

The eSafety Commissioner has broad powers to request information from platforms to verify compliance. For example, if a parent complains their under-16 child has an account, the platform may be required to provide proof of age verification. If the data has already been destroyed, the platform could struggle to demonstrate compliance.

Regulatory audits may demand evidence of age verification processes, requiring platforms to maintain some form of logs or metadata. However, retaining such records could conflict with the Privacy Act’s requirement to destroy unnecessary data.

Ironically, the very mechanisms designed to protect children’s privacy could undermine the privacy and security of the broader population. The Bill’s focus on strict verification measures risks creating a system that could cause more harm than it prevents.

The Bill reflects Australia’s tendency to address complex digital challenges with narrowly focused measures that fail to account for their wider implications. Without clear safeguards, it risks faltering in execution.

It is a bold attempt to enhance online safety and privacy. But the reliance on untested age verification technologies, and the risks posed by mass data collection, are likely to undermine its goals. Vague requirements around data retention and lack of clarity on enforcement raise significant questions about its practicality and potential unintended consequences. If Australia hopes to lead in online safety, it must reconcile these contradictions, ensuring efforts to protect children do not come at the cost of broader privacy and security.

Without careful adjustment, the Bill risks becoming yet another example of a well-intentioned policy that solves one problem while creating a myriad of others.

Richard Taylor

(He/Him)

Managing Director & Strategy Lead
