December 24, 2024

New Australian standards could hinder AI's role in online safety, tech firms argue


The standards aim to address generative AI’s misuse potential, but Microsoft argues that AI’s ability to identify problematic content could also be compromised

Tech firms argue that new Australian safety standards will inadvertently impede generative AI systems’ ability to detect and prevent online child abuse and pro-terrorism material.

Julie Inman Grant, the eSafety Commissioner, released two mandatory standards aimed at child safety in draft form last year. They require providers to detect and remove child abuse and pro-terrorism material “where technically feasible,” and to disrupt and deter new material of that nature.

The standards apply to various technologies, including websites, cloud storage services, text messages, and chat apps. They also encompass high-impact generative AI services and open-source machine learning models.

In a submission to the consultation on the standards, published on Thursday, WeProtect Global Alliance, a non-profit consortium of over 100 governments and 70 companies working to combat child sexual exploitation and abuse online, highlighted the problem eSafety is trying to address. It said open-source AI is already being used to create child abuse material and deepfakes, and that the proposed standards target the right platforms and services.

By emphasizing the potential for misuse, the submission argued, the threshold acknowledges that machine learning and artificial intelligence models, even those with limited exposure to sensitive or illicit datasets, can still be abused to generate illegal content, such as ‘synthetic’ child sexual abuse material and sexual deepfakes.

However, tech companies like Microsoft, Meta, and Stability AI stated that they were developing their technologies with safeguards to prevent misuse.

Microsoft cautioned that the drafted standards might restrict the effectiveness of AI safety models used to detect and flag child abuse or pro-terrorism material.

Microsoft said that to train AI models and safety systems such as classifiers to detect and flag this material, the models need to be exposed to examples of it, and evaluation processes need to be in place to measure and mitigate the risks.

It also said that using entirely ‘clean’ training data could reduce the effectiveness of such tools and their ability to operate with precision and nuance.
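Microsoft’s point is essentially about the composition of a classifier’s training data. As a purely illustrative sketch (not Microsoft’s system; the text, labels, and threshold below are hypothetical placeholders), a toy text-safety classifier built with scikit-learn shows why a model trained only on “clean” examples has nothing from which to learn the class it is meant to flag:

# Purely illustrative sketch: a toy binary "safety classifier" trained on
# placeholder strings. The data, labels, and threshold are hypothetical and
# stand in for the real (and sensitive) material such systems need.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training set: label 1 = policy-violating, label 0 = benign.
# If every example here were "clean" (all labels 0), the model would have
# no signal from which to learn the violating class -- the crux of the
# argument reported above.
texts = [
    "friendly holiday greetings from the team",    # benign
    "recipe for a simple weeknight dinner",        # benign
    "example of prohibited content placeholder",   # violating (stand-in text)
    "another prohibited content placeholder",      # violating (stand-in text)
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Flag new text when the predicted probability of the violating class
# exceeds a hypothetical review threshold.
candidate = "yet another prohibited content placeholder"
score = model.predict_proba([candidate])[0][1]
print(f"score={score:.2f}", "-> flag for review" if score > 0.5 else "-> allow")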

Microsoft highlighted that one of the most promising aspects of AI tooling for content moderation is advanced AI’s capability to assess context. Without training data that enables such nuanced assessment, there is a risk of losing the benefits of such innovation.

Stability AI similarly cautioned that AI will play a significant role in online moderation, and that overly broad definitions could make it harder to identify the very content needed to comply with the proposed standards.

Meta, Facebook’s parent company, noted that while its Llama 2 model includes safety tools and responsible use guidelines, enforcing safeguards is difficult once the tool has been downloaded.

“We cannot suspend the provision of Llama 2 once it has been downloaded, terminate an account, or deter, disrupt, detect, report, or remove content from downloaded models,” the company stated.

Google suggested excluding AI from the standards and instead addressing it as part of the current government review of the Online Safety Act and the Basic Online Safety Expectations.

The tech companies also echoed Apple’s recent comments, saying the standards must explicitly state that proposals to scan cloud and messaging services “where technically feasible” will not compromise encryption, and that technical feasibility should be defined by more than just the cost to a company of developing such technology.

In a statement, Inman Grant said the standards would not compel the industry to break or weaken encryption, monitor texts, or indiscriminately scan large volumes of personal data. She said she is now considering possible amendments to make that point clearer.

“Essentially, eSafety believes that the industry should not be exempt from the responsibility of addressing illegal content that is freely hosted and shared on their platforms. eSafety acknowledges that some major end-to-end encrypted messaging services are already taking measures to detect this harmful content,” she stated.

Inman Grant mentioned that the final versions of the standards will be presented in parliament for consideration later this year.
