
Australian Researchers Develop New Technique to Block Unauthorized AI Training on Photos
Scientists at CSIRO, Australia's national science agency, have developed a method that prevents artificial intelligence models from learning from personal photos shared online. The technique subtly alters images in ways invisible to humans but disruptive to AI systems, blocking unauthorized training and the creation of deepfakes.
Protecting Privacy and Copyright in the Age of AI
The new approach was developed in partnership with the Cyber Security Cooperative Research Centre and the University of Chicago. It adds a protective layer to photos that mathematically limits what AI models can learn from them, and the guarantee holds even against attempts to retrain models or circumvent the protection. This breakthrough has strong implications for social media users, artists, and organizations that want to safeguard their content and personal data.
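The article does not describe the mathematics of the protective layer, but the general idea behind this family of techniques is to add a small, bounded perturbation to each image so the change stays imperceptible to humans. The sketch below is purely illustrative, assuming images as floating-point arrays in [0, 1] and an externally supplied perturbation; it shows only the bounding step, not how the protective noise itself is crafted.

```python
import numpy as np

def protect_image(image: np.ndarray, noise: np.ndarray, eps: float = 8 / 255) -> np.ndarray:
    """Add an imperceptibly small perturbation to an image.

    A generic sketch of the idea: the perturbation is clipped to an
    L-infinity budget `eps`, keeping the change invisible to humans,
    while the noise (supplied externally here) is what would be crafted
    to limit what a model can learn from the pixels.
    """
    delta = np.clip(noise, -eps, eps)             # enforce the invisibility budget
    protected = np.clip(image + delta, 0.0, 1.0)  # keep pixel values valid
    return protected

# Hypothetical usage with random noise standing in for crafted protective noise.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
out = protect_image(img, rng.normal(scale=0.1, size=img.shape))
```

The per-pixel change in `out` never exceeds `eps`, which is why such perturbations are invisible in practice even though they alter every pixel.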
For example, social media platforms could apply this protective technology at scale so every uploaded image is shielded from AI training algorithms. This would greatly reduce the risks of identity misuse, copyright theft, and deepfake production, empowering users to maintain control over their digital content.
Towards Broader Applications and Industry Collaboration
So far the technique has been validated only in controlled laboratory settings, but it is open for academic research, with code available on GitHub. Researchers plan to extend the method to other media types such as text, music, and video. The CSIRO team is actively seeking partners across AI safety, cybersecurity, defense, and ethics to further develop and deploy this technology.
This advance, recognized with a Distinguished Paper Award at the 2025 Network and Distributed System Security Symposium, marks a significant step toward stronger privacy and copyright protection in an era of rapidly advancing AI technologies.