Is someone checking your browsing?
This website will appear in your browser history. If you're concerned someone may be monitoring your internet use, consider using a trusted friend's device, a library computer, or your browser's private/incognito mode. You can press Quick Exit or hit Escape at any time to leave this site quickly.
Learn more about staying safe online
⚠ AI-generated intimate images are treated the same as real intimate images under Australian law.
Creating Fake Images or Videos of You
Someone uses AI or editing tools to create realistic fake images or videos of you, often sexual content, placing your face or likeness onto other bodies or into fabricated scenarios.
What You Might Notice
Realistic images or videos of you that you know are fake appear online
Content showing you in situations that never happened.
The other person threatens to create or share deepfake content
They reference AI tools or tell you they can 'make' images of you.
What You Can Do
Report to the eSafety Commissioner
Deepfake intimate images are within eSafety's remit for removal.
Report to police
Creating and distributing deepfake intimate images is a criminal offence.
Preserve evidence of the fake content and any threats
Screenshot everything, including the source and any messages about it.
Important: This resource provides general information, not personal advice. Every situation is different. The actions suggested here may not be safe in your specific circumstances — particularly if the person causing harm could notice changes to your devices or accounts. Always consider your physical safety first.
If you need personalised support, contact 1800RESPECT (1800 737 732) or your local specialist domestic violence service. If you are in immediate danger, call 000.
Creating synthetic intimate, compromising, or damaging media of a victim using AI-powered deepfake technology. Generated content may be indistinguishable from real imagery and can be used for extortion, reputational harm, or psychological abuse. Includes face-swapping onto pornographic content, synthetic voice generation, and fabricated video.
Mitigations
ID            Mitigation
              Synthetic Content Reporting: Report deepfake intimate content to the eSafety Commissioner and police. Deepfakes are treated equivalently to real NCII (non-consensual intimate images).
SAFE-M-0099   Source Image Restriction: Restrict access to facial photos on social media to limit deepfake source material.
Detection Indicators
ID            Detection Indicator
SAFE-D-0087   Synthetic Content Discovery: Intimate content surfaces depicting events that never occurred. May show AI artifacts on close inspection.
SAFE-D-0088   Fabricated Content Threats: The adversary threatens to distribute content depicting situations that never happened.
The TFA Matrix is a research framework under active development. Technique classifications, detection methods, and mitigations reflect current understanding and are subject to revision. This framework does not constitute forensic methodology, legal evidence standards, or clinical diagnostic criteria. Practitioners should apply professional judgement appropriate to their discipline and jurisdiction.