Is someone checking your browsing?
This website will appear in your browser history. If you're concerned someone may be monitoring your internet use, consider using a trusted friend's device, a library computer, or your browser's private/incognito mode. You can press Quick Exit or hit Escape at any time to leave this site quickly.
Learn more about staying safe online
Someone uses AI tools to generate harassing content — fake images, fake audio of your voice, mass-produced abusive messages, or AI-written defamatory content about you.
What You Might Notice
Harassing content that seems AI-generated or mass-produced
Messages, images, or posts that feel formulaic or are produced in large volumes.
Fake images or audio that look or sound like you
AI-generated content using your likeness or voice.
Abusive content that's unusually varied or persistent
The volume and variety suggest automated generation.
What You Can Do
Report AI-generated harassment to platforms
Many platforms are developing specific policies for AI-generated abusive content.
Preserve the content as evidence
AI-generated abuse is still abuse; preserved copies can support a report to police.
Report to the eSafety Commissioner
AI-generated intimate images or targeted harassment falls within eSafety's remit.
Important: This resource provides general information, not personal advice. Every situation is different. The actions suggested here may not be safe in your specific circumstances — particularly if the person causing harm could notice changes to your devices or accounts. Always consider your physical safety first.
If you need personalised support, contact 1800RESPECT (1800 737 732) or your local specialist domestic violence service. If you are in immediate danger, call 000.
Using AI tools including large language models, voice cloning, or text generation to create and distribute harassing content at scale. AI enables perpetrators to generate personalised, varied harassment content more quickly and in greater volume than manual methods allow, and can disguise the perpetrator's authorship.
AI Content Detection: Use AI detection tools to identify synthetic content for evidence purposes.
SAFE-M-0088
Platform-Level Reporting: Report automated abuse patterns to platforms rather than responding to individual items.
Detection Indicators
ID
Detection Indicator
SAFE-D-0079
Synthetic Language Patterns: Messages that are formulaic, vary slightly while making the same points, or arrive at unrealistic volumes.
SAFE-D-0080
AI-Generated Media Indicators: Media with artefacts characteristic of synthetic generation.
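For practitioners, the SAFE-D-0079 pattern above (formulaic messages that vary slightly while making the same points) can be approximated programmatically. The sketch below is an illustration only, not part of the TFA Matrix: it uses word-level Jaccard similarity as one rough proxy for "near-duplicate" messages, and the threshold values are arbitrary assumptions, not validated detection parameters.

```python
# Illustrative sketch only: flag a batch of messages whose pairwise
# word-level Jaccard similarity is high -- one rough proxy for the
# "formulaic, vary slightly" pattern described by SAFE-D-0079.
# Thresholds here are assumed values for demonstration, not tuned ones.
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two word sets (1.0 = identical vocabulary)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def looks_formulaic(messages, threshold=0.6, min_fraction=0.5):
    """Return True if at least `min_fraction` of message pairs share
    `threshold` or more of their vocabulary."""
    word_sets = [set(m.lower().split()) for m in messages]
    pairs = list(combinations(word_sets, 2))
    if not pairs:
        return False
    similar = sum(1 for a, b in pairs if jaccard(a, b) >= threshold)
    return similar / len(pairs) >= min_fraction


msgs = [
    "you should be ashamed of yourself",
    "you really should be ashamed of yourself",
    "honestly you should be so ashamed of yourself",
]
print(looks_formulaic(msgs))  # near-duplicate batch -> True
```

A real investigation would need far more robust methods (for example, character n-grams or embedding similarity, and handling of paraphrase), so treat this strictly as a sketch of the indicator's logic, not a forensic tool.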
The TFA Matrix is a research framework under active development. Technique classifications, detection methods, and mitigations reflect current understanding and are subject to revision. This framework does not constitute forensic methodology, legal evidence standards, or clinical diagnostic criteria. Practitioners should apply professional judgement appropriate to their discipline and jurisdiction.