Is someone checking your browsing?
This website will appear in your browser history. If you're concerned someone may be monitoring your internet use, consider using a trusted friend's device, a library computer, or your browser's private/incognito mode. You can press Quick Exit or hit Escape at any time to leave this site quickly.
Learn more about staying safe online
Someone may be using AI to create fake evidence against you
AI can now generate convincing fake text message screenshots, emails, documents, and even audio recordings. Someone may use these fakes in court, with your employer or child protection services, or to damage your reputation.
What You Might Notice
Evidence is produced showing conversations you never had
Screenshots of messages you didn't send, emails you didn't write, documents you didn't create.
The evidence seems too perfect
Real conversations are messy. Fabricated ones can be unnaturally clean, perfectly supporting the other person's narrative.
Metadata doesn't match
Timestamps, file dates, or device details that don't line up with the claimed events are a warning sign. A forensic examiner can check whether screenshots, documents, or recordings have been generated or manipulated.
What You Can Do
Preserve your own records
Keep your actual message history, emails, and documents. Your real records are the best defence against fabricated ones.
Request forensic analysis of suspicious evidence
AI-generated content often has detectable artefacts. A digital forensic expert can identify fabrication.
Alert your lawyer to the possibility of AI fabrication
Courts are increasingly aware of this issue, but your legal team needs to know about it in order to challenge suspect evidence.
Important: This resource provides general information, not personal advice. Every situation is different. The actions suggested here may not be safe in your specific circumstances — particularly if the person causing harm could notice changes to your devices or accounts. Always consider your physical safety first.
If you need personalised support, contact 1800RESPECT (1800 737 732) or your local specialist domestic violence service. If you are in immediate danger, call 000.
Using AI to create fake text message screenshots, fabricated legal documents, forged emails, synthetic audio recordings, or manipulated metadata for use in legal proceedings, custody disputes, or reputation attacks. Distinct from Deepfake Creation, which covers visual media; this technique covers documentary evidence fabrication. AI lowers the skill barrier for creating convincing fakes that can fool courts, employers, and support services.
Mitigations for this technique are under development. If you have suggestions on how to improve this content, please submit a pattern.
Detection Indicators
ID           Detection Indicator
SAFE-D-0001  Anomalous Battery Consumption: Device battery depletes faster than baseline due to continuous background data transmission.
SAFE-D-0002  Unexplained Data Usage: Increased mobile data consumption without corresponding user activity. Monitor per-app data usage for unknown processes.
SAFE-D-0003  Device Temperature Anomalies: Device runs hot during idle periods, indicating background process activity.
SAFE-D-0004  Information Leakage Indicators: Adversary demonstrates knowledge of private communications, locations, or activities accessible only through device monitoring.
SAFE-D-0005  Unknown Applications or Profiles: Presence of unrecognised apps, device administrator privileges, or configuration profiles.
The TFA Matrix is a research framework under active development. Technique classifications, detection methods, and mitigations reflect current understanding and are subject to revision. This framework does not constitute forensic methodology, legal evidence standards, or clinical diagnostic criteria. Practitioners should apply professional judgement appropriate to their discipline and jurisdiction.