Is someone checking your browsing?
This website will appear in your browser history. If you're concerned someone may be monitoring your internet use, consider using a trusted friend's device, a library computer, or your browser's private/incognito mode. You can press Quick Exit or hit Escape at any time to leave this site quickly.
Learn more about staying safe online
The person you're talking to might be a chatbot — or managed by AI
Scam operations now use AI to hold convincing conversations with many people at once. The messages feel personal and genuine, but they are generated by software designed to build trust and then extract money or compliance.
What You Might Notice
Responses come at unusual hours with no delay
AI doesn't sleep. If someone always responds instantly, regardless of time zone, that's worth noticing.
Their writing style is very consistent and polished
Real humans have typos, moods, and variations. AI-generated messages can be unnaturally consistent.
They never make mistakes or mention spontaneous real-world events
AI conversations can feel perfect but lack the messiness of real life.
They deflect or change subject when you ask specific questions
AI-managed conversations struggle with unexpected, specific, or off-script questions.
What You Can Do
Ask unexpected, specific questions
What's the weather like where you are right now? What did you have for breakfast? Real people answer easily. AI-managed conversations stumble.
Insist on a live video call
This is the single most effective way to verify you're talking to a real person. If they won't or can't, be very cautious.
If they ask for money, it's almost certainly a scam
No matter how real the relationship feels, a request for money signals the start of the extraction phase.
Important: This resource provides general information, not personal advice. Every situation is different. The actions suggested here may not be safe in your specific circumstances — particularly if the person causing harm could notice changes to your devices or accounts. Always consider your physical safety first.
If you need personalised support, contact 1800RESPECT (1800 737 732) or your local specialist domestic violence service. If you are in immediate danger, call 000.
Using AI chatbots, LLMs, or automated messaging to maintain multiple grooming relationships at scale. AI enables personalised, varied manipulation conversations that adapt to each target's responses. The 'pig butchering' industrial model uses AI to manage hundreds of simultaneous relationships, with human operators stepping in for high-value interactions.
Mitigations for this technique are under development. If you have suggestions on how to improve this content, please submit a pattern.
Detection Indicators
SAFE-D-0001: Anomalous Battery Consumption. Device battery depletes faster than baseline due to continuous background data transmission.
SAFE-D-0002: Unexplained Data Usage. Increased mobile data consumption without corresponding user activity. Monitor per-app data usage for unknown processes.
SAFE-D-0003: Device Temperature Anomalies. Device runs hot during idle periods, indicating background process activity.
SAFE-D-0004: Information Leakage Indicators. Adversary demonstrates knowledge of private communications, locations, or activities accessible only through device monitoring.
SAFE-D-0005: Unknown Applications or Profiles. Presence of unrecognised apps, device administrator privileges, or configuration profiles.
The TFA Matrix is a research framework under active development. Technique classifications, detection methods, and mitigations reflect current understanding and are subject to revision. This framework does not constitute forensic methodology, legal evidence standards, or clinical diagnostic criteria. Practitioners should apply professional judgement appropriate to their discipline and jurisdiction.