Top AI Threats

AI Threat Guides

Practical how-to guides, checklists, and curated resources for understanding and defending against AI threats.

How-To Guides

AI Security Best Practices: How to Secure LLM Applications

Ten security best practices for LLM applications, mapped to OWASP LLM Top 10. Covers model layer, application layer, data layer, and agentic AI security—including a scannable implementation checklist.

How to Assess AI Threat Risk: Bias, Fairness, and Harm Evaluation

A 4-step methodology for detecting AI bias and assessing fairness in AI systems, covering data audits, fairness criterion selection, disparate impact testing, and production monitoring. Includes a tools comparison and an explanation of the fairness impossibility theorem.
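Disparate impact testing, one of the steps listed above, reduces to comparing selection rates across groups. A minimal sketch of the widely used four-fifths rule (function name is illustrative; the 0.8 threshold is the rule's conventional cutoff):

```python
from collections import defaultdict

def disparate_impact_ratio(selected, group):
    """Selection rate per group, plus the four-fifths-rule ratio.

    selected: list of 0/1 outcomes; group: parallel list of group labels.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for s, g in zip(selected, group):
        totals[g] += 1
        hits[g] += s
    rates = {g: hits[g] / totals[g] for g in totals}
    # Ratio of the lowest selection rate to the highest; values below
    # 0.8 flag potential disparate impact under the four-fifths rule.
    return min(rates.values()) / max(rates.values()), rates
```

A ratio is a screening signal, not a verdict; the guide pairs it with disaggregated evaluation and data audits.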

How to Build an AI Incident Response Plan

A 5-phase AI incident response framework covering detection, containment, investigation, remediation, and regulatory reporting—including EU AI Act Article 62 obligations and AIID submission guidance.

How to Detect Adversarial Inputs: A Practitioner Checklist

Step-by-step workflow for identifying adversarial inputs targeting AI systems, including input validation, transformation testing, behavioral monitoring, and response procedures for security and ML teams.

How to Detect AI Bias: A Practitioner Checklist

Step-by-step workflow for auditing AI systems for discriminatory outcomes, including fairness metric selection, disaggregated evaluation, data auditing, and regulatory compliance guidance.

How to Detect AI Phishing: A Practitioner Checklist

Step-by-step workflow for identifying AI-generated phishing emails and messages. Quick-reference checklists for email authentication, behavioral indicators, automated analysis, and organizational response.
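The email-authentication step above can be partially automated by parsing a message's Authentication-Results header (RFC 8601). A minimal sketch, with the helper names as illustrative assumptions:

```python
import re

def auth_results(header_value):
    """Extract spf/dkim/dmarc verdicts from an Authentication-Results header.

    Returns a dict such as {"spf": "pass", "dkim": "fail", "dmarc": "pass"}.
    """
    results = {}
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"\b{mech}=(\w+)", header_value)
        if m:
            results[mech] = m.group(1).lower()
    return results

def is_suspicious(header_value):
    # Treat any explicit non-pass verdict as a reason to escalate
    # the message for the behavioral checks in this guide.
    return any(v != "pass" for v in auth_results(header_value).values())
```

Authentication failures alone do not prove AI involvement; the checklist combines them with behavioral indicators.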

How to Detect AI-Generated Text: A Practitioner Checklist

Step-by-step workflow for evaluating whether text was written by a human or generated by an AI system. Covers manual indicators, automated detection tools, stylometric analysis, and responsible decision-making.

How to Detect Data Poisoning: A Practitioner Checklist

Step-by-step workflow for identifying and responding to data poisoning attacks on AI training data, fine-tuning corpora, and RAG knowledge bases. Covers pre-training inspection, during-training monitoring, post-deployment detection, and remediation.

How to Detect Deepfakes: A Practitioner Checklist

Step-by-step workflow for evaluating suspected deepfake video, audio, or images. Quick-reference checklists for visual inspection, audio analysis, provenance verification, and escalation guidance.

How to Detect Voice Cloning: A Practitioner Checklist

Step-by-step workflow for evaluating suspected AI-cloned voice audio. Quick-reference checklists for audio analysis, prosodic inspection, automated detection, out-of-band verification, and escalation guidance.

How to Prevent Prompt Injection: Implementation Checklist

Six layered architectural controls for defending LLM applications against prompt injection. Implementation-ready checklist with code examples, OWASP mapping, and multi-tenant guidance.

How to Protect Against AI Threats: A Practical Framework

A 7-step framework for protecting organizations against AI threats—covering threat surface identification, governance controls, technical hardening, red teaming, monitoring, and incident response.

How to Red Team AI Systems: Methodology, Tools, and Process

Adversarial evaluation of LLMs and agentic AI systems before deployment, testing for jailbreaks, prompt injection, harmful outputs, and bias. Includes a 4-phase methodology and a tools comparison.

How to Secure Your AI Supply Chain: A Practitioner Checklist

Step-by-step workflow for securing AI model supply chains, including model provenance verification, dependency scanning, data source authentication, third-party tool security, and ongoing supply chain monitoring.
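Model provenance verification, the first item in that workflow, often comes down to pinning and checking artifact digests before loading weights. A minimal sketch (function name is illustrative; the pinned digest would come from a trusted model card or registry):

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Check a downloaded model artifact against a pinned SHA-256 digest.

    expected_sha256: hex digest recorded from a trusted source at pin time.
    Returns True only if the file on disk matches the pinned digest.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large weight files are not read into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

Digest checks catch tampered or swapped artifacts but not a compromised upstream source, which is why the guide also covers data source authentication and ongoing monitoring.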