Skip to main content
TopAIThreats home TOP AI THREATS
Checklist

AI Deployment Checklist: Pre- and Post-Deployment Verification

A phased checklist for safe AI deployment covering security configuration, governance sign-off, testing gates, and post-deployment monitoring. Tied to real incident types from the topaithreats database and EU AI Act Article 9 requirements.

Last updated: 2026-03-15

Who this is for: ML engineers, DevOps/MLOps teams, product owners, and risk officers responsible for deploying AI systems. Applies to both initial deployments and significant updates to existing systems.

AI deployment failures are a documented cause of AI incidents. Misconfigured deployment and insufficient safety testing together account for a significant share of entries in the topaithreats incident database—including cases where default settings left systems open to exploitation, where excessive permissions enabled unauthorized data access, and where untested edge cases caused harm at scale in production. This checklist operationalizes the controls that prevent these failures across three phases: pre-deployment security and safety, pre-deployment governance and compliance, and post-deployment verification.

Why Deployment Configuration Causes Incidents

Default settings in AI systems are optimized for ease of use, not for security. Out-of-the-box configurations typically include: verbose error messages that expose system internals, permissive CORS policies, no rate limiting, default API key scopes that grant more access than necessary, and no content filtering.

Each of these defaults has been the proximate cause of documented AI incidents:

  • Verbose error messages exposing system prompts in production
  • Absent rate limiting enabling prompt injection probing at high volume
  • Default embedding API scopes granting access to all documents regardless of tenant
  • No content filtering allowing policy-violating outputs to reach users

The checklist below treats defaults as guilty until proven innocent: every default setting should be reviewed and explicitly configured for production.

Phase 1: Security and Safety (Pre-Deployment)

Threat model and red team

Prompt injection and input security

Output and content safety

Access control and permissions

Default settings audit

Secrets management

Phase 2: Governance and Compliance (Pre-Deployment)

Risk classification

Bias and fairness

Documentation

Regulatory compliance

Sign-off

Phase 3: Deployment Configuration

Deployment execution

Rollback readiness

Phase 4: Post-Deployment Verification

Smoke tests

Monitoring activation

Incident response readiness

Phase 5: Ongoing Maintenance

After deployment, re-run relevant checklist sections when:

ChangeSections to Re-run
Model version update (provider-side)Phase 1 security + Phase 4 smoke tests
Fine-tuning or model retrainingPhase 1 full + Phase 2 bias testing
System prompt changePhase 1 injection security + red team targeted scope
New tool or data source integrationPhase 1 full + Phase 4 full
New deployment region / regulatory contextPhase 2 full
Significant traffic increasePhase 3 (capacity and rate limiting) + Phase 4 monitoring

For public-facing systems with high-risk capabilities, run a full red team quarterly regardless of whether changes have occurred. New attack techniques emerge continuously.

Common Deployment Failures and Their Causes

FailureRoot CauseChecklist Gate That Prevents It
System prompt exposed via API errorDefault verbose error messagesPhase 1: Default settings audit
Prompt injection via RAG documentRAG scanning only at query timePhase 1: RAG pipeline injection scanning at index stage
Cross-tenant data exposureApplication-layer-only tenant filteringPhase 1: Tenant-scoped retrieval at DB level
Agent exfiltrates data via emailExcessive agent permissionsPhase 1: Agent tool permissions (least privilege)
Discriminatory hiring decisionsBias testing skipped or done after deploymentPhase 2: Disparate impact testing
No rollback after model regressionModel version not pinnedPhase 3: Model version pinned