AI Security & Governance · 9 min read · May 6, 2026

AI Risk Assessment Template for Enterprise Compliance Teams

Enterprise risk management frameworks were not designed for AI systems. The standard risk assessment templates that compliance teams have refined over decades -- built around deterministic software, well-defined inputs and outputs, and predictable failure modes -- break down when applied to large language models, machine learning pipelines, and agentic AI systems. The outputs are probabilistic. The failure modes are novel. The attack surfaces are different. And the regulatory landscape is evolving faster than most organizations can update their control libraries.

This does not mean that AI systems cannot be rigorously assessed for risk. It means that compliance teams need a purpose-built assessment framework that accounts for AI-specific characteristics while integrating with existing enterprise risk management processes. What follows is a structured template for AI risk assessment, along with the reasoning behind each section and guidance for implementation.

Why Standard Risk Assessments Fall Short for AI

Traditional IT risk assessments evaluate systems based on confidentiality, integrity, and availability -- the CIA triad. For AI systems, these dimensions remain relevant but insufficient. AI introduces risk categories that the CIA triad does not capture.

Output reliability risk reflects the fact that AI systems produce probabilistic outputs that can be incorrect, misleading, or inconsistent across identical inputs. A traditional application either returns the correct result or throws an error. An AI system can return a confidently stated but factually wrong answer, and the user may have no way to distinguish it from a correct one.

Bias and fairness risk arises because AI models trained on historical data can perpetuate and amplify the biases present in that data. A loan underwriting model, a hiring screening tool, or a customer service prioritization system can produce discriminatory outcomes without any explicit discriminatory logic in the code.

Explainability risk reflects the difficulty of understanding why an AI system produced a specific output. For regulated decisions -- credit approvals, insurance underwriting, clinical recommendations -- the inability to explain a decision may itself be a compliance violation.

Model drift risk captures the reality that AI model performance degrades over time as the distribution of real-world inputs diverges from training data. Unlike traditional software that remains functionally stable until modified, AI systems can silently deteriorate.

Adversarial risk is specific to AI: prompt injection, data poisoning, model extraction, and evasion attacks represent threat vectors that do not exist for traditional software systems.

A risk assessment template that does not address these AI-specific dimensions will produce an incomplete picture of the risk posture, giving leadership false confidence and leaving the organization exposed to threats it has not evaluated.

Template Structure: Eight Core Sections

The following template structure addresses the full spectrum of AI risk while remaining practical for compliance teams to complete. Each section serves a specific purpose in building a comprehensive risk picture.

Section 1: System Description

Document the AI system in terms that non-technical stakeholders can understand. Include the system name and version, a plain-language description of what the system does, the business process it supports, the users who interact with it (both internal and external), the decisions or actions the system influences, and whether the system operates autonomously or with human oversight. Specify whether the AI is advisory (provides recommendations that humans act on) or automated (takes actions without human review). This distinction fundamentally changes the risk profile.
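Teams that want these fields to roll up into dashboards can capture them as a structured record rather than free text. The sketch below is a hypothetical Python schema, not a prescribed format; every field name is an assumption, chosen to mirror the prose above:

```python
from dataclasses import dataclass
from enum import Enum

class OperatingMode(Enum):
    ADVISORY = "advisory"    # provides recommendations that humans act on
    AUTOMATED = "automated"  # takes actions without human review

@dataclass
class SystemDescription:
    name: str
    version: str
    summary: str                   # plain-language description of what the system does
    business_process: str
    users: list[str]               # internal and external user groups
    decisions_influenced: list[str]
    operating_mode: OperatingMode  # advisory vs. automated fundamentally changes the risk profile
```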

Section 2: Data Classification

Catalog the data the AI system processes, categorized by sensitivity. Document the data used for model training, the data used as inference inputs, the data produced as outputs, and any data retained in logs, caches, or feedback loops. For each data category, record the classification level (public, internal, confidential, restricted), the regulatory regime that applies (GDPR, HIPAA, CCPA, GLBA, sector-specific regulations), retention requirements, and geographic restrictions on storage and processing. Pay particular attention to whether personal data or protected health information flows through the model during inference. Many organizations overlook the fact that inference inputs are processed by the model and may be retained in logs or used for model improvement by third-party providers.
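A data catalog entry can follow the same pattern. Again a hypothetical sketch with illustrative field names; the lifecycle stages and regimes come straight from the list above:

```python
from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass
class DataAsset:
    description: str               # e.g. "customer records used as inference inputs"
    lifecycle_stage: str           # training | inference input | output | logs/cache/feedback
    classification: Classification
    regulatory_regimes: list[str]  # e.g. ["GDPR", "CCPA"]
    retention_requirement: str
    geographic_restrictions: list[str]
    contains_personal_data: bool   # PII or PHI flowing through the model at inference
```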

Section 3: Model Type and Capability

Describe the technical characteristics of the AI model. Record the model architecture (transformer-based LLM, classification model, regression model, ensemble), the model source (commercially licensed, open-source, internally developed), the parameter count and deployment configuration, the training data sources and time period, any fine-tuning performed on enterprise data, and the model's capability boundaries -- what it is designed to do and what it is explicitly not designed for. This section establishes the technical baseline that assessors need to evaluate risk accurately.

Section 4: Threat Analysis

Identify threats specific to the AI system across multiple threat categories. Input manipulation threats include prompt injection (direct and indirect), adversarial examples designed to cause misclassification, and data poisoning of training or fine-tuning data. Output threats include hallucination or confabulation, generation of harmful or inappropriate content, and information leakage through model outputs that reveal training data. Infrastructure threats include unauthorized model access, model exfiltration, and side-channel attacks that infer model architecture or training data. Supply chain threats include compromised model weights, backdoored dependencies in the inference stack, and unauthorized model modifications. For each threat, assess the likelihood (based on exposure, attacker motivation, and existing controls) and the potential impact on the business.
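To keep the threat catalog machine-readable, each entry can record its category and ratings explicitly. A minimal sketch, assuming the four categories above and the five-point scales defined later in the scoring methodology section; all names are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class ThreatCategory(Enum):
    INPUT_MANIPULATION = "input manipulation"  # prompt injection, adversarial examples, poisoning
    OUTPUT = "output"                          # hallucination, harmful content, information leakage
    INFRASTRUCTURE = "infrastructure"          # unauthorized access, exfiltration, side channels
    SUPPLY_CHAIN = "supply chain"              # compromised weights, backdoored dependencies

@dataclass
class Threat:
    identifier: str
    category: ThreatCategory
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain); see the scoring methodology below
    impact: int      # 1 (negligible) to 5 (critical)
```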

Section 5: Impact Assessment

Evaluate the consequences of risk materialization across multiple impact dimensions. Financial impact includes direct costs (regulatory fines, legal liability, remediation costs) and indirect costs (customer attrition, market share loss, increased insurance premiums). Operational impact assesses the effect on business process continuity if the AI system fails, produces incorrect outputs, or must be taken offline. Reputational impact considers the public and stakeholder reaction to AI-related incidents. Regulatory impact evaluates the consequences of non-compliance with applicable AI regulations, including enforcement actions, consent orders, and enhanced supervisory scrutiny. Assign impact ratings on a consistent scale and document the rationale for each rating.

Section 6: Mitigation Controls

For each identified threat, document the controls in place to mitigate the risk. Controls should be categorized as preventive (controls that stop the threat from materializing), detective (controls that identify when a threat has materialized), and corrective (controls that limit damage and restore normal operation). For AI systems, common controls include input validation and sanitization to defend against prompt injection, output filtering to prevent harmful or non-compliant content, human-in-the-loop review for high-stakes decisions, model monitoring for performance drift and anomalous behavior, access controls restricting who can query the model and with what data, and audit logging of all model inputs, outputs, and configuration changes. Map each control to the specific threat it addresses and assess the control's effectiveness. A control that exists on paper but is not consistently enforced provides false assurance.
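The control-to-threat mapping is easier to audit when it is explicit. A hypothetical sketch; the 0-to-1 effectiveness scale is an assumption, intended to make "exists on paper but is not consistently enforced" show up as a low number rather than a checked box:

```python
from dataclasses import dataclass
from enum import Enum

class ControlType(Enum):
    PREVENTIVE = "preventive"  # stops the threat from materializing
    DETECTIVE = "detective"    # identifies when a threat has materialized
    CORRECTIVE = "corrective"  # limits damage and restores normal operation

@dataclass
class Control:
    name: str                     # e.g. "input sanitization", "human-in-the-loop review"
    control_type: ControlType
    threats_addressed: list[str]  # identifiers of the threats this control maps to
    effectiveness: float          # 0.0 (exists on paper only) to 1.0 (consistently enforced)
```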

Section 7: Residual Risk

After accounting for mitigation controls, document the residual risk for each identified threat -- the risk that remains once controls are applied. Calculate it as a function of the remaining likelihood and impact after control effectiveness is factored in. For each residual risk, document whether it is accepted (within risk appetite), transferred (covered by insurance or contractual arrangements), or requires additional mitigation. Residual risk acceptance must be approved at the appropriate organizational level; high residual risks should require sign-off from the Chief Risk Officer or equivalent.
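The template deliberately leaves the calculation method open. One hypothetical approach, assuming the controls mapped to a threat fail independently and that effectiveness reduces likelihood rather than impact:

```python
def combined_effectiveness(effectiveness_values: list[float]) -> float:
    """Combine the effectiveness of all controls mapped to one threat.

    Assumes controls fail independently: a threat bypasses the whole set
    only if it bypasses every control.
    """
    failure_probability = 1.0
    for e in effectiveness_values:
        failure_probability *= (1.0 - e)
    return 1.0 - failure_probability

def residual_score(likelihood: int, impact: int, effectiveness: float) -> float:
    """Scale inherent likelihood down by combined control effectiveness.

    Impact is left unchanged unless a control specifically limits
    consequences. Likelihood is floored at 1: no control set eliminates
    a threat entirely.
    """
    residual_likelihood = max(1.0, likelihood * (1.0 - effectiveness))
    return residual_likelihood * impact
```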

Section 8: Review Schedule

AI risk assessments are not static documents. Define a review schedule that triggers reassessment based on both calendar events and change events. Calendar-based reviews should occur at least annually for low-risk systems and quarterly for high-risk systems. Change-triggered reviews should occur when the model is updated or retrained, when the system is applied to new use cases, when the data inputs change significantly, when new regulations or guidance are issued, and when a security incident or near-miss occurs involving the AI system. Document who is responsible for conducting each review and the escalation path for newly identified risks.
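The review logic is simple enough to automate as a scheduled check. A minimal sketch under the cadences stated above; the tier labels and trigger names are assumptions:

```python
from datetime import date, timedelta

# Calendar cadences from the text: at least annual for low-risk systems,
# quarterly for high-risk systems.
REVIEW_INTERVAL = {"low": timedelta(days=365), "high": timedelta(days=90)}

CHANGE_TRIGGERS = {
    "model_updated_or_retrained",
    "applied_to_new_use_case",
    "significant_input_data_change",
    "new_regulation_or_guidance",
    "security_incident_or_near_miss",
}

def review_due(risk_tier: str, last_review: date, events: set[str], today: date) -> bool:
    """A reassessment is due on a calendar basis or when any change trigger fires."""
    calendar_due = today - last_review >= REVIEW_INTERVAL[risk_tier]
    change_due = bool(events & CHANGE_TRIGGERS)
    return calendar_due or change_due
```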

Scoring Methodology

A consistent scoring methodology allows comparison across AI systems and aggregation into enterprise risk dashboards. Use a five-point scale for both likelihood and impact, with clearly defined criteria for each level. Likelihood ratings should range from rare (the event has not occurred in peer organizations and requires highly sophisticated attack capability) to almost certain (the event is occurring regularly in the industry and requires minimal attack sophistication). Impact ratings should range from negligible (no material financial, operational, or reputational consequence) to critical (existential threat to business operations, regulatory standing, or organizational viability). The risk score is the product of likelihood and impact, producing a scale from 1 to 25. Define risk tolerance thresholds: risks scoring above a defined threshold require executive acceptance, risks above a higher threshold require Board awareness, and risks above the highest threshold are unacceptable and require immediate remediation.
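The arithmetic is straightforward to encode. In the sketch below, the threshold values are placeholders to be replaced with your organization's own risk appetite, not recommended cut-offs:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Five-point likelihood times five-point impact, on a 1-25 scale."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be rated 1-5")
    return likelihood * impact

# Placeholder thresholds -- substitute the tolerances defined in your
# own risk appetite statement.
def disposition(score: int) -> str:
    if score >= 20:
        return "unacceptable -- immediate remediation required"
    if score >= 15:
        return "Board awareness required"
    if score >= 10:
        return "executive acceptance required"
    return "manageable within standard risk processes"
```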

Regulatory Mapping

Your AI risk assessment should map directly to applicable regulatory frameworks. Two frameworks are particularly relevant for most enterprises.

The EU AI Act classifies AI systems into risk tiers: unacceptable risk (prohibited uses), high risk (subject to mandatory requirements including conformity assessments, risk management systems, data governance, transparency, and human oversight), limited risk (subject to transparency obligations), and minimal risk (no specific obligations). Your risk assessment should include a determination of which EU AI Act risk tier applies to each AI system and document the specific obligations that apply at that tier. Even for organizations not currently subject to the EU AI Act, the framework provides a useful structure for thinking about AI risk categorization.

The NIST AI Risk Management Framework (AI RMF) is a voluntary framework organized around four functions: Govern (establishing accountability and oversight structures), Map (identifying and cataloging AI systems and their contexts), Measure (assessing and tracking AI risks), and Manage (prioritizing and responding to AI risks). Mapping your risk assessment outputs to AI RMF categories facilitates reporting to regulators and stakeholders who reference this framework and ensures comprehensive coverage of AI risk dimensions.

Integrating AI Risk into Existing ERM Frameworks

AI risk assessments should not exist as standalone documents disconnected from the enterprise risk management program. The goal is integration, not duplication.

Map AI risk categories to your existing risk taxonomy. If your ERM framework uses categories like operational risk, technology risk, compliance risk, and strategic risk, map AI-specific risks into those categories with clear notation that they carry AI-specific characteristics requiring specialized assessment. Aggregate AI risk scores into your existing risk dashboards so that leadership sees AI risk alongside other enterprise risks, not in a separate report that competes for attention.

Establish clear ownership for AI risk. In many organizations, AI risk falls between existing risk ownership boundaries -- the technology risk team may lack AI expertise, the data science team may lack risk management expertise, and the compliance team may lack technical depth in AI systems. Define explicit RACI assignments for AI risk assessment, monitoring, and escalation. Consider establishing an AI risk committee that brings together technology, risk, legal, and business representatives to provide integrated oversight.

Ensure that AI risk assessment findings flow into your existing risk reporting cadence. The Board risk committee should receive AI risk updates in the same format and frequency as other risk categories. Normalizing AI risk reporting -- rather than treating it as a special topic -- signals organizational maturity and ensures that AI risk receives appropriate governance attention.

Template Walkthrough: An Applied Example

Consider an enterprise deploying an LLM-powered system that reviews customer contracts and extracts key terms, obligations, and risk flags. Walking through the template: the system description identifies it as an advisory tool (human reviewers make final decisions), the data classification captures that it processes confidential commercial contracts subject to NDA protections, the model section documents a fine-tuned open-source model running on private infrastructure, and the threat analysis identifies prompt injection through malicious contract text, hallucinated contract terms that do not appear in the original document, and data leakage if model outputs inadvertently reveal information from other contracts the model was fine-tuned on.

The impact assessment rates hallucinated contract terms as high impact (incorrect risk flags could lead to unfavorable contract acceptance or unnecessary rejection) and data leakage as critical (disclosure of confidential contract terms from other clients). Mitigation controls include mandatory human review of all extracted terms, output validation that cross-references extracted terms against source document text, data isolation ensuring fine-tuning data is partitioned by client, and prompt injection filtering on all input documents. Residual risk for hallucination is rated medium (human review catches most errors but adds latency and cost), and residual risk for data leakage is rated low (data isolation and output validation provide strong protection).
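Expressed in the hypothetical schema sketched earlier, the hallucination threat from this walkthrough might look like the following; the numeric ratings are illustrative assumptions rather than values stated above:

```python
# One entry from the contract-review assessment. The 1-5 ratings are
# illustrative; the walkthrough states only "high impact" and "medium residual".
hallucinated_terms = {
    "threat": "hallucinated contract terms absent from the source document",
    "category": "output",
    "inherent": {"likelihood": 4, "impact": 4},  # rated high impact in the walkthrough
    "controls": [
        "mandatory human review of all extracted terms",
        "output validation against source document text",
    ],
    "residual": "medium",  # human review catches most errors, at a latency and cost penalty
}
```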


An AI risk assessment template is not a compliance checkbox. It is an analytical tool that forces systematic thinking about the risks that AI systems introduce to the enterprise. The template structure presented here is designed to be adapted to your organization's specific context, risk appetite, and regulatory obligations. Start with the highest-risk AI systems, refine the template through practice, and integrate the results into your existing risk management ecosystem. The organizations that build this capability now will be prepared for the regulatory requirements and stakeholder expectations that are arriving rapidly.
