AI Governance Framework for IT Teams: A Practical Guide

A practical AI governance framework for IT teams covering tool approval, data governance, security review, vendor evaluation, and shadow AI policies.


TLDR: AI governance for IT teams boils down to five pillars: a tool approval process, data governance rules, security review procedures, vendor evaluation criteria, and a shadow AI policy. You do not need a 50-page policy document. You need a lightweight framework that people actually follow. This guide provides the templates and checklists to build one.

Why IT Teams Need AI Governance Now

The AI governance conversation in most enterprises is happening in legal and compliance. The actual governance problem lives in IT. Engineering teams are embedding AI APIs into production systems. Marketing is using generative AI tools with customer data. Sales ops is feeding CRM data into third-party AI scoring platforms. Finance is experimenting with AI-assisted forecasting.

IT sits at the intersection of all of this. Without a governance framework, you end up with:

  • Sensitive data flowing to AI providers you have never vetted
  • AI-generated code in production without review standards
  • Vendor contracts that grant training rights over your data
  • Duplicate tool subscriptions across departments
  • No audit trail for AI-assisted decisions that affect customers

A practical governance framework does not prevent AI adoption. It channels it through reviewable, repeatable processes so that adoption is fast and responsible.

Pillar 1: AI Tool Approval Process

The goal is a lightweight process that handles 80% of requests in under a week while routing high-risk requests through deeper review.

Tier-Based Approval

Not every AI tool requires the same level of scrutiny. Use a tiered system:

TierDescriptionExamplesApproval ProcessTimeline
Tier 1: Low RiskNo company data exposure. Individual productivity tools.GitHub Copilot (with telemetry off), Grammarly Business, AI note-taking (audio only, no data export)Manager approval + IT registration1-2 days
Tier 2: Medium RiskCompany data involved but not sensitive. Internal use only.AI analytics on anonymized data, AI-assisted internal docs, chatbots for internal knowledge basesIT security review + data classification check3-5 days
Tier 3: High RiskSensitive data, customer-facing, or regulatory implications.AI in customer support (accessing customer records), AI in hiring decisions, AI processing financial dataFull security review + legal review + DPA verification2-4 weeks

The Approval Workflow

1. Requester submits AI Tool Request form
   (tool name, use case, data involved, users, budget)
        |
        v
2. IT classifies risk tier based on data sensitivity matrix
        |
        v
3. [Tier 1] Manager signs off, IT registers the tool
   [Tier 2] IT security reviews, checks data handling
   [Tier 3] Full review: security, legal, procurement, DPA
        |
        v
4. Approved tools added to "Approved AI Tools" registry
        |
        v
5. Quarterly review of all approved tools

AI Tool Request Form Template

Every request should capture:

  1. Tool name and vendor
  2. Business justification: What problem does this solve? What is the alternative without the tool?
  3. Data classification: What data will the tool access? (Public, Internal, Confidential, Restricted)
  4. User scope: Individual, team, department, or company-wide?
  5. Integration points: Does it connect to other systems? Which ones?
  6. Budget: Per-user cost, total annual cost
  7. Existing alternatives: Is there an approved tool that does this already?

Earned insight: The single most common governance failure I see is not a missing policy but an approval process that takes too long. If your Tier 1 approval takes three weeks, people will just use the tool without approval. They will expense it on a corporate card or use a personal account. A fast, tier-based process reduces shadow AI more effectively than any ban.

Pillar 2: Data Governance for AI

Data governance for AI builds on your existing data classification framework. If you do not have one, start there before tackling AI governance.

Data Classification Matrix for AI Use

Data ClassDefinitionAI Tool Restrictions
PublicInformation intended for public consumptionNo restrictions. Any AI tool can process.
InternalNon-sensitive business informationApproved AI tools only. No free-tier consumer AI tools.
ConfidentialCustomer data, employee PII, financial dataTier 2+ approved tools only. Data Processing Agreement (DPA) required. No model training on this data.
RestrictedTrade secrets, regulated data (HIPAA, SOX), credentialsTier 3 approved tools only. On-premise or dedicated instance. No cloud AI processing without explicit legal and CISO approval.

Key Data Governance Rules

Rule 1: No confidential or restricted data in general-purpose AI chatbots. This includes ChatGPT, Claude (consumer versions), Gemini, and any AI tool where the data handling terms do not explicitly exclude model training. Enterprise agreements with training opt-out clauses are acceptable for confidential data.

Rule 2: All AI tools processing company data must have a Data Processing Agreement (DPA). The DPA must specify:

  • Data is not used for model training
  • Data residency requirements are met
  • Data deletion procedures are defined
  • Subprocessor list is available and change notification is provided

Rule 3: AI outputs based on company data are company property. Establish clear ownership of AI-generated content, code, and analysis. This matters for IP protection and for liability.

Rule 4: Data minimization applies to AI. Send only the data the AI tool needs. Do not dump an entire customer database into an analytics tool when it only needs aggregate statistics.

Watch for training data clauses. Many AI vendors bury training data rights in their Terms of Service. The marketing page says “your data is private.” The ToS says “you grant us a license to use your data to improve our services.” These are not the same thing. Read the ToS and the DPA. If there is no DPA, that is a red flag for anything above public data.

Data Lineage for AI Systems

For AI tools that make or inform business decisions, maintain data lineage documentation:

  • What data goes into the AI system?
  • Where does that data originate?
  • How is the data transformed before AI processing?
  • What does the AI output?
  • Where does the AI output go?
  • Who acts on the AI output?

This lineage is essential for compliance (especially under emerging AI regulations like the EU AI Act) and for debugging when AI outputs are wrong.

Pillar 3: Security Review for AI Tools

Your existing vendor security review process needs AI-specific extensions. Here is what to add.

AI-Specific Security Review Checklist

Model and Data Security:

  • Does the vendor provide a model card or system documentation?
  • Is customer data used for model training? (Must be “no” for Tier 2+)
  • Where is data processed? (Region/country)
  • Is data encrypted in transit and at rest?
  • What is the data retention policy?
  • Can you request data deletion?

Access and Authentication:

  • Does the tool support SSO (SAML/OIDC)?
  • Does it support SCIM for user provisioning?
  • Are there role-based access controls?
  • Is there an audit log of all API calls and user actions?

Integration Security:

  • What API permissions does the tool require?
  • Does it follow least-privilege principles?
  • How are API keys/tokens stored?
  • Is there a webhook validation mechanism?

AI-Specific Risks:

  • Is there a risk of prompt injection affecting business logic?
  • Can the AI tool be manipulated to expose data from other tenants?
  • Does the vendor have an AI incident response plan?
  • What are the vendor’s practices around model updates (could a model change break your integration)?

Compliance:

  • SOC 2 Type II report available?
  • GDPR compliance (for EU data)?
  • Industry-specific compliance (HIPAA BAA, PCI DSS)?
  • AI-specific certifications or adherence to frameworks (NIST AI RMF, ISO 42001)?

Penetration Testing and Red Teaming

For Tier 3 AI tools that are customer-facing or process sensitive data, add AI-specific testing:

  • Prompt injection testing: Can users manipulate the AI to bypass intended behavior?
  • Data extraction testing: Can the AI be tricked into revealing training data or other users’ data?
  • Jailbreak testing: Can the AI be manipulated to produce harmful or off-brand content?

This testing can be performed internally or through specialized AI security firms like HiddenLayer or Robust Intelligence.

Pillar 4: Vendor Evaluation Criteria

When evaluating AI vendors, use these weighted criteria:

CriterionWeightWhat to Evaluate
Data handling and privacy25%Training data policy, DPA, data residency, encryption
Security posture20%SOC 2, SSO, RBAC, audit logs, incident response
Model transparency15%Model cards, explainability, bias testing, versioning
Integration capability15%API quality, SSO/SCIM, webhook support, existing connectors
Vendor stability10%Funding, revenue, customer base, product roadmap
Cost structure10%Pricing model, hidden costs, scaling economics
Support and SLA5%Response times, dedicated support, uptime guarantees

Red Flags in AI Vendor Evaluation

  • No DPA available: Walk away for anything above public data.
  • Training on customer data by default with no opt-out: Walk away.
  • No SOC 2 Type II: Acceptable for Tier 1 only.
  • Black-box pricing: Common in AI but risky. Get written pricing commitments for your projected usage.
  • No versioning for models or APIs: A model update could break your integration without warning.
  • Startup with less than 18 months of runway: You will be migrating sooner than you think.

Negotiation tip: AI vendors are more willing to negotiate on data handling terms than on price. If the standard ToS includes a training data clause, ask for an enterprise addendum that explicitly excludes your data. Most vendors with an enterprise tier will accommodate this.

Pillar 5: Shadow AI Policy

Shadow AI is the use of unauthorized AI tools by employees. It is the AI equivalent of shadow IT, and it is pervasive. A 2025 survey by Salesforce found that over 50% of enterprise employees use AI tools that IT has not approved.

Banning AI outright does not work. Employees will use personal devices and accounts. A shadow AI policy should redirect this behavior, not suppress it.

The Shadow AI Policy Framework

Acknowledge the reality: State explicitly that the company supports AI adoption and that the goal of the policy is safe and effective use, not prevention.

Provide approved alternatives: For every common shadow AI use case, provide an approved tool:

Use CaseCommon Shadow AI ToolApproved Alternative
Writing assistanceChatGPT (personal account)[Your approved enterprise AI writing tool]
Code generationCopilot (personal GitHub)GitHub Copilot Business (with org settings)
Image generationMidjourney (personal)[Your approved tool] or “request access”
Data analysisUploading CSVs to ChatGPT[Your approved analytics AI tool]
Meeting summariesOtter.ai (personal)[Your approved transcription tool]

Define what is never acceptable: Regardless of which tool, certain actions are always prohibited:

  • Uploading customer data to any unapproved AI tool
  • Using AI to process regulated data (HIPAA, financial) without approval
  • Using AI to make automated decisions about employees (hiring, performance) without HR and legal review
  • Bypassing the AI tool approval process for tools that integrate with company systems

Create a fast-track approval path: For individual productivity tools (Tier 1), make approval friction-free. A web form, manager approval over Slack, and same-day registration in the IT tool catalog.

Monitor and measure: Use your CASB (Cloud Access Security Broker) or endpoint monitoring to detect unauthorized AI tool usage. Treat detections as a signal to improve your approved tools catalog, not as a disciplinary issue (unless sensitive data is involved).

Communication Plan

Roll out the shadow AI policy with clear, non-threatening communication:

  1. Announce approved AI tools first. Lead with enablement, not restriction.
  2. Explain the risks of unapproved tools concisely (data leakage, IP issues, compliance).
  3. Provide a simple request process for tools not yet approved.
  4. Set a grace period (30 days) for employees currently using unapproved tools to switch or request approval.
  5. Follow up with department-specific guidance for teams with unique AI needs.

Governance Checklist: Your Quick-Start Template

Use this checklist to assess your governance readiness:

Foundation

  • Data classification framework exists and is current
  • Vendor security review process exists
  • IT asset management tracks software subscriptions
  • CASB or endpoint monitoring is deployed

AI Tool Approval

  • Tier-based approval process defined
  • AI tool request form created
  • Approved AI tools registry established
  • Quarterly review cadence set

Data Governance

  • Data classification matrix for AI use defined
  • DPA requirements documented by data class
  • Data lineage documentation template created
  • Training data opt-out verification process exists

Security Review

  • AI-specific security checklist added to vendor review
  • Prompt injection testing procedures defined (for Tier 3)
  • AI incident response plan drafted
  • Model versioning requirements documented

Vendor Evaluation

  • Weighted evaluation criteria established
  • Red flag checklist documented
  • Contract requirements for AI vendors defined
  • Vendor risk reassessment schedule set (annual minimum)

Shadow AI

  • Shadow AI policy drafted and approved by legal
  • Approved alternatives identified for common use cases
  • Fast-track approval path for Tier 1 tools operational
  • Monitoring for unauthorized AI tool usage active
  • Communication plan executed

Maintaining the Framework

Governance frameworks rot faster than the technology they govern. AI capabilities change quarterly. New tools emerge monthly. Your framework needs a maintenance cadence:

Monthly: Review and approve/reject pending AI tool requests. Update the approved tools registry.

Quarterly: Review approved tools for continued compliance (check for ToS changes, security updates, vendor stability). Review shadow AI monitoring data. Update data classification matrix if new data types have emerged.

Annually: Full framework review. Update vendor evaluation criteria. Reassess risk tiers. Review and update the shadow AI policy. Conduct training for IT staff on updated procedures.

Practical reality check: The best governance framework is one that is 80% complete and actually used, not one that is 100% complete and sitting in a Confluence page nobody reads. Ship your v1 framework in two weeks, iterate monthly, and measure adoption (how many AI tools go through the approval process vs. how many are detected as shadow AI). The ratio tells you whether the framework is working.

Bottom Line

AI governance for IT teams is not about building a bureaucracy around AI adoption. It is about creating a fast, clear path for approved use that is easier than going rogue. The five pillars (tool approval, data governance, security review, vendor evaluation, and shadow AI policy) provide structure without paralysis. Start with the approval process and shadow AI policy (these have the most immediate impact), then layer in data governance and security review as you scale. The checklist in this guide gives you a 2-week implementation target. Start there.