Questions to Ask AI Technology Service Providers Before Signing
Procurement decisions involving AI technology services carry contractual, operational, and regulatory consequences that extend well beyond the initial deployment. This page covers the structured inquiry framework that organizations apply before executing agreements with AI service providers — spanning data governance, model accountability, compliance obligations, and exit rights. The questions are organized into functional categories that correspond to distinct contract risk zones, helping procurement teams move from vendor selection toward binding terms with documented clarity.
Definition and scope
A pre-signature inquiry framework for AI services is a structured set of questions directed at candidate vendors to surface information that is material to contract negotiation, regulatory compliance, and operational continuity. Unlike generic IT procurement checklists, AI-specific due diligence must address concerns that do not arise in conventional software procurement: model opacity, training data provenance, output liability, and algorithmic drift.
The scope of these questions spans the full lifecycle of AI technology services — from initial consulting engagements through managed AI services, model training, and integration services. Organizations procuring services in regulated sectors — healthcare, financial services, federal contracting — face additional disclosure requirements layered on top of standard commercial due diligence.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0) defines four core functions — Govern, Map, Measure, Manage — that directly correspond to the categories of questions a buyer should pose before signature. Each function surfaces a class of provider obligation that should be reflected in contract language.
How it works
The inquiry process operates in three sequential phases, each producing artifacts that feed into contract drafting or vendor elimination.
Phase 1 — Pre-RFP Discovery
Before issuing a formal request for proposal, the procuring organization identifies the AI system category (generative, predictive, automation, NLP, computer vision), the data types involved, and the applicable regulatory regime. This classification determines which question categories carry highest weight.
Phase 2 — Structured Vendor Questionnaire
A written questionnaire is issued to shortlisted vendors. Questions are grouped by risk domain. Responses become exhibits to the final agreement or grounds for disqualification.
Phase 3 — Contract Review and Gap Closure
Legal and technical reviewers compare vendor responses against proposed contract language. Gaps between verbal representations and written terms are either negotiated closed or become grounds for disqualification. The AI Technology Services Contracts framework provides the structural counterpart to this question set.
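The Phase 3 comparison can be sketched as a simple data flow. The structures and field names below are illustrative only — they are not part of any standard or vendor questionnaire format:

```python
from dataclasses import dataclass

# Hypothetical structures modeling the Phase 2 questionnaire and the
# Phase 3 gap-closure comparison described above.

@dataclass
class VendorResponse:
    vendor: str
    answers: dict[str, str]   # question id -> written questionnaire response

@dataclass
class ContractDraft:
    vendor: str
    terms: dict[str, str]     # question id -> proposed contract language

def find_gaps(response: VendorResponse, draft: ContractDraft) -> list[str]:
    """Phase 3: flag questions the vendor answered in writing that have
    no corresponding language in the proposed contract draft."""
    return [
        qid for qid, answer in response.answers.items()
        if answer and not draft.terms.get(qid, "").strip()
    ]

resp = VendorResponse("Acme AI", {
    "data_deletion": "Client data deleted within 30 days of termination.",
    "model_change_notice": "30-day advance notice of major model updates.",
})
draft = ContractDraft("Acme AI", {
    "data_deletion": "Provider shall delete client data within 30 days...",
    # model_change_notice absent from the draft -> negotiate or escalate
})
print(find_gaps(resp, draft))  # ['model_change_notice']
```

In practice the "gap" test is a legal judgment, not a string check; the sketch only shows why responses are kept in machine-readable form as exhibits.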
Core question domains (numbered breakdown)
1. Data governance — Who owns training data? Is client data used to retrain shared models? What data residency and deletion obligations exist?
2. Model accountability — What version control and change notification procedures govern model updates? How are accuracy benchmarks defined and measured?
3. Compliance and certification — What certifications does the provider hold (SOC 2 Type II, ISO/IEC 42001, FedRAMP)? Are audit rights granted to the client?
4. Explainability and bias — What methods does the provider use for bias testing? Can the model produce human-readable explanations for consequential decisions?
5. Security — How are model endpoints secured? What incident response SLAs apply to AI-specific threats (model inversion, prompt injection)?
6. Liability and indemnification — Who bears liability for harmful model outputs? Are IP indemnities explicitly extended to AI-generated work product?
7. Exit and portability — What are the data return timelines? Can trained model weights be exported, and under what licensing terms?
8. Pricing and change control — How are inference compute costs invoiced? What triggers price adjustments? The AI Technology Services Pricing Models page details the common structures.
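Phase 1 classification determines how much weight each domain carries. A minimal sketch of that weighting, using invented numbers purely for demonstration (each buyer sets its own):

```python
# Illustrative domain weights keyed by regulatory regime (Phase 1 output).
# The regimes, domain keys, and weight values are assumptions for this
# sketch, not prescribed by any framework.

DOMAIN_WEIGHTS = {
    "healthcare": {
        "data_governance": 3, "compliance": 3, "security": 3,
        "explainability": 2, "liability": 2, "model_accountability": 2,
        "exit": 1, "pricing": 1,
    },
    "commercial": {
        "data_governance": 2, "security": 2, "liability": 2,
        "exit": 2, "pricing": 2, "compliance": 1,
        "explainability": 1, "model_accountability": 1,
    },
}

def prioritized_domains(regime: str) -> list[str]:
    """Return question domains ordered highest-weight first for a regime."""
    weights = DOMAIN_WEIGHTS[regime]
    return sorted(weights, key=weights.get, reverse=True)

print(prioritized_domains("healthcare")[:3])
# ['data_governance', 'compliance', 'security']
```

The point of the sketch is structural: the questionnaire issued in Phase 2 should lead with the highest-weighted domains for the regime identified in Phase 1.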
Common scenarios
Healthcare procurement — A hospital system evaluating a diagnostic AI tool must ask questions aligned with the Health Insurance Portability and Accountability Act (HIPAA) Security Rule (45 CFR Part 164) and the FDA's evolving framework for Software as a Medical Device (SaMD). Key questions address PHI handling, audit log retention periods, and whether the model has received FDA 510(k) clearance or De Novo authorization.
Federal contracting — Agencies and prime contractors must address FedRAMP authorization status, FAR clause incorporation, and alignment with OMB Memorandum M-24-10 governing agency AI use. A provider without a documented AI governance policy that maps to NIST AI RMF functions represents a disqualifying gap for most federal procurement offices.
Financial services — Organizations subject to the Equal Credit Opportunity Act (15 U.S.C. § 1691) must obtain from the provider a clear account of how adverse action explanations are generated for credit decisions — a question that standard IT contracts do not address.
Startup and SMB contexts — Smaller organizations with limited legal resources concentrate questions on three high-leverage areas: data ownership, exit portability, and SLA remedies. The AI Technology Services for Small Business page provides context for scope-calibrated procurement.
Decision boundaries
When a vendor's answers are sufficient vs. disqualifying — A provider that can supply SOC 2 Type II attestation, a documented model change notification process, and a written data deletion schedule meets a minimum baseline for non-regulated commercial use. A provider that cannot articulate training data provenance, claims no audit rights are available, or excludes AI-generated output liability entirely should be disqualified from procurements where output accuracy is consequential.
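The sufficiency boundary above can be expressed as a simple rule set. The attestation and red-flag labels here are hypothetical shorthand for the items named in the text:

```python
# Hypothetical encoding of the minimum baseline and disqualifiers
# described above; labels are illustrative shorthand.

BASELINE = {
    "soc2_type2_attestation",
    "model_change_notification_process",
    "written_data_deletion_schedule",
}
DISQUALIFIERS = {
    "no_training_data_provenance",
    "no_audit_rights",
    "excludes_ai_output_liability",
}

def evaluate(attestations: set[str], red_flags: set[str]) -> str:
    """Apply the decision boundary: disqualifiers trump everything;
    otherwise the full baseline must be present."""
    if red_flags & DISQUALIFIERS:
        return "disqualify"
    if BASELINE <= attestations:
        return "baseline_met"
    return "gap_closure_required"

print(evaluate(BASELINE, set()))                      # baseline_met
print(evaluate(BASELINE, {"no_audit_rights"}))        # disqualify
```

Note the ordering: a single disqualifier ends the evaluation even when every baseline item is attested, which mirrors the text's treatment of liability exclusions and missing provenance as non-recoverable gaps.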
Negotiated vs. non-negotiable terms — Pricing structures, SLA tiers, and support windows are negotiable. Data processing addendum (DPA) terms that conflict with GDPR Article 28 or CCPA requirements are not negotiable if the organization is subject to those regimes — the provider must conform or be eliminated.
Proprietary model vs. open-weight model providers — Proprietary model providers (where model weights are not accessible) require stronger contractual protections around continuity, since a provider shutdown eliminates the capability with no portability path. Open-weight model providers shift risk toward the client's internal infrastructure but provide full exit optionality. This contrast is explored further at Evaluating AI Technology Service Providers.
Pilot programs as risk gates — Running a structured pilot before full contract execution, as described in the AI Technology Services Pilot Programs framework, allows organizations to validate vendor responses against observable behavior — measuring actual output accuracy, latency, and explainability against the representations made during due diligence.
References
- NIST AI Risk Management Framework (AI RMF 1.0)
- OMB Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence
- 45 CFR Part 164 — HIPAA Security Rule (eCFR)
- 15 U.S.C. § 1691 — Equal Credit Opportunity Act (House.gov)
- ISO/IEC 42001 — AI Management System Standard (ISO)
- FedRAMP Authorization Program (GSA)
- FDA Software as a Medical Device (SaMD) Policy Framework (FDA)