AI Technology Services Glossary: Key Terms and Definitions

The vocabulary surrounding AI technology services has expanded rapidly alongside the market itself, creating real confusion for procurement teams, legal reviewers, and technical leads who need precise shared definitions. This glossary covers the foundational terms used across AI technology services, from model architecture concepts to service delivery and governance language. Definitions align with established sources including NIST, ISO, and IEEE where applicable. Understanding these terms is a prerequisite for evaluating contracts, scoping engagements, and assessing AI service provider certifications accurately.


Definition and scope

An AI technology services glossary organizes the working terminology used across the full lifecycle of acquiring, deploying, and governing artificial intelligence systems as externally delivered services. The scope spans technical vocabulary (model types, training methods, inference infrastructure), commercial vocabulary (service delivery models, SLAs, pricing structures), and governance vocabulary (bias, explainability, audit trails).

NIST AI 100-1 — the Artificial Intelligence Risk Management Framework published in January 2023 — provides authoritative definitions for terms including "AI system," "trustworthiness," "bias," and "explainability" that are increasingly referenced in US federal procurement and state-level AI legislation. The glossary on this page draws on that framework, alongside ISO/IEC 22989:2022 (Artificial Intelligence Concepts and Terminology) and IEEE Std 2801™.

Scope boundaries: Terms here apply to commercially delivered AI services rather than open-source tooling or in-house research contexts. Where a term carries different meanings across regulatory contexts — for example, "AI model" under the EU AI Act versus NIST usage — those distinctions are flagged.


How it works

AI technology services terminology is organized across five functional layers. Each layer corresponds to a phase or dimension of service delivery.

  1. Infrastructure layer — vocabulary for the compute, storage, and networking substrate on which AI runs, including GPU cluster, inference endpoint, latency SLA, and model serving.
  2. Model layer — terminology for the AI artifacts themselves: foundation model, fine-tuned model, large language model (LLM), multimodal model, embedding, and weights.
  3. Data layer — terms governing data inputs and outputs: training data, ground truth, data labeling, annotation schema, data lineage, and synthetic data.
  4. Application layer — vocabulary for how AI capabilities are surfaced: API endpoint, retrieval-augmented generation (RAG), prompt engineering, agent, and orchestration.
  5. Governance layer — terms covering accountability and risk: explainability, algorithmic bias, model drift, audit log, red-teaming, and AI incident.

Understanding which layer a term belongs to clarifies which stakeholder owns the definition in any given contract. Infrastructure terms typically sit in master service agreements; governance terms appear in AI-specific addenda or ethics policies referenced in AI technology services compliance frameworks.


Common scenarios

The following terms appear most frequently in procurement, contract review, and technical scoping for AI services. Each entry includes a plain-language definition and the source standard where applicable.

Foundation Model — A large-scale AI model trained on broad datasets and designed to be adapted to downstream tasks through fine-tuning or prompting. Stanford HAI introduced this term in 2021 in the paper "On the Opportunities and Risks of Foundation Models" (Bommasani et al.).

Inference — The process by which a trained model generates outputs from new input data. Distinguished from training, which updates model weights. Inference latency — measured in milliseconds — is a key SLA metric in AI managed services contracts.

RAG (Retrieval-Augmented Generation) — An architecture that combines a language model with a real-time document retrieval step, allowing responses to reflect information not contained in the model's training data. Defined operationally in NIST AI 600-1 (Draft) on generative AI risk.

Model Drift — Degradation in model accuracy over time as real-world input distributions diverge from training data distributions. AI testing and validation services typically include drift monitoring as a deliverable.

Explainability vs. Interpretability — These terms are related but distinct. Explainability refers to the ability to provide post-hoc reasons for model outputs to external stakeholders. Interpretability refers to the degree to which a model's internal mechanics can be understood by technical reviewers. NIST AI 100-1 uses "explainability" as the governing term in its trustworthiness framework.

SLA (Service Level Agreement) — A contractual commitment defining measurable performance standards such as uptime (e.g., 99.9%), inference response time, and incident response windows. SLA structures vary significantly across AI technology services delivery models.

Fine-Tuning — The process of continuing to train a pre-existing model on a narrower, domain-specific dataset to improve task performance. Fine-tuning is central to AI model training services engagements and typically requires client-owned labeled data.

Prompt Injection — An adversarial technique in which malicious content embedded in user input manipulates an LLM's behavior. Classified as a critical risk in OWASP's LLM Top 10 (2023 edition), which lists it as LLM01.


Decision boundaries

Precise term usage determines legal exposure, procurement scope, and audit readiness. Three comparison pairs where ambiguity creates operational risk:

AI Model vs. AI System — Under NIST AI 100-1, an AI system includes the model plus its surrounding data inputs, outputs, interfaces, and human oversight mechanisms. A contract scoped to an "AI model" may exclude critical system components from the vendor's liability. This distinction matters directly when reviewing AI technology services contracts.

Automation vs. AutonomyAutomation describes systems that execute predefined rules without learning. Autonomy describes systems that adapt behavior based on environmental feedback. AI automation services may not involve any machine learning; conflating the two inflates expected capabilities.

Bias vs. FairnessBias (statistical) refers to systematic error in model outputs relative to a reference. Fairness is a normative standard applied to outcomes across demographic groups. ISO/IEC TR 24027:2021 (Bias in AI Systems and AI Aided Decision Making) distinguishes 14 distinct bias types. A vendor commitment to "reduce bias" without specifying which bias definition and which measurement methodology is not an enforceable standard.

Procurement teams reviewing questions to ask AI service providers should request written definitions of any governance term used in an SOW before contract execution.


References

📜 1 regulatory citation referenced  ·  ✅ Citations verified Feb 25, 2026  ·  View update log

Explore This Site