AIwire

AI Glossary

Business AI terms explained in plain language. From AI agents to vector databases — if you’re new to AI or just need a refresher, start here.

20 terms

A

AI Agent

A software system that can autonomously plan, decide, and act to achieve a goal within defined boundaries, often using an LLM as its reasoning engine.

Autonomous Agent

An AI agent that operates independently without continuous human intervention, making decisions and taking actions based on its programming and environment.

Agentic Workflow

A business process where one or more AI agents execute tasks autonomously — researching, deciding, and acting within guardrails defined by humans.

AI Orchestrator

A system or agent that coordinates multiple AI agents, routing tasks, managing handoffs, and ensuring the overall workflow completes successfully.

C

Context Window

The maximum number of tokens an LLM can process in a single request (input + output). Larger windows allow processing more documents at once. Current models offer 128K to 1M tokens.

Related: token, LLM

Chain-of-Thought

A prompting technique where the LLM is instructed to reason step-by-step before giving a final answer. Improves accuracy on complex reasoning tasks.
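A minimal sketch of how such a prompt might be constructed (the wording of the instruction is illustrative, not a fixed formula):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction.

    Any phrasing that asks the model to reason before answering has a
    similar effect; this exact wording is just an example.
    """
    return (
        "Answer the question below. Think step by step and show your "
        "reasoning, then give the final answer on the last line.\n\n"
        f"Question: {question}"
    )

prompt = build_cot_prompt("A project has 3 phases of 4 weeks each. How long is it?")
print(prompt)
```

The returned string would be sent to an LLM as the user message; without the "think step by step" instruction, models are more likely to jump straight to an (often wrong) answer on multi-step problems.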

E

Embedding

A numerical vector representation of text or data that captures semantic meaning. Used for search, similarity matching, and as input to RAG systems.
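"Similarity matching" between embeddings is usually cosine similarity. A toy sketch with made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- the values are invented for illustration.
invoice = [0.9, 0.1, 0.0]
bill    = [0.8, 0.2, 0.1]
holiday = [0.0, 0.1, 0.9]

print(cosine_similarity(invoice, bill))     # high: similar meaning
print(cosine_similarity(invoice, holiday))  # low: unrelated
```

Because "invoice" and "bill" mean similar things, a good embedding model places them close together, so their cosine similarity is high while the unrelated pair scores low.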

F

Fine-Tuning

The process of further training a pre-trained LLM on domain-specific data to improve its performance for a particular task or industry.

Function Calling

An LLM capability to invoke external tools, APIs, or functions during generation. Enables agents to take real actions (send emails, query databases, make API calls) rather than just generate text.
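The mechanics can be sketched without any real provider API: the model emits a JSON tool call, and application code dispatches it to an actual function. The tool name and arguments below are invented for illustration:

```python
import json

# Tools the agent is allowed to call. Name and signature are
# illustrative, not any specific provider's schema.
def get_order_status(order_id: str) -> str:
    return f"Order {order_id} shipped."  # stub for a real database lookup

TOOLS = {"get_order_status": get_order_status}

# In practice the LLM emits this JSON during generation; here it is
# hard-coded so the dispatch step is visible.
tool_call = json.loads('{"name": "get_order_status", "arguments": {"order_id": "A-42"}}')

result = TOOLS[tool_call["name"]](**tool_call["arguments"])
print(result)  # "Order A-42 shipped." -- fed back to the model as context
```

The key design point is that the LLM never executes anything itself: it only names a tool and its arguments, and the surrounding application decides whether and how to run it.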

Foundation Model

A large AI model trained on broad data that serves as a base for many downstream applications. GPT-5, Claude, and Gemini are foundation models that can be used directly or fine-tuned for specific tasks.

H

Hallucination

When an LLM generates plausible-sounding but factually incorrect information. A critical risk in business applications, mitigated by RAG, fact-checking, and human oversight.

Related: RAG, inference

I

Inference

The process of running data through a trained AI model to generate a prediction or response. In LLMs, inference is what happens every time you send a prompt and receive a response.

Related: LLM, token

L

LLM

Large Language Model — an AI model trained on vast text datasets that can generate, understand, and reason about human language. Examples: GPT-5, Claude, Gemini.

M

Multi-Agent System

A system where multiple AI agents collaborate, each with specialised roles, to accomplish complex tasks that a single agent couldn't handle alone.

Model Deprecation

When an LLM provider announces that a specific model version will be retired and its API access disabled. Enterprises must migrate to replacement models before the sunset date.

Related: LLM, inference

P

Prompt Engineering

The practice of designing and refining input prompts to elicit desired outputs from an LLM. Includes techniques like chain-of-thought, few-shot learning, and system prompts.

R

RAG

Retrieval-Augmented Generation — a technique where an LLM retrieves relevant documents from a knowledge base before generating a response, improving accuracy and reducing hallucinations.
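The retrieve-then-generate flow can be sketched in a few lines. Real systems rank documents by embedding similarity; simple word overlap stands in here to keep the sketch dependency-free, and the documents are invented:

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for
    embedding-based search) and return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

docs = [
    "Refunds are processed within 14 days of the return.",
    "Our office is closed on public holidays.",
]
query = "How long do refunds take?"
context = retrieve(query, docs)

# The retrieved text is prepended to the prompt, grounding the answer.
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: {query}"
print(prompt)
```

Because the model is instructed to answer from the retrieved context rather than from memory, it is far less likely to hallucinate a refund policy that does not exist.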

Retrieval-Augmented Generation

See RAG. The full term for the technique of augmenting LLM responses with retrieved documents from a knowledge base.

T

Token

The basic unit of text that an LLM processes. Roughly ¾ of a word in English. LLM pricing is calculated per token, and context windows are measured in tokens.
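The ¾-word rule of thumb gives quick cost ballparks. Real tokenisers vary by model, so this is only an estimate; the per-token price below is an assumed input:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate using the 3/4-word heuristic (1 word = ~1.33 tokens).

    Real tokenisers differ per model; use this only for ballparks.
    """
    return round(len(text.split()) / 0.75)

def estimate_cost(text: str, price_per_million_tokens: float) -> float:
    # price_per_million_tokens is an assumed dollar price, not a quote.
    return estimate_tokens(text) / 1_000_000 * price_per_million_tokens

report = "word " * 7500  # stands in for a 7,500-word report
print(estimate_tokens(report))  # 10000
```

A 7,500-word report is therefore roughly 10,000 tokens, which also shows why it fits comfortably in a 128K-token context window.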

V

Vector Database

A database optimised for storing and searching embeddings (vector representations). Used in RAG systems to find semantically similar documents. Examples: Pinecone, Weaviate, Qdrant.

Related: embedding, RAG
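At its core, a vector database answers "which stored embeddings are closest to this query embedding?". A brute-force in-memory sketch of that idea (products like Pinecone add indexing, persistence, and scale on top; the documents and vectors below are invented):

```python
import math

class TinyVectorStore:
    """Brute-force stand-in for a vector database."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, doc: str, embedding: list[float]) -> None:
        self.items.append((doc, embedding))

    def search(self, query_emb: list[float], k: int = 1) -> list[str]:
        """Return the k documents whose embeddings are most similar."""
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        ranked = sorted(self.items, key=lambda it: cos(query_emb, it[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

store = TinyVectorStore()
store.add("refund policy", [0.9, 0.1])
store.add("holiday schedule", [0.1, 0.9])
print(store.search([0.8, 0.2]))  # ['refund policy']
```

In a RAG pipeline, `search` is the retrieval step: the query is embedded, the nearest documents are fetched, and their text is passed to the LLM as context.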