Glossary

Core agent, MCP, and LLM concepts.

MCP

The Model Context Protocol: an open standard developed by Anthropic that allows AI models to securely connect to external tools and data sources. Adopted by OpenAI and Google in 2025, MCP has become the dominant protocol for giving AI agents access to capabilities beyond their training data.

Tool Calling

The ability of an LLM to invoke external functions or APIs during a conversation. The model outputs a structured call (name + arguments), the host executes it, and returns the result to the model. This is what makes agents actually *do* things rather than just talk.
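
A minimal sketch of that loop, assuming an illustrative `{"name": ..., "arguments": ...}` shape for the model's structured call (the exact format varies by provider) and a hypothetical `get_weather` tool:

```python
# Host-side dispatch for a structured tool call. The registry and the
# call shape are illustrative, not any specific provider's API.

def get_weather(city: str) -> str:
    # Stand-in for a real API call.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def execute_tool_call(call: dict) -> str:
    """Look up the tool the model named and run it with the model's arguments."""
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# The model outputs a structured call; the host executes it and feeds
# the result back into the conversation.
model_output = {"name": "get_weather", "arguments": {"city": "Berlin"}}
print(execute_tool_call(model_output))  # Sunny in Berlin
```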

AI Agent

An AI system that can perceive its environment, make decisions, and take actions autonomously to achieve goals — without requiring a human to approve every step. Unlike chatbots, agents run in loops, use tools, maintain memory, and can operate for hours or days on complex tasks.

MCP Server

A server that implements the MCP protocol to expose a specific capability (tool, resource, or prompt) to AI models. Anyone can build and publish an MCP server — there are now 10,000+ on GitHub. They range from simple (web search) to complex (full DeFi integrations).

Skill

A packaged capability that can be installed into an agent framework. Skills define what an agent can do — from checking the weather to executing trades. In OpenClaw, skills are defined in SKILL.md files. In ElizaOS, they're plugins. In MCP, they're servers.

Plugin

The ElizaOS term for a packaged agent capability — equivalent to a 'skill' in OpenClaw or a 'server' in MCP. ElizaOS has 341+ plugins on GitHub covering social media, DeFi, gaming, and more.

RAG

Retrieval-Augmented Generation: a technique that enhances LLM responses by first retrieving relevant documents from a knowledge base, then including them in the prompt. RAG gives agents access to custom knowledge without retraining the model — critical for agents that need to reason over large document sets.
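
A toy retrieve-then-prompt pipeline. Real systems rank documents by embedding similarity; word overlap stands in for that here so the sketch stays self-contained:

```python
import re

DOCS = [
    "MCP is an open standard for connecting models to tools.",
    "Base is an Ethereum L2 built by Coinbase.",
    "Heartbeats wake an agent up on a schedule.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the query."""
    ranked = sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Retrieved context goes into the prompt ahead of the question.
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is MCP?"))
```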

Agent Memory

The mechanisms by which agents store and retrieve information across sessions. Memory types: short-term (within a session), long-term (persisted to files or DB), episodic (specific events), and semantic (general knowledge). Without memory, agents start fresh every conversation.
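
The two simplest tiers can be sketched as an in-process buffer (short-term) plus a JSON file (long-term); episodic and semantic stores layer on top. The class and file layout are illustrative, not any framework's actual API:

```python
import json
import os
import tempfile

class AgentMemory:
    def __init__(self, path: str):
        self.path = path
        self.short_term: list[str] = []       # lost when the session ends
        self.long_term: dict[str, str] = {}   # survives restarts
        if os.path.exists(path):
            with open(path) as f:
                self.long_term = json.load(f)

    def remember(self, key: str, value: str) -> None:
        """Persist a fact across sessions."""
        self.long_term[key] = value
        with open(self.path, "w") as f:
            json.dump(self.long_term, f)

    def note(self, message: str) -> None:
        """Record a message for the current session only."""
        self.short_term.append(message)

# A "new session" re-reads long-term memory from disk but starts
# with an empty short-term buffer.
path = os.path.join(tempfile.mkdtemp(), "memory.json")
m1 = AgentMemory(path)
m1.note("user asked about ETH")
m1.remember("user_name", "Ada")

m2 = AgentMemory(path)             # fresh session
print(m2.long_term["user_name"])   # Ada
print(m2.short_term)               # []
```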

Token

The basic unit of text processed by LLMs. Roughly 1 token ≈ 0.75 words in English. LLM pricing and context limits are measured in tokens. For agents running complex tasks with large contexts, token optimization is critical for cost control.
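
Back-of-the-envelope math from that 0.75 rule of thumb. Real tokenizers (BPE) vary by model and language, so this is only good for rough cost estimates; the per-million-token price is a parameter, not any provider's rate:

```python
def estimate_tokens(text: str) -> int:
    # ~0.75 English words per token => tokens ≈ words / 0.75
    return round(len(text.split()) / 0.75)

def estimate_input_cost(text: str, usd_per_million_tokens: float) -> float:
    """Rough input-side cost for one call."""
    return estimate_tokens(text) / 1_000_000 * usd_per_million_tokens

prompt = "summarize the last ten emails and flag anything urgent"
print(estimate_tokens(prompt))  # 9 words -> 12 tokens
```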

Orchestrator

An agent (or agent layer) that coordinates other agents — assigning tasks, managing results, and making high-level decisions. In a multi-agent system, the orchestrator keeps the main context lean by delegating work to specialized sub-agents.

System Prompt

The initial instructions given to an LLM that define its identity, capabilities, constraints, and behavior. In agents, the system prompt is assembled from workspace files (SOUL.md, AGENTS.md, skills, etc.) and defines who the agent 'is' and what it can do.

SOUL.md

A workspace file in OpenClaw that defines the agent's identity, personality, values, and communication style. It's the equivalent of a character brief — telling the agent *who it is* rather than what it can do. Popularized by @aaronjmars, it's become a community standard.

AGENTS.md

The core workspace file in OpenClaw that defines operational rules, memory strategy, coding standards, heartbeat behavior, and tool usage guidelines. Every OpenClaw agent reads AGENTS.md on each session — it's the agent's rulebook.

Autonomous Agent

An agent that operates independently over extended periods without human approval for each action. Autonomy exists on a spectrum: from 'ask before every action' to 'act freely within defined boundaries.' True autonomy requires memory, tool-use, self-correction, and a reliable execution environment.

Heartbeat

A periodic wake-up call in OpenClaw that triggers the agent to check in, review pending tasks, and act proactively. Heartbeats are what make agents feel 'alive' — they check emails, monitor conditions, and surface important information without being asked.

Cron Job

A scheduled task that runs at defined intervals. In agent systems, cron jobs enable true autonomy — agents that wake up at specific times to execute tasks. In OpenClaw, the Gateway scheduler manages cron jobs with isolated sessions for each run.
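
Cron schedules are written as five fields (minute, hour, day-of-month, month, day-of-week). A tiny matcher for just the first two fields, handling only literal numbers and `*` — real cron also supports ranges, lists, and steps:

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """Check the minute and hour fields of a cron expression against a time."""
    minute, hour = expr.split()[:2]

    def match(field: str, value: int) -> bool:
        return field == "*" or int(field) == value  # "*" means "any"

    return match(minute, when.minute) and match(hour, when.hour)

# "0 9 * * *" -> every day at 09:00
print(cron_matches("0 9 * * *", datetime(2025, 6, 1, 9, 0)))   # True
print(cron_matches("0 9 * * *", datetime(2025, 6, 1, 10, 0)))  # False
```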

ReAct

A prompting technique where the LLM alternates between Reasoning ('I need to find the current ETH price') and Acting (calling a tool), creating a loop of thought + action + observation. Most modern agent frameworks implement ReAct under the hood.
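
The thought → action → observation loop can be sketched as follows. The `scripted_llm` stub stands in for the model, and the step dict format is illustrative — real frameworks parse Thought/Action lines out of the model's text:

```python
def run_react(task, llm_step, tools, max_steps=5):
    trace = [f"Task: {task}"]
    for _ in range(max_steps):
        step = llm_step(trace)                         # model reasons over the trace
        trace.append(f"Thought: {step['thought']}")
        if step.get("final"):                          # model decides it is done
            return step["final"]
        result = tools[step["action"]](step["input"])  # act
        trace.append(f"Observation: {result}")         # observe, then loop
    return "max steps reached"

# Scripted stub: first look up the price, then answer from the observation.
def scripted_llm(trace):
    observations = [line for line in trace if line.startswith("Observation")]
    if not observations:
        return {"thought": "I need the ETH price", "action": "price", "input": "ETH"}
    return {"thought": "I have the price", "final": observations[-1].split(": ")[1]}

answer = run_react("What is ETH trading at?", scripted_llm,
                   {"price": lambda sym: f"{sym} = $3,000 (stub data)"})
print(answer)  # ETH = $3,000 (stub data)
```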

Multi-Agent System

A system where multiple specialized agents collaborate — each with different skills, memory, and responsibilities — coordinated by an orchestrator. Enables parallelism, specialization, and tasks that exceed a single agent's context window.

LLM

A large language model: a neural network trained on massive text datasets that can understand and generate human language. LLMs are the 'brain' of AI agents — they reason, plan, and decide. Popular LLMs for agents include Claude (Anthropic), GPT-4o (OpenAI), Gemini (Google), and local models like Llama and Qwen.

Context Window

The maximum amount of text (measured in tokens) an LLM can process in a single call — including both input (system prompt + history + tools) and output. Larger context windows allow agents to reason over more information at once, but cost more and can reduce quality at the edges.
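
A pre-flight budget check captures the key constraint: input plus reserved output must fit inside the window. The window size here is illustrative, and token counts reuse the rough 0.75-words-per-token rule:

```python
CONTEXT_WINDOW = 200_000  # illustrative size, not a specific model's limit

def estimate_tokens(text: str) -> int:
    return round(len(text.split()) / 0.75)

def fits(system_prompt: str, history: list[str], max_output_tokens: int) -> bool:
    """True if the prompt, history, and reserved output fit in the window."""
    used = estimate_tokens(system_prompt) + sum(estimate_tokens(m) for m in history)
    return used + max_output_tokens <= CONTEXT_WINDOW

print(fits("You are a helpful agent.", ["hi"] * 100, max_output_tokens=4096))  # True
```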

Inference

The process of running an LLM to generate a response — as opposed to training. When your agent 'thinks,' it's performing inference. Inference speed, cost, and reliability are key metrics for agent builders choosing an LLM provider.

Embedding

A numerical representation of text as a high-dimensional vector, enabling semantic similarity search. Embeddings power RAG systems — allowing agents to find relevant documents by meaning, not just keywords.
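
"By meaning, not keywords" comes down to comparing vector directions, usually with cosine similarity. A sketch over toy 3-dimensional vectors — real embeddings have hundreds or thousands of dimensions and come from an embedding model:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Imagine these came from an embedding model: the first two texts are about
# the same topic, the third is unrelated.
doc_eth  = [0.9, 0.1, 0.0]   # "ETH price feed"
doc_defi = [0.8, 0.2, 0.1]   # "DeFi market data"
doc_pets = [0.0, 0.1, 0.9]   # "cat care tips"

# Related documents score far higher than unrelated ones.
print(cosine(doc_eth, doc_defi) > cosine(doc_eth, doc_pets))  # True
```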

Function Calling

OpenAI's original name for tool-calling — the ability for GPT models to output structured JSON matching a function signature. Now standardized across most major LLMs. MCP extends this concept into a full protocol for server-based tools.
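
The declaration the model matches against is a name, a description, and a JSON Schema for the arguments. Shown here as a Python dict in the widely used OpenAI-style shape; field names vary slightly across providers, so check your provider's docs:

```python
# Hypothetical function declaration in the OpenAI-style format.
get_weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {                      # JSON Schema for the arguments
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Berlin"},
        },
        "required": ["city"],
    },
}

# The model then emits JSON matching this signature, e.g.:
# {"name": "get_weather", "arguments": {"city": "Berlin"}}
print(get_weather_schema["name"])
```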

ClawHub

The skill marketplace for OpenClaw agents — like npm for OpenClaw. Developers publish skills (SKILL.md bundles) to ClawHub, and users install them with `openclaw skill install [name]`. agtr.app is building a web-first discovery layer on top of ClawHub.

ElizaOS

A TypeScript framework for building social and DeFi AI agents, originally by Shaw. Known for its plugin ecosystem (341+ plugins) and strong crypto integrations. Powers many Twitter-native AI agents including AIXBT.

OpenClaw

A personal AI agent platform that runs locally or on a VPS, with persistent memory, a skill marketplace (ClawHub), MCP support, and multi-channel messaging (Telegram, WhatsApp, Discord). Often described as 'the npm for AI agents' — but with a runtime, not just a registry.

AGTR

The native token of the agtr.app ecosystem — used for micropayments when installing skills, tipping builders, accessing premium content, and powering The Fund. AGTR creates a real economy around agent tools and knowledge.

Base

An Ethereum L2 blockchain built by Coinbase, known for low fees and strong developer tooling. agtr.app is deployed on Base — enabling micropayments for skill installs at near-zero cost. Base has become the dominant chain for crypto-native AI agent applications.

Context Compaction

The process of summarizing or pruning a conversation history to stay within the LLM's context window while preserving essential information. Critical for long-running agents — without compaction, sessions hit the context limit and lose coherence.
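
A naive version of the idea: keep the most recent messages verbatim and fold everything older into one summary line. Real agents have the LLM write the summary; the `summarize` stub here is a placeholder:

```python
def summarize(messages: list[str]) -> str:
    # Stub: a real implementation would ask the LLM for a summary.
    return f"[summary of {len(messages)} earlier messages]"

def compact(history: list[str], keep_last: int) -> list[str]:
    """Keep the last `keep_last` messages verbatim; summarize the rest."""
    if len(history) <= keep_last:
        return history
    older, recent = history[:-keep_last], history[-keep_last:]
    return [summarize(older)] + recent

history = [f"message {i}" for i in range(10)]
print(compact(history, keep_last=3))
# ['[summary of 7 earlier messages]', 'message 7', 'message 8', 'message 9']
```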