Why AI Plugins Matter

AI plugins allow large language models (LLMs) to call external APIs as tools — turning a chat interface into an agent that can search databases, fetch live data, submit forms, or interact with your service. As the ecosystem around LLM tool-use matures, understanding how to build an AI-ready API is becoming an essential developer skill.

The Foundation: OpenAPI 3.x

Most AI plugin systems — including those used by OpenAI, Anthropic's tool-use, and open frameworks like LangChain — rely on OpenAPI 3.x specifications to describe what an API can do. The LLM reads this spec to understand available operations, parameters, and expected responses.

A well-written OpenAPI spec is your plugin's contract with the AI. Descriptions matter enormously — the model uses them to decide when and how to call your API.

Anatomy of an AI-Ready OpenAPI Spec

1. Info Block

The info section should contain a clear, functional description of what your API does — written in plain English that an LLM can reason about, not just marketing copy.
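A minimal sketch of such an info block, expressed as a Python dict (the API name and wording are illustrative, not from any real service):

```python
# Hypothetical info block for an order-tracking API. The description is
# written as plain, functional English an LLM can reason about.
info_block = {
    "title": "Order Tracker API",
    "version": "1.0.0",
    "description": (
        "Look up the status, contents, and shipping progress of customer "
        "orders by order ID or customer email. Read-only; cannot modify orders."
    ),
}
```

Note the description states both capabilities and limits ("Read-only") — limits help the model avoid calling the API for tasks it cannot perform.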

2. Operation Descriptions

Every endpoint needs a summary and description. These are the primary signals the model uses to route tool calls. Be specific about when to use an endpoint, not just what it does.
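For example, an operation description can tell the model when to prefer this endpoint over a sibling (the path, operationId, and sibling name here are hypothetical):

```python
# Hypothetical GET /orders/{order_id} operation. The description covers
# *when* to call it, not just what it returns.
get_order_operation = {
    "operationId": "getOrderById",
    "summary": "Fetch one order by its ID",
    "description": (
        "Use this when the user supplies an explicit order ID. "
        "For lookups by email or date range, use searchOrders instead."
    ),
}
```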

3. Parameter Descriptions

Document every parameter with a description that explains valid values, formats, and edge cases. Use enum for constrained values whenever possible — this reduces hallucinated inputs.
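A sketch of a constrained query parameter, assuming a status filter like the one an order API might expose (the values are illustrative):

```python
# Hypothetical "status" query parameter. The enum constrains the model
# to known values instead of letting it invent its own.
order_status_param = {
    "name": "status",
    "in": "query",
    "required": False,
    "description": "Filter orders by fulfillment state. Omit to return all states.",
    "schema": {
        "type": "string",
        "enum": ["pending", "shipped", "delivered", "cancelled"],
    },
}
```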

4. Response Schemas

Define your response schemas precisely using JSON Schema within the OpenAPI spec. This helps the model parse and reason about the output correctly.
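Continuing the hypothetical order API, a response schema might look like this — per-property descriptions and an explicit required list give the model a precise picture of the output:

```python
# Hypothetical response schema for a single order, as JSON Schema
# embedded in an OpenAPI spec. Field names are illustrative.
order_response_schema = {
    "type": "object",
    "required": ["order_id", "status"],
    "properties": {
        "order_id": {"type": "string", "description": "Unique order identifier"},
        "status": {
            "type": "string",
            "enum": ["pending", "shipped", "delivered", "cancelled"],
            "description": "Current fulfillment state",
        },
        "total_cents": {"type": "integer", "description": "Order total in cents"},
    },
}
```

Stating money as integer cents (rather than a float) is one example of a schema choice that heads off ambiguity for both the model and human consumers.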

The Plugin Manifest (ai-plugin.json)

Historically, ChatGPT plugins used a manifest file at /.well-known/ai-plugin.json. While the ChatGPT plugin store has evolved, the pattern remains influential. A typical manifest includes:

  • name_for_human and name_for_model — display name vs. the identifier the LLM uses
  • description_for_human and description_for_model — again, the model-facing description should be functional and precise
  • api — pointer to your OpenAPI spec URL
  • auth — authentication type (none, user_http, oauth)
  • logo_url, contact_email, legal_info_url
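Putting those fields together, a manifest in the historical ai-plugin.json shape might look like the following (the service name, URLs, and email are placeholders):

```python
import json

# Hypothetical ai-plugin.json manifest. Field names follow the historical
# ChatGPT plugin manifest format; all values are illustrative.
ai_plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Order Tracker",
    "name_for_model": "order_tracker",
    "description_for_human": "Track your orders and shipments.",
    "description_for_model": (
        "Look up order status and shipping progress by order ID or customer "
        "email. Read-only; cannot place or cancel orders."
    ),
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

manifest_json = json.dumps(ai_plugin_manifest, indent=2)
```

Notice how description_for_model is functional and bounded while description_for_human is short marketing copy — the two audiences want different things.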

Authentication Patterns for AI Plugins

  Auth Type      How It Works                       Best For
  None           Public, unauthenticated API        Read-only public data
  Service HTTP   Static bearer token in header      Dev/testing, trusted contexts
  User HTTP      User provides their own API key    Personal API integrations
  OAuth 2.0      Full OAuth flow, per-user tokens   Production, multi-user services
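On the OpenAPI side, the "Service HTTP" and "User HTTP" rows both map to a standard HTTP bearer security scheme; a minimal sketch (scheme name is arbitrary):

```python
# Hypothetical securitySchemes entry declaring bearer-token auth,
# plus a spec-level security requirement applying it to all operations.
security_schemes = {
    "BearerAuth": {
        "type": "http",
        "scheme": "bearer",
    }
}

# Goes under the top-level "security" key of the spec.
spec_security = [{"BearerAuth": []}]
```

The difference between the two rows is who holds the token (your service platform vs. the end user), not how the spec declares it.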

Best Practices for LLM-Friendly APIs

  • Keep operations atomic: One endpoint should do one clear thing. Multipurpose endpoints confuse tool-selection logic.
  • Return structured JSON: Avoid free-form HTML or Markdown in responses — the model needs structured data it can reason about.
  • Provide sensible defaults: Fewer required parameters means fewer hallucinated values.
  • Version your spec: Use semantic versioning and host your spec at a stable URL.
  • Rate limiting and safety: Even if the user authorizes your plugin, implement rate limits and validate inputs server-side — the LLM is not a trusted actor.
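The last two points can be sketched server-side in a few lines — here a hypothetical request gate that rejects out-of-enum values and enforces a per-caller rate limit (names, limits, and status codes are illustrative):

```python
import time
from collections import defaultdict

VALID_STATUSES = {"pending", "shipped", "delivered", "cancelled"}
_request_log = defaultdict(list)  # caller_id -> timestamps of recent requests


def check_request(caller_id, status, limit=30, window_s=60.0, now=None):
    """Validate the enum input and enforce a sliding-window rate limit.

    Returns an (http_status, message) pair. The LLM is not a trusted
    actor, so validation happens here regardless of what the spec says.
    """
    if status is not None and status not in VALID_STATUSES:
        return (400, f"invalid status: {status!r}")
    now = time.monotonic() if now is None else now
    recent = [t for t in _request_log[caller_id] if now - t < window_s]
    if len(recent) >= limit:
        return (429, "rate limit exceeded")
    recent.append(now)
    _request_log[caller_id] = recent
    return (200, "ok")
```

In production you would back the log with Redis or similar rather than process memory, but the shape of the check is the same.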

Testing Your Plugin

Use tools like Swagger UI or Redoc to visualize your OpenAPI spec before connecting it to any LLM. Then test with a local LLM framework (LangChain, LlamaIndex, or Semantic Kernel) to verify the model interprets and calls your endpoints correctly. Always test adversarial inputs — LLMs can generate unexpected parameter combinations.
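Before reaching for a framework at all, a simple pre-flight script can catch the most common gap — operations missing the descriptions the model depends on. A minimal sketch, assuming the spec has been loaded into a Python dict:

```python
# Walk an OpenAPI spec dict and flag operations that lack the summary
# or description an LLM needs for tool routing. Illustrative, not a
# substitute for a full spec validator.
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}


def find_undocumented_operations(spec):
    problems = []
    for path, path_item in spec.get("paths", {}).items():
        for method, op in path_item.items():
            if method not in HTTP_METHODS:
                continue  # skip non-operation keys like "parameters"
            if not op.get("summary") or not op.get("description"):
                problems.append(f"{method.upper()} {path}")
    return problems
```

Run it in CI so a new endpoint can never ship without its model-facing documentation.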

Building an AI plugin is ultimately about writing good documentation that a machine can act on. The discipline will make your API better for human developers too.