AI Agent Integration

openapi-mcp is designed with unique AI optimization features that make any OpenAPI-documented API truly accessible to AI coding agents, LLMs, and automation tools. This page explains these optimizations and how to integrate openapi-mcp with various AI agents and platforms.

Unique AI Optimizations

openapi-mcp implements several unique features specifically designed to enhance AI agent interactions:

  • Structured outputs - All responses include consistent format with type information, making parsing reliable across different APIs
  • Rich schema information - Detailed parameter constraints and examples that enable agents to construct valid calls without prior knowledge of the API
  • Actionable errors - Error responses include details and suggestions for resolution, allowing agents to self-correct without human intervention
  • Self-describing API - The describe tool provides comprehensive, machine-readable documentation that agents can analyze
  • Minimal verbosity - Default output is concise and focused on data, optimized for machine consumption rather than human readability
  • Standardized confirmation - Consistent pattern for handling dangerous operations that integrates seamlessly with agent decision-making workflows
  • Smart parameter handling - Automatic conversion between OpenAPI parameter types and MCP tool parameters, including proper handling of arrays, objects, and complex nested structures
  • Contextual examples - Every tool includes context-aware examples based on the OpenAPI specification, giving agents concrete usage patterns
  • Intelligent default values - Sensible defaults are provided whenever possible to simplify API usage
  • Type-aware validation - Input validation that provides clear, actionable feedback on exactly what needs to be corrected

Integration Patterns

AI Code Editor Integration

Many AI code editors, such as Roo Code, support MCP tool configuration. You can add openapi-mcp to your editor's configuration:

{
    "fastly": {
        "command": "/opt/bin/openapi-mcp",
        "args": [
            "-api-key",
            "YOUR_API_KEY",
            "/opt/etc/openapi/fastly-openapi-mcp.yaml"
        ]
    }
}

This configuration provides the AI assistant with direct access to the API. The assistant can discover and use API operations without additional setup.

Stdio Integration (Recommended)

The simplest and most reliable way to integrate openapi-mcp with AI agents is using stdio mode:

bin/openapi-mcp examples/fastly-openapi-mcp.yaml

In this mode, the AI agent can launch openapi-mcp as a subprocess and communicate with it via stdin/stdout using the MCP (Model Context Protocol) format.
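
As a sketch, an agent host written in Node.js could wire this up as follows; this assumes newline-delimited JSON-RPC framing on stdin/stdout and omits the MCP initialize handshake that a full client would perform before issuing other requests:

// Minimal stdio integration sketch (illustrative, not a complete MCP client)
const { spawn } = require('child_process');
const readline = require('readline');

const mcp = spawn('bin/openapi-mcp', ['examples/fastly-openapi-mcp.yaml']);
const rl = readline.createInterface({ input: mcp.stdout });

// Each line on stdout is one JSON-RPC message from the server
rl.on('line', (line) => {
  const message = JSON.parse(line);
  console.log('MCP response:', message);
});

// Send a JSON-RPC request as a single line on stdin
mcp.stdin.write(JSON.stringify({
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list'
}) + '\n');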

HTTP Integration

For web-based AI agents or when stdio isn't an option, openapi-mcp can run as an HTTP server:

bin/openapi-mcp --http=:8080 examples/fastly-openapi-mcp.yaml

The AI agent can then make HTTP requests to the MCP server at http://localhost:8080.
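
For example, a JSON-RPC request can be posted with fetch. This is a sketch; the endpoint path used here is an assumption, so check the server's startup output for the exact URL:

// Sketch: calling the HTTP MCP server (endpoint path is assumed)
const response = await fetch('http://localhost:8080/mcp', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'tools/list'
  })
});

console.log(await response.json());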

Integration with Popular AI Platforms

OpenAI Function Calling

openapi-mcp's tool schemas can be converted to OpenAI function definitions for use with function calling capabilities:

// Example of converting MCP tools to OpenAI function definitions
// (fetchMcpTools is an application-specific helper that retrieves the
// tool definitions from the MCP server, for example via the describe tool)
const tools = await fetchMcpTools();
const openAiFunctions = tools.map(tool => ({
  type: "function",
  function: {
    name: tool.name,
    description: tool.description,
    parameters: tool.inputSchema
  }
}));

// Use with OpenAI API
const response = await openai.chat.completions.create({
  model: "gpt-4-0125-preview",
  messages: [{ role: "user", content: "List all services" }],
  tools: openAiFunctions,
  tool_choice: "auto"
});
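
The agent loop then routes any tool calls the model returns back to openapi-mcp and feeds the results into the next turn. A sketch of that dispatch step, assuming a callMcpTool helper like the one defined later on this page:

// Dispatch the model's tool calls to openapi-mcp and feed results back
const assistantMessage = response.choices[0].message;
const toolMessages = [];

for (const call of assistantMessage.tool_calls || []) {
  const args = JSON.parse(call.function.arguments);
  const result = await callMcpTool(call.function.name, args);
  toolMessages.push({
    role: "tool",
    tool_call_id: call.id,
    content: JSON.stringify(result)
  });
}

// Continue the conversation with the tool results
const followUp = await openai.chat.completions.create({
  model: "gpt-4-0125-preview",
  messages: [
    { role: "user", content: "List all services" },
    assistantMessage,
    ...toolMessages
  ],
  tools: openAiFunctions
});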

Anthropic Claude

openapi-mcp can be used with Anthropic Claude's tool use capability:

// Example of using MCP tools with Claude (fetchMcpTools as in the OpenAI example above)
const tools = await fetchMcpTools();
const claudeTools = tools.map(tool => ({
  name: tool.name,
  description: tool.description,
  input_schema: tool.inputSchema
}));

// Use with Claude API
const response = await anthropic.messages.create({
  model: "claude-3-opus-20240229",
  max_tokens: 1024,
  messages: [{ role: "user", content: "List all services" }],
  tools: claudeTools
});
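
Claude returns tool invocations as tool_use content blocks; the agent executes them through openapi-mcp and replies with tool_result blocks. A sketch, again assuming a callMcpTool helper as defined later on this page:

// Execute Claude's tool_use blocks via openapi-mcp and return tool_result blocks
const toolResults = [];

for (const block of response.content) {
  if (block.type === "tool_use") {
    const result = await callMcpTool(block.name, block.input);
    toolResults.push({
      type: "tool_result",
      tool_use_id: block.id,
      content: JSON.stringify(result)
    });
  }
}

// Continue the conversation with the tool results
const followUp = await anthropic.messages.create({
  model: "claude-3-opus-20240229",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "List all services" },
    { role: "assistant", content: response.content },
    { role: "user", content: toolResults }
  ],
  tools: claudeTools
});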

GitHub Copilot

For GitHub Copilot, you can use openapi-mcp as a subprocess that Copilot can interact with via commands:

// Example Copilot plugin integration
const { spawn } = require('child_process');

async function callMcpTool(toolName, args) {
  const proc = spawn('bin/mcp-client', ['bin/openapi-mcp', 'openapi.yaml']);
  
  // Send the command to the mcp-client, then close stdin so the client
  // processes it and exits
  proc.stdin.write(`call ${toolName} ${JSON.stringify(args)}\n`);
  proc.stdin.end();
  
  // Collect the response from stdout
  const response = await new Promise((resolve) => {
    let output = '';
    proc.stdout.on('data', (data) => {
      output += data.toString();
    });
    
    proc.stdout.on('end', () => {
      resolve(output);
    });
  });
  
  return JSON.parse(response);
}

Tool Discovery for Agents

AI agents need to discover the available tools and their capabilities. openapi-mcp provides several ways for agents to explore the API:

List Available Tools

To get a list of all available tools:

// Request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}

// Response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": ["listServices", "createService", "getService", "updateService", "deleteService", ...]
}

Get Tool Schema

To get the schema for a specific tool:

// Request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/schema",
  "params": { "name": "createService" }
}

// Response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "name": "createService",
    "description": "Creates a new service configuration.",
    "inputSchema": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "The name of the service."
        },
        "comment": {
          "type": "string",
          "description": "A freeform descriptive note."
        },
        "type": {
          "type": "string",
          "enum": ["vcl", "wasm"],
          "description": "The type of this service."
        }
      },
      "required": ["name", "type"]
    }
  }
}

Get Complete API Documentation

To get comprehensive documentation for all tools:

// Request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": { "name": "describe", "arguments": {} }
}

// Response (truncated for brevity)
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "OutputFormat": "structured",
    "OutputType": "json",
    "type": "api_docs",
    "tools": [
      {
        "name": "listServices",
        "description": "Retrieves a list of all services configured in the user's account.",
        "inputSchema": { /* schema details */ },
        "outputSchema": { /* schema details */ },
        "examples": [
          {
            "input": {},
            "output": { /* example output */ }
          }
        ]
      },
      // Other tools...
    ]
  }
}

Handling Dangerous Operations

For dangerous operations (PUT, POST, DELETE), openapi-mcp requires confirmation. AI agents should be designed to handle this confirmation workflow:

Confirmation Flow

// Initial request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "deleteService",
    "arguments": {
      "service_id": "SU1Z0isxPaozGVKXdv0eY"
    }
  }
}

// Confirmation request response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "OutputFormat": "structured",
    "OutputType": "json",
    "type": "confirmation_request",
    "confirmation_required": true,
    "message": "This action is irreversible. Proceed?",
    "action": "delete_resource"
  }
}

// Retry of the request with __confirmed set
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "deleteService",
    "arguments": {
      "service_id": "SU1Z0isxPaozGVKXdv0eY",
      "__confirmed": true
    }
  }
}

AI agents should detect the confirmation_request response type and then do one of the following:

  • Ask the user for confirmation before proceeding
  • Automatically confirm if they have permission to do so
  • Abort the operation if confirmation isn't possible

Best Practices for AI Integration

1. Default to Agent-Friendly Output

Do not use the --extended flag for machine/agent/CI usage. The default output is minimal and agent-friendly.

2. Use the describe Tool for Discovery

Always use the describe tool to get up-to-date schemas and examples, rather than hardcoding knowledge about the API.

3. Handle Structured Errors

Design your agent to parse and respond to structured error responses:

{
  "OutputFormat": "structured",
  "OutputType": "json",
  "type": "error",
  "error": {
    "code": "validation_error",
    "message": "Invalid parameter",
    "details": {
      "field": "name",
      "reason": "required field missing"
    },
    "suggestions": [
      "Provide a name parameter"
    ]
  }
}
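
A small sketch of how an agent might act on these fields, assuming the response shape shown above:

// Sketch: decide what to do with a structured error response
function handleToolResult(result) {
  if (result.type === "error") {
    const { code, message, details, suggestions } = result.error;
    console.warn(`Tool call failed (${code}): ${message}`, details);
    
    // Validation errors usually carry enough detail for the agent to self-correct
    if (code === "validation_error" && suggestions && suggestions.length > 0) {
      return { retry: true, hints: suggestions };
    }
    return { retry: false };
  }
  return { retry: false, data: result };
}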

4. Implement Proper Confirmation Handling

Build a robust confirmation workflow that respects the safety requirements of dangerous operations.

5. Use Retry Logic for Transient Errors

Implement retry logic with backoff for network-related errors or rate limiting.
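
A simple exponential backoff wrapper might look like this (a sketch; the retryable error codes are assumptions and should be matched to the codes your API actually returns):

// Sketch: retry a tool call with exponential backoff on transient failures
async function callWithRetry(name, args, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await callMcpTool(name, args);
    
    // Treat rate limiting and network failures as retryable (assumed error codes)
    const retryable = result.type === "error" &&
      ["rate_limited", "network_error", "timeout"].includes(result.error.code);
    
    if (!retryable || attempt === maxAttempts) {
      return result;
    }
    
    // Exponential backoff: 1s, 2s, 4s, ...
    const delayMs = 1000 * 2 ** (attempt - 1);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}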

Example: Complete AI Agent Integration

// Example of a complete AI agent integration in JavaScript
// (sendMcpRequest and askUserForConfirmation are application-specific helpers
// for the chosen transport and confirmation UI; they are not shown here)
const { spawn } = require('child_process');

async function setupMcpTools() {
  // Start openapi-mcp as a subprocess
  const mcpProcess = spawn('bin/openapi-mcp', ['examples/fastly-openapi-mcp.yaml']);
  
  // Use the describe tool to get all tool definitions
  const toolDefs = await callMcpTool('describe', {});
  
  return {
    process: mcpProcess,
    tools: toolDefs.tools
  };
}

async function callMcpTool(name, args, confirmed = false) {
  // Construct the MCP request
  const request = {
    jsonrpc: "2.0",
    id: Date.now(),
    method: "tools/call",
    params: {
      name: name,
      arguments: confirmed ? { ...args, __confirmed: true } : args
    }
  };
  
  // Send the request and get the response
  const response = await sendMcpRequest(request);
  
  // Handle confirmation requests
  if (response.result && response.result.type === "confirmation_request") {
    const userConfirmed = await askUserForConfirmation(response.result.message);
    
    if (userConfirmed) {
      return callMcpTool(name, args, true);
    } else {
      return { cancelled: true, message: "Operation cancelled by user" };
    }
  }
  
  return response.result;
}

// Example of an AI agent using the tools
async function aiAgentWorkflow() {
  const { tools } = await setupMcpTools();
  
  // ISO timestamp for 30 days ago, used below to find stale services
  const olderThan30Days = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000).toISOString();
  
  // List all services
  const services = await callMcpTool('listServices', {});
  
  // Process the services and make decisions
  for (const service of services.data) {
    console.log(`Processing service: ${service.name}`);
    
    // Get service details
    const details = await callMcpTool('getService', { service_id: service.id });
    
    // Make decisions based on the data
    if (details.data.updated_at < olderThan30Days) {
      console.log(`Service ${service.name} needs updating`);
      
      // This will trigger a confirmation workflow
      const updateResult = await callMcpTool('updateService', {
        service_id: service.id,
        name: service.name,
        comment: "Updated by AI agent"
      });
      
      console.log(`Update result:`, updateResult);
    }
  }
}

Next Steps

Now that you understand how to integrate with AI agents, you can: