
MCP Integration

AgentMark supports calling Model Context Protocol (MCP) tools directly from your prompts.

What is MCP?

The Model Context Protocol (MCP) is an open standard that allows AI applications to connect to external tools and data sources. Instead of hardcoding tool implementations, MCP lets you:
  • Connect to tool servers — Use pre-built MCP servers for filesystems, databases, APIs, and more
  • Standardize tool interfaces — All MCP tools follow the same protocol, making them interchangeable
  • Share tools across projects — One MCP server can serve multiple AI applications
Think of MCP like USB for AI tools. Just as USB provides a standard way to connect peripherals to computers, MCP provides a standard way to connect tools to AI applications.

How MCP works with AgentMark

  1. You configure MCP servers in your AgentMark client (either a local process or a remote URL)
  2. You reference MCP tools in your prompt frontmatter using mcp://{server}/{tool} syntax
  3. At runtime, AgentMark connects to the server and makes those tools available to the AI model
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Your Prompt    │────▶│  AgentMark      │────▶│  MCP Server     │
│  mcp://fs/read  │     │  Client         │     │  (filesystem)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘

What you’ll learn

  • Configure MCP servers (local process or remote URL)
  • Reference MCP tools in prompts using mcp:// URIs
  • Combine MCP tools with inline tool definitions
  • Use environment variable interpolation for secrets

MCP Server Types

AgentMark supports two types of MCP servers:
Type       Use Case                               Configuration
stdio      Local tools that run as a subprocess   command, args, cwd, env
URL/SSE    Remote tools accessed over HTTP        url, headers

stdio servers (local process)

The server runs as a child process on your machine. AgentMark communicates with it via stdin/stdout.
{
  filesystem: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "./data"],
    cwd: "/path/to/project",  // optional working directory
    env: { NODE_ENV: "production" },  // optional environment
  }
}
When to use: Local development, accessing local files, running custom tools.

URL servers (remote HTTP)

The server runs remotely and accepts requests over HTTP with Server-Sent Events (SSE).
{
  docs: {
    url: "https://docs.example.com/mcp",
    headers: { Authorization: "Bearer your-token" },  // optional auth
  }
}
When to use: Shared team tools, cloud-hosted services, production deployments.

1) Configure MCP servers

Define servers when creating your AgentMark client. You can mix both server types. Use env("VAR_NAME") to interpolate environment variables anywhere in the config — this keeps secrets out of your code.
import { createAgentMarkClient, VercelAIModelRegistry, VercelAIToolRegistry, McpServerRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";

const loader = new ApiLoader({ apiKey: process.env.AGENTMARK_API_KEY! });
const modelRegistry = new VercelAIModelRegistry();
modelRegistry.registerModels(["gpt-4o-mini"], (name) => ({ name } as any));

const toolRegistry = new VercelAIToolRegistry({
  summarize: ({ text, maxSentences = 2 }: { text: string; maxSentences?: number }) => {
    const sentences = String(text).split(/(?<=[.!?])\s+/).slice(0, maxSentences);
    return { summary: sentences.join(" ") };
  },
});

const mcpServers = new McpServerRegistry({
  docs: {
    url: "env(AGENTMARK_MCP_SSE_URL)",
    headers: { Authorization: "Bearer env(MCP_TOKEN)" },
  },
  local: {
    command: "npx",
    args: ["-y", "@mastra/mcp-docs-server"],
    env: { NODE_ENV: "production" },
  },
});

const agentMark = createAgentMarkClient({
  loader,
  modelRegistry,
  toolRegistry,
  mcpServers,
});
Type guard behavior (from core):
  • URL servers accept only url (and adapter-allowed headers).
  • stdio servers accept only command, args, cwd, env.
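The two config shapes above behave like a discriminated union. A minimal sketch of how such a guard could work (the types and guard function here are illustrative, not the actual core implementation):

```typescript
// Illustrative shapes only; the real core types may differ.
type UrlServerConfig = { url: string; headers?: Record<string, string> };
type StdioServerConfig = {
  command: string;
  args?: string[];
  cwd?: string;
  env?: Record<string, string>;
};
type McpServerConfig = UrlServerConfig | StdioServerConfig;

// Narrow a config to one of the two variants by its distinguishing key.
function isUrlServer(c: McpServerConfig): c is UrlServerConfig {
  return "url" in c;
}
```

A config that mixes keys from both variants (e.g. both `url` and `command`) would be rejected by the core type guards, which is why keeping each server entry minimal matters.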

Environment interpolation

Use env("VAR") anywhere in the config; values resolve from process.env.VAR:
mcpServers: {
  docs: { url: "env(DOCS_MCP_URL)", headers: { Authorization: "Bearer env(MCP_TOKEN)" } },
  local: { command: "env(NODE_BIN)", args: ["-y", "@mastra/mcp-docs-server"] },
}
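Conceptually, the interpolation is a string substitution pass over the config. A hypothetical resolver (not AgentMark's actual implementation) that handles both the quoted and unquoted forms shown above might look like:

```typescript
// Hypothetical helper: replaces every env(VAR) or env("VAR") occurrence
// in a string with the value of process.env.VAR (empty string if unset).
function interpolateEnv(value: string): string {
  return value.replace(/env\("?([A-Za-z0-9_]+)"?\)/g, (_, name) =>
    process.env[name] ?? ""
  );
}
```

For example, with `MCP_TOKEN=abc123` in the environment, `interpolateEnv("Bearer env(MCP_TOKEN)")` yields `"Bearer abc123"`.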

2) Reference MCP tools in prompts

Declare MCP tools in your prompt frontmatter. You can mix MCP tools with inline tool definitions.
mcp-example.prompt.mdx
---
name: mcp-example
text_config:
  model_name: gpt-4o-mini
  tools:
    search_docs: mcp://docs/web-search
    summarize:
      description: Summarize a block of text
      parameters:
        type: object
        properties:
          text: { type: string }
          maxSentences: { type: number }
        required: [text]
---

<System>
Use the search_docs tool to look up relevant documentation when needed.
Use the summarize tool to condense content into a short summary.
</System>

<User>
Find the page that explains MCP integration and summarize it in 2 sentences.
</User>
  • search_docs resolves to the MCP server docs, tool web-search.
  • summarize is an inline tool available via the adapter tool registry.
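The mcp://{server}/{tool} reference is a tiny URI scheme; a hypothetical parser (not AgentMark's internal one) makes the resolution above concrete:

```typescript
// Hypothetical parser for mcp://{server}/{tool} references.
// Returns null for non-MCP values (e.g. inline tool definitions).
function parseMcpRef(ref: string): { server: string; tool: string } | null {
  const match = /^mcp:\/\/([^/]+)\/(.+)$/.exec(ref);
  return match ? { server: match[1], tool: match[2] } : null;
}
```

Here `parseMcpRef("mcp://docs/web-search")` yields `{ server: "docs", tool: "web-search" }`, matching how `search_docs` resolves above, while an inline tool definition is not a string and falls through untouched.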

Wildcard: include all tools from a server

Include every tool exported by a server using *:
---
text_config:
  tools:
    all: mcp://docs/*
---
  • The alias key (all) is ignored; server tools are added by their original names.
  • If a tool name collides with an existing inline tool, the later-added tool overwrites the earlier one.
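The overwrite behavior is the same "later keys win" rule as plain object spreading. A sketch, assuming inline tools are added before server tools:

```typescript
// Sketch of the merge order: inline tools first, then MCP server tools.
// A colliding key ("summarize") ends up pointing at the later-added tool.
const inlineTools = { summarize: "inline-impl" };
const serverTools = { "web-search": "mcp-impl", summarize: "mcp-impl" };
const merged = { ...inlineTools, ...serverTools };
// merged.summarize is now the MCP server's tool, not the inline one.
```

If you need both implementations, give the inline tool a distinct name rather than relying on merge order.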

3) Format and run (AI SDK example)

import { createAgentMarkClient, VercelAIModelRegistry, VercelAIToolRegistry, McpServerRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";

const loader = new ApiLoader({ apiKey: process.env.AGENTMARK_API_KEY! });
const modelRegistry = new VercelAIModelRegistry();
modelRegistry.registerModels(["gpt-4o-mini"], (name) => ({ name } as any));

const toolRegistry = new VercelAIToolRegistry({
  summarize: ({ text, maxSentences = 2 }: { text: string; maxSentences?: number }) => {
    const sentences = String(text).split(/(?<=[.!?])\s+/).slice(0, maxSentences);
    return { summary: sentences.join(" ") };
  },
});

const mcpServers = new McpServerRegistry({
  test: { command: "npx", args: ["-y", "@mastra/mcp-docs-server"] },
});

const agentMark = createAgentMarkClient({
  loader,
  modelRegistry,
  toolRegistry,
  mcpServers,
});

(async () => {
  const prompt = await agentMark.loadTextPrompt("./mcp-example.prompt.mdx");
  const input = await prompt.format();
  // Pass input to your AI SDK, e.g. generateText(input)
})();

Notes and best practices

  • Keep server configs minimal to pass type guards.
  • Prefer environment interpolation for portability and secrets hygiene.
  • Use wildcard import to quickly expose a server’s full tool surface; be mindful of name collisions.
