The AgentMark client is configured in `agentmark.client.ts` (or `agentmark_client.py` for Python). It connects your prompts to AI models, tools, evaluations, and prompt loading, and is used by the CLI, the platform, and your application code.
Basic Configuration
The client file is generated by `npm create agentmark@latest`. Each adapter has its own client pattern:
AI SDK (Vercel)
Claude Agent SDK
Mastra
Pydantic AI (Python)
```typescript
// AI SDK (Vercel) adapter
import {
  createAgentMarkClient,
  VercelAIModelRegistry,
  VercelAIToolRegistry,
  EvalRegistry,
} from "@agentmark-ai/ai-sdk-v5-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";
import { openai } from "@ai-sdk/openai";

const loader =
  process.env.NODE_ENV === "development"
    ? ApiLoader.local({
        baseUrl: process.env.AGENTMARK_BASE_URL || "http://localhost:9418",
      })
    : ApiLoader.cloud({
        apiKey: process.env.AGENTMARK_API_KEY!,
        appId: process.env.AGENTMARK_APP_ID!,
      });

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name))
  .registerModels(["dall-e-3"], (name) => openai.image(name))
  .registerModels(["tts-1-hd"], (name) => openai.speech(name));

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
});
```

Install: `npm install @agentmark-ai/ai-sdk-v5-adapter @agentmark-ai/loader-api @ai-sdk/openai`
```typescript
// Claude Agent SDK adapter
import {
  createAgentMarkClient,
  ClaudeAgentModelRegistry,
  EvalRegistry,
} from "@agentmark-ai/claude-agent-sdk-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";

const loader =
  process.env.NODE_ENV === "development"
    ? ApiLoader.local({
        baseUrl: process.env.AGENTMARK_BASE_URL || "http://localhost:9418",
      })
    : ApiLoader.cloud({
        apiKey: process.env.AGENTMARK_API_KEY!,
        appId: process.env.AGENTMARK_APP_ID!,
      });

const modelRegistry = ClaudeAgentModelRegistry.createDefault();

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  adapterOptions: {
    permissionMode: "bypassPermissions",
    maxTurns: 20,
  },
});
```

Install: `npm install @agentmark-ai/claude-agent-sdk-adapter @agentmark-ai/loader-api`
The `adapterOptions` are unique to this adapter:

| Option | Description |
|---|---|
| `permissionMode` | `'default'`, `'acceptEdits'`, `'bypassPermissions'`, or `'plan'` |
| `maxTurns` | Maximum number of agent turns |
| `maxBudgetUsd` | Spending limit per run, in USD |
| `cwd` | Working directory for the agent |
| `allowedTools` | Whitelist of tool names |
| `disallowedTools` | Blacklist of tool names |
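Taken together, a fuller `adapterOptions` object might look like the following sketch. All values here are placeholders chosen for illustration, not recommendations:

```typescript
// Illustrative adapterOptions combining the documented fields.
// All values are placeholders, not recommendations.
const adapterOptions = {
  permissionMode: "acceptEdits" as const, // 'default' | 'acceptEdits' | 'bypassPermissions' | 'plan'
  maxTurns: 10,                     // stop the agent after 10 turns
  maxBudgetUsd: 2.5,                // abort once the run has spent $2.50
  cwd: "./workspace",               // agent's working directory
  allowedTools: ["Read", "Grep"],   // whitelist of tool names
  disallowedTools: ["Bash"],        // blacklist of tool names
};
```

Pass this object as `adapterOptions` when calling `createAgentMarkClient`, as in the example above.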
```typescript
// Mastra adapter
import {
  createAgentMarkClient,
  MastraModelRegistry,
  MastraToolRegistry,
  EvalRegistry,
} from "@agentmark-ai/mastra-v0-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";
import { openai } from "@ai-sdk/openai";

const loader =
  process.env.NODE_ENV === "development"
    ? ApiLoader.local({
        baseUrl: process.env.AGENTMARK_BASE_URL || "http://localhost:9418",
      })
    : ApiLoader.cloud({
        apiKey: process.env.AGENTMARK_API_KEY!,
        appId: process.env.AGENTMARK_APP_ID!,
      });

const modelRegistry = new MastraModelRegistry()
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name));

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
});
```

Install: `npm install @agentmark-ai/mastra-v0-adapter @agentmark-ai/loader-api @ai-sdk/openai`
```python
# Pydantic AI adapter (Python)
import os

from agentmark_pydantic_ai_v0 import (
    create_pydantic_ai_client,
    create_default_model_registry,
    PydanticAIToolRegistry,
)
from agentmark.prompt_core import EvalRegistry
from agentmark.loader_api import ApiLoader

if os.getenv("AGENTMARK_ENV") == "development":
    loader = ApiLoader.local(
        base_url=os.getenv("AGENTMARK_BASE_URL", "http://localhost:9418")
    )
else:
    loader = ApiLoader.cloud(
        api_key=os.environ["AGENTMARK_API_KEY"],
        app_id=os.environ["AGENTMARK_APP_ID"],
    )

model_registry = create_default_model_registry()
tool_registry = PydanticAIToolRegistry()

client = create_pydantic_ai_client(
    model_registry=model_registry,
    tool_registry=tool_registry,
    loader=loader,
)
```

Install: `pip install agentmark-pydantic-ai-v0 agentmark-loader-api`
`create_default_model_registry()` auto-resolves model names to providers: `gpt-*` to OpenAI, `claude-*` to Anthropic, `gemini-*` to Google, and so on.
Prompt Loading
The loader determines how prompts are fetched at runtime. AgentMark provides two loaders:
ApiLoader (Recommended)
FileLoader (Self-Hosted)
Use `ApiLoader` for both development and production:

```typescript
import { ApiLoader } from "@agentmark-ai/loader-api";

// Development — loads from local dev server
const loader = ApiLoader.local({
  baseUrl: "http://localhost:9418",
});

// Production — loads from AgentMark Cloud CDN
const loader = ApiLoader.cloud({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});
```
`ApiLoader.cloud()` fetches prompts from the AgentMark API with a 60-second TTL cache. `ApiLoader.local()` fetches from your running `agentmark dev` server.

Use `FileLoader` to load pre-built prompts from disk (no cloud dependency):

```typescript
import { FileLoader } from "@agentmark-ai/loader-file";

const loader = new FileLoader("./dist/agentmark");
```

Requires running `agentmark build --out dist/agentmark` before deployment to compile your `.prompt.mdx` files into JSON.

A common pattern is to use `ApiLoader.local()` in development and `FileLoader` in production:

```typescript
import { ApiLoader } from "@agentmark-ai/loader-api";
import { FileLoader } from "@agentmark-ai/loader-file";

const loader =
  process.env.NODE_ENV === "development"
    ? ApiLoader.local({ baseUrl: "http://localhost:9418" })
    : new FileLoader("./dist/agentmark");
```
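The 60-second TTL caching that `ApiLoader.cloud()` performs can be illustrated with a minimal sketch. This shows only the caching behavior, not the loader's actual implementation:

```typescript
// Minimal TTL cache: entries expire ttlMs after being set.
type Entry = { value: string; expiresAt: number };

class TtlCache {
  private entries = new Map<string, Entry>();
  constructor(private ttlMs: number) {}

  set(key: string, value: string, now = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now = Date.now()): string | undefined {
    const entry = this.entries.get(key);
    if (!entry || now >= entry.expiresAt) return undefined; // miss or expired
    return entry.value;
  }
}

const cache = new TtlCache(60_000); // 60-second TTL, matching ApiLoader.cloud()
cache.set("greeting.prompt.mdx", "compiled prompt JSON", 0);
console.log(cache.get("greeting.prompt.mdx", 30_000)); // → "compiled prompt JSON" (within TTL)
console.log(cache.get("greeting.prompt.mdx", 61_000)); // → undefined (expired)
```

Within the TTL, prompts can be served without a network round-trip; after expiry, the next request refetches from the API.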
Registering Models
The model registry maps model names (from prompt frontmatter) to actual AI SDK model instances. Each adapter has its own registry class.
AI SDK (Vercel)
Claude Agent SDK
Mastra
Pydantic AI (Python)
```typescript
import { VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

const modelRegistry = new VercelAIModelRegistry()
  // Language models
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name))
  .registerModels(["claude-sonnet-4-20250514"], (name) => anthropic(name))
  .registerModels(["gemini-2.0-flash"], (name) => google(name))
  // Image models
  .registerModels(["dall-e-3"], (name) => openai.image(name))
  // Speech models
  .registerModels(["tts-1-hd"], (name) => openai.speech(name));
```

You can also use regex patterns for dynamic matching:

```typescript
const modelRegistry = new VercelAIModelRegistry()
  .registerModels(/^gpt-/, (name) => openai(name))
  .registerModels(/^claude-/, (name) => anthropic(name));
```
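Conceptually, resolution checks each registration against the requested model name, whether the pattern is a list of exact names or a regex. A minimal sketch of that lookup (illustrative only, not the adapter's actual implementation):

```typescript
// Illustrative first-match model resolution over string-list and regex
// registrations. Factories return strings here for simplicity; the real
// registry returns model instances.
type Factory = (name: string) => string;
type Registration = { pattern: string[] | RegExp; factory: Factory };

const registrations: Registration[] = [
  { pattern: ["gpt-4o", "gpt-4o-mini"], factory: (n) => `openai:${n}` },
  { pattern: /^claude-/, factory: (n) => `anthropic:${n}` },
];

function resolve(name: string): string {
  for (const { pattern, factory } of registrations) {
    const matches = Array.isArray(pattern)
      ? pattern.includes(name)
      : pattern.test(name);
    if (matches) return factory(name);
  }
  throw new Error(`Model "${name}" is not registered`);
}

console.log(resolve("gpt-4o")); // → "openai:gpt-4o"
console.log(resolve("claude-sonnet-4-20250514")); // → "anthropic:claude-sonnet-4-20250514"
```

In this sketch matching is first-match-wins, so exact names are checked before broad regex patterns when both could apply.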
```typescript
import { ClaudeAgentModelRegistry } from "@agentmark-ai/claude-agent-sdk-adapter";

// Option 1: Default registry (passes model names through)
const modelRegistry = ClaudeAgentModelRegistry.createDefault();

// Option 2: Custom configuration per model
const modelRegistry = new ClaudeAgentModelRegistry()
  .registerModels(["claude-sonnet-4-20250514"], (name) => ({
    model: name,
    maxThinkingTokens: 10000,
  }));
```
```typescript
import { MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";
import { openai } from "@ai-sdk/openai";

const modelRegistry = new MastraModelRegistry()
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name));
```
```python
from agentmark_pydantic_ai_v0 import (
    PydanticAIModelRegistry,
    create_default_model_registry,
)

# Option 1: Auto-resolves model names to providers
model_registry = create_default_model_registry()

# Option 2: Custom registry
model_registry = PydanticAIModelRegistry()
model_registry.register_models(
    ["gpt-4o", "gpt-4o-mini"],
    lambda name, opts=None: f"openai:{name}",
)
```
Models referenced in prompt frontmatter must be registered in the model registry:

```yaml
---
text_config:
  model_name: gpt-4o
---
```

Use `agentmark pull-models` to add built-in models to your `agentmark.json`. You still need to register them in the client for runtime use.
Registering Tools
Tools allow prompts to call functions during generation. Register them in the tool registry and reference them by name in prompt frontmatter.
AI SDK (Vercel)
Claude Agent SDK
Mastra
Pydantic AI (Python)
```typescript
import { VercelAIToolRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";

const toolRegistry = new VercelAIToolRegistry()
  .register("search_knowledgebase", async ({ query }) => {
    const results = await searchKB(query); // searchKB: your own search implementation
    return { articles: results };
  })
  .register("get_weather", async ({ location }) => {
    return { temp: 72, condition: "sunny" };
  });

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  toolRegistry,
});
```
```typescript
import { ClaudeAgentToolRegistry } from "@agentmark-ai/claude-agent-sdk-adapter";

const toolRegistry = new ClaudeAgentToolRegistry()
  .register("search_knowledgebase", async ({ query }) => {
    return { articles: await searchKB(query) };
  });
```
```typescript
import { MastraToolRegistry } from "@agentmark-ai/mastra-v0-adapter";

const toolRegistry = new MastraToolRegistry()
  .register("search_knowledgebase", async ({ query }) => {
    return { articles: await searchKB(query) };
  });
```
```python
from agentmark_pydantic_ai_v0 import PydanticAIToolRegistry

tool_registry = PydanticAIToolRegistry()
tool_registry.register("search_knowledgebase", search_kb_handler)
```
Reference tools in prompt frontmatter:
```yaml
---
text_config:
  model_name: gpt-4o
  tools:
    - search_knowledgebase
---
```
Learn more about tools
Registering Evaluations
Evaluations score prompt outputs during experiments. The EvalRegistry is shared across all adapters.
```typescript
import { EvalRegistry } from "@agentmark-ai/prompt-core";

const evalRegistry = new EvalRegistry()
  .register("exact_match", ({ output, expectedOutput }) => {
    const match = output === expectedOutput;
    return {
      score: match ? 1 : 0,
      passed: match,
      reason: match ? undefined : `Expected "${expectedOutput}", got "${output}"`,
    };
  })
  .register("contains_keyword", ({ output, expectedOutput }) => {
    const contains = String(output).includes(String(expectedOutput));
    return { score: contains ? 1 : 0, passed: contains };
  });
```
You can also register multiple evals at once by passing an array of names:
```typescript
evalRegistry.register(["length_check", "word_count"], ({ output }) => {
  return { score: String(output).length > 10 ? 1 : 0, passed: true };
});
```
The EvalRegistry is re-exported by every adapter, so you can import it from your adapter package too:
```typescript
import { EvalRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
```
Pass the registry to your client:
```typescript
export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  evalRegistry,
});
```
Reference evals in prompt frontmatter:
```yaml
---
test_settings:
  dataset: ./datasets/sentiment.jsonl
  evals:
    - exact_match
---
```
Learn more about evaluations
MCP Servers
MCP servers provide additional tools to your prompts. The AI SDK v5 adapter uses an `McpServerRegistry`:

```typescript
import { McpServerRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";

const mcpRegistry = new McpServerRegistry()
  .register("filesystem", {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "./data"],
  })
  .register("github", {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-github"],
    env: { GITHUB_PERSONAL_ACCESS_TOKEN: process.env.GITHUB_TOKEN! },
  });

// For remote MCP servers (URL/SSE):
mcpRegistry.register("docs", {
  url: "https://example.com/mcp",
  headers: { Authorization: "Bearer your-token" },
});

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  mcpRegistry,
});
```
You can also register all servers from your agentmark.json at once:
```typescript
mcpRegistry.registerServers({
  docs: { url: "https://example.com/mcp" },
  filesystem: { command: "npx", args: ["..."] },
});
```
MCP servers configured in agentmark.json are available in the platform editor. MCP servers configured in the client code are available at runtime.
Learn more about MCP
Observability
The AgentMark SDK provides OpenTelemetry-based tracing for monitoring prompts in production.
```typescript
import { AgentMarkSDK } from "@agentmark-ai/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});

// Initialize tracing (call once at startup)
sdk.initTracing();

// Use the SDK's built-in loader
const loader = sdk.getApiLoader();
```

`initTracing()` sets up an OpenTelemetry `BatchSpanProcessor` that exports traces to the AgentMark API. For debugging, use `sdk.initTracing({ disableBatch: true })` to export each span immediately.

In Python:

```python
import os

from agentmark_sdk import AgentMarkSDK

sdk = AgentMarkSDK(
    api_key=os.environ["AGENTMARK_API_KEY"],
    app_id=os.environ["AGENTMARK_APP_ID"],
)
sdk.init_tracing()
```
Learn more about observability
Type Safety
Run `agentmark build` to generate `agentmark.types.ts` with TypeScript types for all your prompts. Pass the type to `createAgentMarkClient` for autocomplete on prompt names, props, and outputs:
```typescript
import type { AgentMarkTypes } from "./agentmark.types";

export const client = createAgentMarkClient<AgentMarkTypes>({
  loader,
  modelRegistry,
});

// Type-checked: prompt name, props, and output
const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice", role: "developer" }, // type-checked
});
```
Learn more about type safety
Using the Client
Import the client in your application to load and run prompts:
AI SDK (Vercel)
Claude Agent SDK
Mastra
Pydantic AI (Python)
```typescript
import { client } from "./agentmark.client";
import { generateText } from "ai";

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice" },
  telemetry: { isEnabled: true },
});

const result = await generateText(input);
console.log(result.text);
```
```typescript
import { client } from "./agentmark.client";
import { withTracing } from "@agentmark-ai/claude-agent-sdk-adapter";
import Anthropic from "@anthropic-ai/claude-code";

const anthropic = new Anthropic();

const prompt = await client.loadTextPrompt("agent-task.prompt.mdx");
const adapted = await prompt.format({
  props: { task: "Refactor the auth module" },
  telemetry: { isEnabled: true },
});

const result = await withTracing(
  (opts) => anthropic.messages.stream(opts),
  adapted
);

for await (const message of result) {
  console.log(message);
}
```
```typescript
import { client } from "./agentmark.client";

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice" },
  telemetry: { isEnabled: true },
});

const result = await agent.generate(input); // agent: your Mastra agent instance
```
```python
from agentmark_client import client

prompt = await client.load_text_prompt("greeting.prompt.mdx")
input_data = await prompt.format(props={"name": "Alice"})
result = await agent.run(input_data)  # agent: your Pydantic AI agent instance
```
Troubleshooting
| Issue | Solution |
|---|---|
| Model not found | Ensure the model name in prompt frontmatter is registered in your model registry |
| Tool not available | Check that the tool is registered in the tool registry and the name matches the prompt config |
| Loader connection failed | Verify `agentmark dev` is running for local mode, or check `AGENTMARK_API_KEY` / `AGENTMARK_APP_ID` for cloud mode |
| MCP server not connecting | Verify the command/args are correct and any required env vars are set |
| Type errors | Run `agentmark build` to regenerate `agentmark.types.ts` |