The AgentMark client is configured in agentmark.client.ts (or agentmark_client.py). It connects your prompts to AI models, tools, evaluations, and prompt loading — used by the CLI, AgentMark Cloud, and your application code.
The loader determines how prompts are fetched at runtime. AgentMark provides two loaders:
ApiLoader (recommended)
FileLoader (self-hosted)
Use ApiLoader for both development and production:
```typescript
import { ApiLoader } from "@agentmark-ai/loader-api";

// Development — loads from local dev server
const loader = ApiLoader.local({
  baseUrl: "http://localhost:9418",
});

// Production — loads from AgentMark Cloud CDN
const loader = ApiLoader.cloud({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});
```
ApiLoader.cloud() fetches prompts from the AgentMark API with a 60-second TTL cache. ApiLoader.local() fetches from your running agentmark dev server.
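The 60-second TTL means repeated prompt loads within a minute are served from memory instead of hitting the network. As a rough illustration of the idea (a conceptual sketch, not the actual ApiLoader internals — only the 60-second TTL comes from the docs above):

```typescript
// Conceptual TTL cache: entries expire after ttlMs, forcing a refetch.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();
  constructor(private ttlMs: number = 60_000) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry || Date.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined; // miss or expired: caller refetches from the API
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

Within the TTL window a lookup returns the cached prompt; once the entry expires, the next lookup misses and a fresh copy is fetched.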
Use FileLoader to load pre-built prompts from disk (no AgentMark Cloud dependency):
```typescript
import { FileLoader } from "@agentmark-ai/loader-file";

const loader = new FileLoader("./dist/agentmark");
```
Requires running npx agentmark build --out dist/agentmark before deployment to compile your .prompt.mdx files into JSON.

A common pattern is to use ApiLoader.local() in development and FileLoader in production:
```typescript
import { ApiLoader } from "@agentmark-ai/loader-api";
import { FileLoader } from "@agentmark-ai/loader-file";

const loader =
  process.env.NODE_ENV === "development"
    ? ApiLoader.local({ baseUrl: "http://localhost:9418" })
    : new FileLoader("./dist/agentmark");
```
The "<provider>:<model>" string is the format Pydantic AI uses to pick a provider at runtime. AgentMark doesn’t ship a pre-built default registry for Python — register the providers you use.
Models referenced in prompt frontmatter must be registered in the model registry:
```yaml
---
text_config:
  model_name: gpt-4o
---
```
Use npx agentmark pull-models to add built-in models to your agentmark.json. You still need to register them in the client for runtime use.
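To make the resolution step concrete, here is a conceptual sketch of how a prefix-based registry can map "provider:model" strings to a provider (an illustration of the mechanism, not the actual AgentMark registry classes):

```typescript
// Conceptual sketch: registered prefixes resolve "provider:model" strings.
type ModelFactory = (modelName: string) => { provider: string; model: string };

const registry = new Map<string, ModelFactory>();

function registerModels(prefixes: string[], factory: ModelFactory): void {
  for (const prefix of prefixes) registry.set(prefix, factory);
}

function resolve(name: string): { provider: string; model: string } {
  const [prefix] = name.split(":");
  const factory = registry.get(prefix);
  if (!factory) throw new Error(`No provider registered for "${prefix}"`);
  return factory(name);
}

// Register the OpenAI prefix; "openai:gpt-4o" now resolves at runtime.
registerModels(["openai"], (name) => ({
  provider: "openai",
  model: name.split(":")[1],
}));
```

Models referenced with an unregistered prefix fail at resolution time, which is why every provider your prompts use must be registered in the client.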
Tools allow prompts to call functions during generation. Pass tools directly as a plain object to createAgentMarkClient and reference them by name in prompt frontmatter.
AI SDK (Vercel)
Claude Agent SDK
Claude Agent SDK (Python)
Mastra
Pydantic AI (Python)
Use the native tool() function from the ai package to define tools. AI SDK v5 uses inputSchema (Zod) — parameters is the v4 name and fails type-checking in v5.
```typescript
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { tool } from "ai";
import { z } from "zod";

const searchTool = tool({
  description: "Search the knowledge base",
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ query }) => ({ results: [`Result for ${query}`] }),
});

const weatherTool = tool({
  description: "Get current weather for a location",
  inputSchema: z.object({ location: z.string() }),
  execute: async ({ location }) => ({ temp: 72, condition: "sunny" }),
});

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  tools: {
    search_knowledgebase: searchTool,
    get_weather: weatherTool,
  },
});
```
The Claude Agent SDK adapter uses mcpServers (camelCase) instead of tools, since the Claude agent accesses tools through MCP:
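A sketch of the mcpServers shape, following the local (command/args) versus remote (url/headers) convention this doc describes — the server names and values here are illustrative, not a prescribed set:

```typescript
// Illustrative mcpServers config. Each key is the server name; local servers
// launch a subprocess, remote servers are reached over HTTP.
const mcpServers = {
  filesystem: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "./data"],
  },
  internalApi: {
    url: "https://mcp.example.com/sse",
    headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` },
  },
};
```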
Pass native Python functions (or pydantic_ai.Tool objects) as a tools list. The adapter filters the list at adapt time against the tool names in the prompt’s frontmatter:
```python
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client

async def search_knowledgebase(query: str) -> dict:
    return {"results": [f"Result for {query}"]}

client = create_pydantic_ai_client(
    model_registry=model_registry,
    tools=[search_knowledgebase],
    loader=loader,
)
```
Eval functions score prompt outputs during experiments. Score schemas are defined separately in agentmark.json (see Project config) and deployed to AgentMark Cloud. Eval functions are registered in your client config and connected to scores by name.
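The exact eval signature depends on your adapter; as a hypothetical sketch (the function shape is an assumption, not the documented AgentMark API), an eval function takes an output and an expected value and returns a named score:

```typescript
// Hypothetical eval function shape: score an output against an expected
// value, returning a score in [0, 1] under a name that matches a score
// schema defined in agentmark.json.
type EvalResult = { name: string; score: number; reason?: string };

function exactMatch(output: string, expected: string): EvalResult {
  const pass = output.trim() === expected.trim();
  return {
    name: "exact_match",
    score: pass ? 1 : 0,
    reason: pass ? "output matched expected" : "output differed from expected",
  };
}
```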
Each key in the mcpServers object is the server name. Local servers use command and args, while remote servers use url and optional headers.
MCP servers configured in agentmark.json are available in the AgentMark Dashboard prompt editor. MCP servers configured in the client code are available at runtime.
The AgentMark SDK provides OpenTelemetry-based tracing for monitoring prompts in production.
TypeScript
Python
```typescript
import { AgentMarkSDK } from "@agentmark-ai/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});

// Initialize tracing (call once at startup)
sdk.initTracing();

// Use the SDK's built-in loader
const loader = sdk.getApiLoader();
```
initTracing() sets up an OpenTelemetry BatchSpanProcessor that exports traces to the AgentMark API. For debugging, use sdk.initTracing({ disableBatch: true }) for immediate span export. To redact sensitive data from traces, pass a mask function. See PII masking.
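As an illustration of what a mask function's redaction logic might look like (how the function is wired into initTracing() is covered under PII masking; this example is just one redaction strategy):

```typescript
// Example redaction logic: scrub email addresses from string values before
// spans leave the application.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function maskValue(value: unknown): unknown {
  if (typeof value === "string") return value.replace(EMAIL_RE, "[REDACTED]");
  return value; // non-string attributes pass through unchanged
}
```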
```python
import os

from agentmark_sdk import AgentMarkSDK

sdk = AgentMarkSDK(
    api_key=os.environ["AGENTMARK_API_KEY"],
    app_id=os.environ["AGENTMARK_APP_ID"],
)
sdk.init_tracing()
```
To redact sensitive data from traces before they leave your application, pass a mask function. See PII masking.
Run npx agentmark generate-types --root-dir agentmark > agentmark.types.ts to generate TypeScript types for all your prompts. The generated file exports a default interface named AgentmarkTypes. Pass it to createAgentMarkClient for autocomplete on prompt names, props, and outputs:
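Conceptually, the generated interface constrains prompt names and shapes at compile time. A minimal sketch with a stand-in interface (the real one is produced by generate-types from your .prompt.mdx files, and the helper here is illustrative, not the client's actual signature):

```typescript
// Stand-in for the generated AgentmarkTypes interface.
interface AgentmarkTypes {
  "welcome-email.prompt.mdx": {
    input: { userName: string };
    output: { subject: string; body: string };
  };
}

// A generic like createAgentMarkClient<AgentmarkTypes> can restrict lookups
// to known prompt names; a misspelled name becomes a compile-time error.
function promptName<T>(name: keyof T & string): string {
  return name;
}

const promptKey = promptName<AgentmarkTypes>("welcome-email.prompt.mdx");
```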