The AgentMark client is configured in agentmark.client.ts (or agentmark_client.py for Python). It connects your prompts to AI models, tools, evaluations, and a prompt loader, and is used by the CLI, the platform, and your application code.

Basic Configuration

The client file is generated by npm create agentmark@latest. Each adapter has its own client pattern:
agentmark.client.ts
import {
  createAgentMarkClient,
  VercelAIModelRegistry,
  VercelAIToolRegistry,
  EvalRegistry,
} from "@agentmark-ai/ai-sdk-v5-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";
import { openai } from "@ai-sdk/openai";

const loader =
  process.env.NODE_ENV === "development"
    ? ApiLoader.local({
        baseUrl: process.env.AGENTMARK_BASE_URL || "http://localhost:9418",
      })
    : ApiLoader.cloud({
        apiKey: process.env.AGENTMARK_API_KEY!,
        appId: process.env.AGENTMARK_APP_ID!,
      });

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name))
  .registerModels(["dall-e-3"], (name) => openai.image(name))
  .registerModels(["tts-1-hd"], (name) => openai.speech(name));

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
});
Install:
npm install @agentmark-ai/ai-sdk-v5-adapter @agentmark-ai/loader-api @ai-sdk/openai

Prompt Loading

The loader determines how prompts are fetched at runtime. AgentMark provides two loaders:

- ApiLoader.local: fetches prompts from the local dev server started by agentmark dev (default http://localhost:9418). Use this during development so edits to your prompt files are picked up immediately.
- ApiLoader.cloud: fetches published prompts from the AgentMark platform, authenticated with your API key and app ID. Use this in production.
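Each loader can also be constructed on its own; a minimal sketch, mirroring the Basic Configuration example above (the environment variable names are the same ones used there):

```typescript
import { ApiLoader } from "@agentmark-ai/loader-api";

// Local loader: talks to the dev server started by `agentmark dev`.
const localLoader = ApiLoader.local({
  baseUrl: process.env.AGENTMARK_BASE_URL || "http://localhost:9418",
});

// Cloud loader: fetches published prompts from the AgentMark platform.
const cloudLoader = ApiLoader.cloud({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});
```

Pick one loader (or switch on NODE_ENV as shown in Basic Configuration) and pass it to createAgentMarkClient.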

Registering Models

The model registry maps model names (from prompt frontmatter) to actual AI SDK model instances. Each adapter has its own registry class.
import { VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

const modelRegistry = new VercelAIModelRegistry()
  // Language models
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name))
  .registerModels(["claude-sonnet-4-20250514"], (name) => anthropic(name))
  .registerModels(["gemini-2.0-flash"], (name) => google(name))
  // Image models
  .registerModels(["dall-e-3"], (name) => openai.image(name))
  // Speech models
  .registerModels(["tts-1-hd"], (name) => openai.speech(name));
You can also use regex patterns for dynamic matching:
const modelRegistry = new VercelAIModelRegistry()
  .registerModels(/^gpt-/, (name) => openai(name))
  .registerModels(/^claude-/, (name) => anthropic(name));
Models referenced in prompt frontmatter must be registered in the model registry:
---
text_config:
  model_name: gpt-4o
---
Use agentmark pull-models to add built-in models to your agentmark.json. You still need to register them in the client for runtime use.

Registering Tools

Tools allow prompts to call functions during generation. Register them in the tool registry and reference them by name in prompt frontmatter.
import { VercelAIToolRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";

const toolRegistry = new VercelAIToolRegistry()
  .register("search_knowledgebase", async ({ query }) => {
    const results = await searchKB(query);
    return { articles: results };
  })
  .register("get_weather", async ({ location }) => {
    return { temp: 72, condition: "sunny" };
  });

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  toolRegistry,
});
Reference tools in prompt frontmatter:
---
text_config:
  model_name: gpt-4o
  tools:
    - search_knowledgebase
---
Learn more about tools

Registering Evaluations

Evaluations score prompt outputs during experiments. The EvalRegistry is shared across all adapters.
import { EvalRegistry } from "@agentmark-ai/prompt-core";

const evalRegistry = new EvalRegistry()
  .register("exact_match", ({ output, expectedOutput }) => {
    const match = output === expectedOutput;
    return {
      score: match ? 1 : 0,
      passed: match,
      reason: match ? undefined : `Expected "${expectedOutput}", got "${output}"`,
    };
  })
  .register("contains_keyword", ({ output, expectedOutput }) => {
    const contains = String(output).includes(String(expectedOutput));
    return { score: contains ? 1 : 0, passed: contains };
  });
You can also register multiple evals at once by passing an array of names:
evalRegistry.register(["length_check", "word_count"], ({ output }) => {
  const ok = String(output).length > 10;
  return { score: ok ? 1 : 0, passed: ok };
});
The EvalRegistry is re-exported by every adapter, so you can import it from your adapter package too:
import { EvalRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
Pass the registry to your client:
export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  evalRegistry,
});
Reference evals in prompt frontmatter:
---
test_settings:
  dataset: ./datasets/sentiment.jsonl
  evals:
    - exact_match
---
Learn more about evaluations

MCP Servers

MCP servers provide additional tools to your prompts. The AI SDK v5 adapter uses a McpServerRegistry:
import { McpServerRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";

const mcpRegistry = new McpServerRegistry()
  .register("filesystem", {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "./data"],
  })
  .register("github", {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-github"],
    env: { GITHUB_PERSONAL_ACCESS_TOKEN: process.env.GITHUB_TOKEN! },
  });

// For remote MCP servers (URL/SSE):
mcpRegistry.register("docs", {
  url: "https://example.com/mcp",
  headers: { Authorization: "Bearer your-token" },
});

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  mcpRegistry,
});
You can also register all servers from your agentmark.json at once:
mcpRegistry.registerServers({
  docs: { url: "https://example.com/mcp" },
  filesystem: { command: "npx", args: ["..."] },
});
MCP servers configured in agentmark.json are available in the platform editor. MCP servers configured in the client code are available at runtime.
Learn more about MCP

Observability

The AgentMark SDK provides OpenTelemetry-based tracing for monitoring prompts in production.
import { AgentMarkSDK } from "@agentmark-ai/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});

// Initialize tracing (call once at startup)
sdk.initTracing();

// Use the SDK's built-in loader
const loader = sdk.getApiLoader();
initTracing() sets up an OpenTelemetry BatchSpanProcessor that exports traces to the AgentMark API. For debugging, use sdk.initTracing({ disableBatch: true }) for immediate span export.
Learn more about observability

Type Safety

Run agentmark build to generate agentmark.types.ts with TypeScript types for all your prompts. Pass the type to createAgentMarkClient for autocomplete on prompt names, props, and outputs:
import type { AgentMarkTypes } from "./agentmark.types";

export const client = createAgentMarkClient<AgentMarkTypes>({
  loader,
  modelRegistry,
});

// Type-checked: prompt name, props, and output
const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice", role: "developer" }, // type-checked
});
Learn more about type safety

Using the Client

Import the client in your application to load and run prompts:
import { client } from "./agentmark.client";
import { generateText } from "ai";

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice" },
  telemetry: { isEnabled: true },
});

const result = await generateText(input);
console.log(result.text);

Troubleshooting

| Issue | Solution |
| --- | --- |
| Model not found | Ensure the model name in prompt frontmatter is registered in your model registry |
| Tool not available | Check the tool is registered in the tool registry and the name matches the prompt config |
| Loader connection failed | Verify agentmark dev is running for local mode, or check AGENTMARK_API_KEY / AGENTMARK_APP_ID for cloud mode |
| MCP server not connecting | Verify the command/args are correct and any required env vars are set |
| Type errors | Run agentmark build to regenerate agentmark.types.ts |
