

The AgentMark client is configured in agentmark.client.ts (or agentmark_client.py). It wires your prompts to AI models, tools, evals, and a prompt loader, and it is used by the CLI, AgentMark Cloud, and your application code.

Basic configuration

The client file is generated by npm create agentmark@latest. Each adapter has its own client pattern:
agentmark.client.ts
import {
  createAgentMarkClient,
  VercelAIModelRegistry,
} from "@agentmark-ai/ai-sdk-v5-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";
import { openai } from "@ai-sdk/openai";

const loader =
  process.env.NODE_ENV === "development"
    ? ApiLoader.local({
        baseUrl: process.env.AGENTMARK_BASE_URL || "http://localhost:9418",
      })
    : ApiLoader.cloud({
        apiKey: process.env.AGENTMARK_API_KEY!,
        appId: process.env.AGENTMARK_APP_ID!,
      });

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name))
  .registerModels(["dall-e-3"], (name) => openai.image(name))
  .registerModels(["tts-1-hd"], (name) => openai.speech(name));

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
});
Install:
npm install @agentmark-ai/ai-sdk-v5-adapter @agentmark-ai/loader-api @ai-sdk/openai

Prompt loading

The loader determines how prompts are fetched at runtime. AgentMark provides two loaders: ApiLoader.local, which fetches prompts from a local agentmark dev server (http://localhost:9418 by default), and ApiLoader.cloud, which fetches deployed prompts from AgentMark Cloud using your API key and app ID.
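Pulled out of the Basic configuration example above, each loader on its own looks like this:
import { ApiLoader } from "@agentmark-ai/loader-api";

// Local development: fetch prompts from the agentmark dev server.
const localLoader = ApiLoader.local({
  baseUrl: process.env.AGENTMARK_BASE_URL || "http://localhost:9418",
});

// Production: fetch deployed prompts from AgentMark Cloud.
const cloudLoader = ApiLoader.cloud({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});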

Registering models

The model registry maps model names (from prompt frontmatter) to actual AI SDK model instances. Each adapter has its own registry class.
import { VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

const modelRegistry = new VercelAIModelRegistry()
  // Language models
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name))
  .registerModels(["claude-sonnet-4-20250514"], (name) => anthropic(name))
  .registerModels(["gemini-2.0-flash"], (name) => google(name))
  // Image models
  .registerModels(["dall-e-3"], (name) => openai.image(name))
  // Speech models
  .registerModels(["tts-1-hd"], (name) => openai.speech(name));
You can also use regex patterns for dynamic matching:
const modelRegistry = new VercelAIModelRegistry()
  .registerModels(/^gpt-/, (name) => openai(name))
  .registerModels(/^claude-/, (name) => anthropic(name));
Models referenced in prompt frontmatter must be registered in the model registry:
---
text_config:
  model_name: gpt-4o
---
Use npx agentmark pull-models to add built-in models to your agentmark.json. You still need to register them in the client for runtime use.

Registering tools

Tools allow prompts to call functions during generation. Pass tools directly as a plain object to createAgentMarkClient and reference them by name in prompt frontmatter.
Use the native tool() function from the ai package to define tools. AI SDK v5 uses inputSchema (Zod) — parameters is the v4 name and fails type-checking in v5.
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { tool } from "ai";
import { z } from "zod";

const searchTool = tool({
  description: "Search the knowledge base",
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ query }) => ({ results: [`Result for ${query}`] }),
});

const weatherTool = tool({
  description: "Get current weather for a location",
  inputSchema: z.object({ location: z.string() }),
  execute: async ({ location }) => ({ temp: 72, condition: "sunny" }),
});

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  tools: {
    search_knowledgebase: searchTool,
    get_weather: weatherTool,
  },
});
Reference tools in prompt frontmatter:
---
text_config:
  model_name: gpt-4o
  tools:
    - search_knowledgebase
---
Learn more about tools

Registering evals

Eval functions score prompt outputs during experiments. Score schemas are defined separately in agentmark.json (see Project config) and deployed to AgentMark Cloud. Eval functions are registered in your client config and connected to scores by name.
import type { EvalFunction } from "@agentmark-ai/prompt-core";

const evals: Record<string, EvalFunction> = {
  exact_match: ({ output, expectedOutput }) => {
    const match = output === expectedOutput;
    return { score: match ? 1 : 0, passed: match };
  },
  contains_keyword: ({ output, expectedOutput }) => {
    const contains = String(output).includes(String(expectedOutput));
    return { passed: contains };
  },
};
Pass the evals to your client:
export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  evals,
});
Reference evals in prompt frontmatter:
---
test_settings:
  dataset: ./datasets/sentiment.jsonl
  evals:
    - exact_match
---
Learn more about evaluations

MCP servers

MCP servers provide additional tools to your prompts. Pass them as a plain mcpServers object to createAgentMarkClient:
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  mcpServers: {
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "./docs"],
    },
    github: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-github"],
      env: { GITHUB_PERSONAL_ACCESS_TOKEN: process.env.GITHUB_TOKEN! },
    },
    docs: {
      url: "https://docs.example.com/mcp",
      headers: { Authorization: "Bearer env(MCP_TOKEN)" },
    },
  },
});
Each key in the mcpServers object is the server name. Local servers use command and args, while remote servers use url and optional headers.
MCP servers configured in agentmark.json are available in the AgentMark Dashboard prompt editor. MCP servers configured in the client code are available at runtime.
Learn more about MCP

Observability

The AgentMark SDK provides OpenTelemetry-based tracing for monitoring prompts in production.
import { AgentMarkSDK } from "@agentmark-ai/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
});

// Initialize tracing (call once at startup)
sdk.initTracing();

// Use the SDK's built-in loader
const loader = sdk.getApiLoader();
initTracing() sets up an OpenTelemetry BatchSpanProcessor that exports traces to the AgentMark API. For debugging, call sdk.initTracing({ disableBatch: true }) to export spans immediately.
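One common pattern (a sketch, not something the SDK requires) is to key batching off the environment, mirroring the NODE_ENV check in the loader example above:
// Sketch: export spans immediately during local development,
// batch them in production. Assumes NODE_ENV is set as in the
// Basic configuration example.
sdk.initTracing({
  disableBatch: process.env.NODE_ENV === "development",
});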
You can also pass a mask function to redact sensitive data from traces before they leave your application:
import { AgentMarkSDK, createPiiMasker } from '@agentmark-ai/sdk';

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
  mask: createPiiMasker({ email: true, phone: true, ssn: true }),
});
sdk.initTracing();
Learn more about PII masking
Learn more about observability

Type safety

Run npx agentmark generate-types --root-dir agentmark > agentmark.types.ts to generate TypeScript types for all your prompts. The generated file exports a default interface named AgentmarkTypes. Pass it to createAgentMarkClient for autocomplete on prompt names, props, and outputs:
import type AgentmarkTypes from "./agentmark.types";
import { createAgentMarkClient } from "@agentmark-ai/ai-sdk-v5-adapter";

export const client = createAgentMarkClient<AgentmarkTypes>({
  loader,
  modelRegistry,
});

// Type-checked: prompt name, props, and output
const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice", role: "developer" }, // type-checked
});
Learn more about type safety

Using the client

Import the client in your application to load and run prompts:
import { client } from "./agentmark.client";
import { generateText } from "ai";

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice" },
  telemetry: { isEnabled: true },
});

const result = await generateText(input);
console.log(result.text);
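prompt.format() returns the settings object that generateText consumes, so the same input can plausibly be handed to other AI SDK calls. A sketch of a streaming variant, assuming the formatted input carries the model and messages that streamText expects:
import { client } from "./agentmark.client";
import { streamText } from "ai";

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({ props: { name: "Alice" } });

// Stream tokens as they arrive instead of waiting for the full text.
const result = streamText(input);
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}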

Troubleshooting

Model not found: Ensure the model name in prompt frontmatter is registered in your model registry.
Tool not available: Check that the tool is included in the tools object passed to createAgentMarkClient and that its name matches the prompt config.
Loader connection failed: Verify agentmark dev is running for local mode, or check AGENTMARK_API_KEY / AGENTMARK_APP_ID for Cloud mode.
MCP server not connecting: Verify the command and args are correct and any required env vars are set.
Type errors: Rerun npx agentmark generate-types --root-dir agentmark > agentmark.types.ts to regenerate types.

Next steps

Running prompts: Use the client to run prompts
Tools and agents: Register and use tools
MCP integration: Connect MCP servers
Type safety: Add TypeScript types
