The AgentMark client is your central configuration for running, testing, and observing your prompts and agents. Configure it once in agentmark.client.ts, then use it across our CLI, the AgentMark platform, and your app.

What It Does

The client connects your prompts to:
  • AI models (OpenAI, Anthropic, etc.)
  • Tools your prompts can call
  • Evaluations for testing
  • MCP servers for extended capabilities
  • Prompt loading (local dev or cloud)
  • SDK adapters (Vercel AI SDK v4, Mastra, etc.)

Basic Configuration

This file is created automatically when you run npm create agentmark:
// agentmark.client.ts
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark/ai-sdk-v4-adapter";
import { AgentMarkSDK } from "@agentmark/sdk";
import { openai } from '@ai-sdk/openai';

function createModelRegistry() {
  const modelRegistry = new VercelAIModelRegistry()
    .registerModels(["gpt-4o"], (name) => openai(name))
    .registerModels(["dall-e-3"], (name) => openai.image(name))
    .registerModels(["tts-1-hd"], (name) => openai.speech(name));
  return modelRegistry;
}

function createClient() {
  const sdk = new AgentMarkSDK({
    baseUrl: process.env.AGENTMARK_BASE_URL || 'http://localhost:9418',
    apiKey: process.env.AGENTMARK_API_KEY || '',
    appId: process.env.AGENTMARK_APP_ID || ''
  });

  return createAgentMarkClient({
    loader: sdk.getFileLoader(),
    modelRegistry: createModelRegistry()
  });
}

export const client = createClient();

Registering Models

Add models from your AI SDK:
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o", "gpt-4o-mini"], (name) => openai(name))
  .registerModels(["claude-3-5-sonnet-20241022"], (name) => anthropic(name))
  .registerModels(["dall-e-3"], (name) => openai.image(name))
  .registerModels(["tts-1-hd"], (name) => openai.speech(name));
Reference in prompts:
---
text_config:
  model_name: gpt-4o
---

Registering Tools

Add callable tools for prompts:
import { VercelAIToolRegistry } from "@agentmark/ai-sdk-v4-adapter";

const toolRegistry = new VercelAIToolRegistry()
  .register('search_knowledgebase', async ({ query }) => {
    // Stubbed result; query your real knowledge base here
    return { articles: [{ topic: 'shipping', content: '3-5 days' }] };
  })
  .register('get_weather', async ({ location }) => {
    // Stubbed result; call a real weather API here
    return { temp: 72, condition: 'sunny' };
  });

// Pass to client
createAgentMarkClient({
  loader: fileLoader,
  modelRegistry,
  toolRegistry
});
Use in prompts:
---
text_config:
  tools:
    - search_knowledgebase
---
Learn more about tools →
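Tool handlers are ordinary async functions, so they are easy to unit-test if you keep the logic in a plain helper. A minimal sketch, assuming the same ({ args }) => result handler shape as the examples above (the get_order_status tool name and data are hypothetical):

```typescript
// Hypothetical order-status tool. The lookup lives in a plain,
// synchronous helper; the async wrapper is what you would register.
type OrderStatus = "shipped" | "processing" | "unknown";

function lookupOrderStatus(
  orderId: string,
  db: Record<string, OrderStatus>
): { orderId: string; status: OrderStatus } {
  // Fall back to "unknown" rather than throwing, so the model
  // receives a structured answer it can relay to the user.
  return { orderId, status: db[orderId] ?? "unknown" };
}

// Handler with the same signature as the registry examples above:
const getOrderStatus = async ({ orderId }: { orderId: string }) =>
  lookupOrderStatus(orderId, { A100: "shipped", A101: "processing" });
```

You would then register it with .register('get_order_status', getOrderStatus) alongside the other tools.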

Configuring MCP Servers

Connect to MCP servers for extended capabilities:
import { AgentMarkSDK } from "@agentmark/sdk";

const sdk = new AgentMarkSDK({
  baseUrl: 'http://localhost:9418',
  mcpServers: {
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/files"],
      env: {}
    },
    github: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-github"],
      env: {
        GITHUB_PERSONAL_ACCESS_TOKEN: process.env.GITHUB_TOKEN
      }
    }
  }
});
MCP servers are automatically available to your prompts once configured. Learn more about MCP →

Registering Evaluations

Add custom evaluations for testing:
import { EvalRegistry } from "@agentmark/prompt-core";

const evalRegistry = new EvalRegistry()
  .register('exact_match', ({ output, expectedOutput }) => {
    const match = output === expectedOutput;
    return {
      score: match ? 1 : 0,
      passed: match,
      reason: match ? undefined : `Expected ${expectedOutput}, got ${output}`
    };
  });

// Pass to client
createAgentMarkClient({
  loader: fileLoader,
  modelRegistry,
  evalRegistry
});
Use in test settings:
---
test_settings:
  evals:
    - exact_match
---
Learn more about evaluations →
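Evals are plain functions returning { score, passed, reason? }, so richer checks follow the same shape. A sketch of a case-insensitive, whitespace-normalized variant, assuming the same ({ output, expectedOutput }) signature (the fuzzy_match name is illustrative):

```typescript
// Looser comparison for when exact_match is too strict for model output:
// trims, lowercases, and collapses whitespace before comparing.
const fuzzyMatch = ({ output, expectedOutput }: { output: string; expectedOutput: string }) => {
  const norm = (s: string) => s.trim().toLowerCase().replace(/\s+/g, " ");
  const match = norm(output) === norm(expectedOutput);
  return {
    score: match ? 1 : 0,
    passed: match,
    reason: match ? undefined : `Expected "${expectedOutput}", got "${output}"`
  };
};
```

Register it the same way: .register('fuzzy_match', fuzzyMatch).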

Using the Client

Import and use throughout your application:
import { client } from './agentmark.client';
import { generateText } from 'ai';  // Vercel AI SDK

// Load prompt and format with props
const prompt = await client.loadTextPrompt('agentmark/greeting.prompt.mdx');
const input = await prompt.format({
  props: { name: 'Alice', role: 'developer' }
});

// Pass to your AI SDK (Vercel AI SDK shown here)
const result = await generateText(input);
console.log(result.text);
AgentMark supports multiple adapters (Vercel AI SDK, Mastra, custom). The flow is the same with each: load the prompt, format it with props, then pass the result to your SDK's generation function.

Environment Configuration

Local development (default):
baseUrl: 'http://localhost:9418'  // Dev server
Start the dev server:
npm run dev
Cloud deployment: Set environment variables:
AGENTMARK_BASE_URL=https://api.agentmark.co
AGENTMARK_API_KEY=your-api-key
AGENTMARK_APP_ID=your-app-id
The client automatically uses these values.
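If you prefer the defaults in one place, the || fallbacks from agentmark.client.ts can be factored into a small helper. A sketch (the envOr name is ours, not part of the SDK):

```typescript
// Resolve a config value from the environment with an explicit default.
// Using || (as the generated client does) also replaces empty strings,
// which is usually what you want for unset env vars.
function envOr(
  name: string,
  fallback: string,
  env: Record<string, string | undefined> = process.env
): string {
  return env[name] || fallback;
}

// e.g. inside createClient():
// baseUrl: envOr("AGENTMARK_BASE_URL", "http://localhost:9418")
```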

Type Safety

Add TypeScript types for autocomplete:
// agentmark.types.ts
export interface AgentMarkTypes {
  prompts: {
    'greeting': {
      props: { name: string; role: string };
      output: string;
    };
  };
  tools: {
    search_knowledgebase: {
      args: { query: string };
      returns: { articles: Array<{ topic: string; content: string }> };
    };
  };
}
Use with client:
import type { AgentMarkTypes } from './agentmark.types';
import { generateText } from 'ai';

const client = createAgentMarkClient<AgentMarkTypes>({
  loader: fileLoader,
  modelRegistry
});

// Type-checked props and autocomplete
const prompt = await client.loadTextPrompt('greeting');
const input = await prompt.format({
  props: { name: 'Alice', role: 'developer' }  // ✅ Type-checked
});

const result = await generateText(input);
Learn more about type safety →

Adapters

AgentMark supports multiple AI SDKs through adapters. Choose the one that fits your stack. Learn more about integrations →

Complete Example

// agentmark.client.ts
import { createAgentMarkClient, VercelAIModelRegistry, VercelAIToolRegistry } from "@agentmark/ai-sdk-v4-adapter";
import { AgentMarkSDK } from "@agentmark/sdk";
import { EvalRegistry } from "@agentmark/prompt-core";
import { openai } from '@ai-sdk/openai';
import type { AgentMarkTypes } from './agentmark.types';

function createModelRegistry() {
  return new VercelAIModelRegistry()
    .registerModels(["gpt-4o"], (name) => openai(name))
    .registerModels(["dall-e-3"], (name) => openai.image(name));
}

function createToolRegistry() {
  return new VercelAIToolRegistry()
    .register('search', async ({ query }) => {
      return { results: [] };
    });
}

function createEvalRegistry() {
  return new EvalRegistry()
    .register('exact_match', ({ output, expectedOutput }) => {
      const match = output === expectedOutput;
      return { score: match ? 1 : 0, passed: match };
    });
}

function createClient() {
  const sdk = new AgentMarkSDK({
    baseUrl: process.env.AGENTMARK_BASE_URL || 'http://localhost:9418',
    apiKey: process.env.AGENTMARK_API_KEY || '',
    appId: process.env.AGENTMARK_APP_ID || '',
    mcpServers: {
      filesystem: {
        command: "npx",
        args: ["-y", "@modelcontextprotocol/server-filesystem", "./data"]
      }
    }
  });

  return createAgentMarkClient<AgentMarkTypes>({
    loader: sdk.getFileLoader(),
    modelRegistry: createModelRegistry(),
    toolRegistry: createToolRegistry(),
    evalRegistry: createEvalRegistry()
  });
}

export const client = createClient();

Troubleshooting

Models not found:
  • Check model name matches prompt frontmatter
  • Verify adapter import: import { openai } from '@ai-sdk/openai'
Tools not available:
  • Ensure tool is registered in tool registry
  • Check tool name matches prompt configuration
MCP server not connecting:
  • Verify command and args are correct
  • Check environment variables are set
  • Review MCP server logs
Type errors:
  • Update agentmark.types.ts with current prompts
  • Ensure prompt names match type definitions

Next Steps