AgentMark projects are configured through two main files: agentmark.json for project-level settings, and agentmark.client.ts (or agentmark_client.py) for runtime configuration like models, tools, and loaders.

agentmark.json

The agentmark.json file lives at your project root and configures your AgentMark application. It is read by both the CLI and the platform.

Basic Example

agentmark.json
{
  "$schema": "https://unpkg.com/@agentmark-ai/cli/agentmark.schema.json",
  "version": "2.0.0",
  "mdxVersion": "1.0",
  "agentmarkPath": "/",
  "builtInModels": ["gpt-4o"]
}

Configuration Properties

$schema (optional)

Points to the JSON Schema for editor autocompletion and validation.
"$schema": "https://unpkg.com/@agentmark-ai/cli/agentmark.schema.json"

agentmarkPath (required)

The base directory where AgentMark looks for prompts, components, and datasets. Default is "/", meaning the agentmark/ directory at your project root.
"agentmarkPath": "/"
In a monorepo, set this to the relative path of the package containing your AgentMark files (e.g., "/packages/ai").

version (required)

The AgentMark configuration version. Use "2.0.0" for new projects.

mdxVersion (optional)

The prompt format version. Use "1.0" for the current format.

builtInModels (optional)

An array of model names that are available for use in prompts. These models are pre-configured with pricing and settings in the platform.
"builtInModels": ["gpt-4o", "gpt-4o-mini", "claude-sonnet-4-20250514"]
Use the pull-models CLI command to interactively add models from supported providers:
agentmark pull-models
See Adding Models for details.

evals (optional)

An array of evaluation function names that correspond to evaluations registered in your client’s EvalRegistry. Listing them here makes them available for selection in the platform editor.
"evals": ["correctness", "hallucination", "relevance"]
These names must match what you register in your client config:
evalRegistry.register("correctness", (params) => {
  return { score: 0.9, label: "correct", reason: "..." };
});
See Evaluations for details.
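The return shape an eval function produces (score, label, reason) can be illustrated with a self-contained sketch. The exact-match logic, parameter names, and helper name below are illustrative assumptions, not the library's API:

```typescript
// Illustrative "correctness" eval: exact-match scoring.
// The params shape and helper name are assumptions for this sketch.
type EvalResult = { score: number; label: string; reason: string };

function correctnessEval(params: { output: string; expected: string }): EvalResult {
  const match = params.output.trim() === params.expected.trim();
  return {
    score: match ? 1 : 0,
    label: match ? "correct" : "incorrect",
    reason: match
      ? "Output matches the expected answer."
      : "Output differs from the expected answer.",
  };
}
```

A function like this would be registered under the matching name (e.g. "correctness") so the platform editor can offer it for selection.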

modelSchemas (optional)

Define custom model configurations with settings, pricing, and UI controls. Use this for models not covered by builtInModels, or to customize settings for existing models.
"modelSchemas": {
  "my-custom-model": {
    "label": "My Custom Model",
    "cost": {
      "inputCost": 0.01,
      "outputCost": 0.03,
      "unitScale": 1000000
    },
    "settings": {
      "temperature": {
        "label": "Temperature",
        "order": 1,
        "default": 0.7,
        "minimum": 0,
        "maximum": 2,
        "multipleOf": 0.1,
        "type": "slider"
      }
    }
  }
}
See Adding Models for the full schema reference.
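The cost block reads as a price quoted per unitScale tokens (per million tokens in the example above). A minimal sketch of the implied arithmetic; the helper name and this interpretation of unitScale are assumptions:

```typescript
// Illustrative cost estimate, assuming inputCost/outputCost are quoted
// per `unitScale` tokens (per 1,000,000 tokens in the schema above).
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  cost: { inputCost: number; outputCost: number; unitScale: number }
): number {
  return (
    (inputTokens / cost.unitScale) * cost.inputCost +
    (outputTokens / cost.unitScale) * cost.outputCost
  );
}

// 500k input + 100k output tokens against the example schema
// yields roughly $0.008:
// estimateCost(500_000, 100_000, { inputCost: 0.01, outputCost: 0.03, unitScale: 1_000_000 })
```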

mcpServers (optional)

Configure Model Context Protocol (MCP) servers that your prompts can use as tools. Servers listed here are available for selection in the platform editor.
For remote MCP servers accessible via HTTP:
"mcpServers": {
  "docs": {
    "url": "https://example.com/mcp",
    "headers": {
      "Authorization": "Bearer your-token"
    }
  }
}
See MCP Integration for usage in prompts.

Full Example

agentmark.json
{
  "$schema": "https://unpkg.com/@agentmark-ai/cli/agentmark.schema.json",
  "version": "2.0.0",
  "mdxVersion": "1.0",
  "agentmarkPath": "/",
  "builtInModels": ["gpt-4o", "gpt-4o-mini", "claude-sonnet-4-20250514"],
  "evals": ["correctness", "hallucination"],
  "mcpServers": {
    "docs": {
      "url": "https://example.com/mcp"
    }
  },
  "modelSchemas": {
    "my-fine-tuned-model": {
      "label": "My Fine-tuned Model",
      "cost": {
        "inputCost": 0.005,
        "outputCost": 0.015,
        "unitScale": 1000000
      },
      "settings": {
        "temperature": {
          "label": "Temperature",
          "order": 1,
          "default": 0.7,
          "minimum": 0,
          "maximum": 2,
          "multipleOf": 0.1,
          "type": "slider"
        }
      }
    }
  }
}

Client Configuration

The client configuration file (agentmark.client.ts or agentmark_client.py) defines your runtime setup: which models to use, what tools are available, how to load prompts, and which evaluations to run. This file is auto-generated by npm create agentmark@latest and can be customized for your project.
In cloud mode, prompts are loaded from the AgentMark API in production and from your local dev server during development:
agentmark.client.ts
import { ApiLoader } from "@agentmark-ai/loader-api";

const loader = process.env.NODE_ENV === 'development'
  ? ApiLoader.local({
      baseUrl: process.env.AGENTMARK_BASE_URL || 'http://localhost:9418'
    })
  : ApiLoader.cloud({
      apiKey: process.env.AGENTMARK_API_KEY!,
      appId: process.env.AGENTMARK_APP_ID!,
    });

Environment Variables

| Variable | Required | Description |
|---|---|---|
| AGENTMARK_API_KEY | Cloud mode | API key from AgentMark platform settings |
| AGENTMARK_APP_ID | Cloud mode | App ID from AgentMark platform settings |
| AGENTMARK_BASE_URL | No | Override the local dev server URL (default: http://localhost:9418) |
| OPENAI_API_KEY | Depends on adapter | OpenAI API key for the AI SDK, Mastra, or Pydantic AI adapters |
| ANTHROPIC_API_KEY | Depends on adapter | Anthropic API key for the Claude Agent SDK adapter |
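Cloud-mode variables can be validated at startup so a missing key fails fast rather than at the first API call. A minimal sketch; the helper name is an assumption, only the variable names come from the table above:

```typescript
// Minimal sketch: fail fast if a required environment variable is unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// In cloud mode, before constructing ApiLoader.cloud:
// const apiKey = requireEnv("AGENTMARK_API_KEY");
// const appId = requireEnv("AGENTMARK_APP_ID");
```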
