AgentMark projects are configured through two main files:
agentmark.json for project-level settings, and agentmark.client.ts (or agentmark_client.py) for runtime configuration like models, tools, and loaders.
agentmark.json
The agentmark.json file lives at your project root and configures your AgentMark application. It is read by both the CLI and AgentMark Cloud.
Basic example
agentmark.json
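A sketch of a minimal configuration, using the defaults described in the property sections below (a freshly scaffolded project uses these values):

```json
{
  "agentmarkPath": ".",
  "version": "2.0.0",
  "mdxVersion": "1.0"
}
```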
Configuration properties
$schema (optional)
Points to the JSON Schema for editor autocompletion and validation.
agentmarkPath (required)
The base directory (relative to your project root) where AgentMark looks for the agentmark/ folder containing prompts, components, and datasets. Projects scaffolded with npm create agentmark@latest use "." — the agentmark/ directory at the project root.
version (required)
The AgentMark configuration version. Use "2.0.0" for new projects. AgentMark Cloud uses this to choose the storage folder for deployed prompts — versions >= "2.0.0" use the agentmark/ folder, earlier versions use the legacy puzzlet/ folder.
mdxVersion (optional)
The prompt format version. Accepts "1.0" (current) or "0.0" (legacy). Use "1.0" for new projects.
builtInModels (optional)
An array of model IDs allowed in prompts. When set and non-empty, prompt-core rejects any prompt whose model_name is not in the list. IDs use the provider/model format (e.g., openai/gpt-4o) so the adapter’s model registry can auto-resolve the provider when you call .registerProviders({ openai, anthropic }). Pricing and settings for these models come from the bundled AgentMark model registry.
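For example, to allow only a couple of models (the second ID is illustrative; use pull-models to get exact IDs for your providers):

```json
{
  "builtInModels": [
    "openai/gpt-4o",
    "anthropic/claude-3-5-sonnet"
  ]
}
```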
Use the pull-models CLI command to interactively add models from supported providers — it emits the correct provider/model format automatically.
evals (deprecated)
Use scores instead. The evals field listed evaluation function names but did not include schema definitions. It is still supported for backward compatibility.
scores (optional)
Define score schemas for evaluation and human annotation. Each entry declares a score name and its type (boolean, numeric, or categorical). These schemas are synced to AgentMark Cloud through the deployment pipeline and used by both the annotation UI and the experiment runner. Scores replace the deprecated evals option. See Evaluations for details.
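The exact shape is defined by the JSON Schema referenced in $schema; as a rough, hypothetical sketch of the idea (field names and the array-of-objects layout here are assumptions, not the canonical schema):

```json
{
  "scores": [
    { "name": "accuracy", "type": "numeric" },
    { "name": "helpful", "type": "boolean" },
    { "name": "tone", "type": "categorical" }
  ]
}
```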
modelSchemas (optional)
Define custom model configurations with settings, pricing, and UI controls. Use this for models not covered by builtInModels, or to customize settings for existing models.
mcpServers (optional)
Configure Model Context Protocol (MCP) servers that your prompts can reference as tools. Servers listed here are registered with the adapter at runtime (AI SDK, Mastra, Claude Agent SDK) and become available to prompts that reference them as mcp://<server-name>/<tool> in the tools: frontmatter.
AgentMark supports two ways of connecting to an MCP server:
- URL / SSE, for remote MCP servers accessible via HTTP
- Stdio, for local MCP servers launched as a subprocess
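As a rough sketch of a remote (URL / SSE) entry, keyed by the server name used in mcp:// references (the server name, field names, and URL below are assumptions, not the canonical schema):

```json
{
  "mcpServers": {
    "docs": {
      "url": "https://example.com/mcp"
    }
  }
}
```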
handler (optional)
Path to your handler file for managed code deployment. AgentMark Cloud bundles and deploys this file so prompts can be executed from the Dashboard. The file extension determines the runtime (.py → Python, anything else → Node.js).
agentmark.json
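For example, a TypeScript project deploying a root-level handler might set (a minimal sketch):

```json
{
  "handler": "handler.ts"
}
```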
If handler is not set, AgentMark Cloud looks for handler.py at the repository root first (Python runtime), then falls back to handler.ts (Node.js runtime). If neither is found, managed code deployment is skipped. Projects scaffolded with npm create agentmark@latest --cloud write this field automatically (handler.ts for TypeScript, handler.py for Python).
Full example
An illustrative config showing every top-level field (not all are written by the scaffolder — see each field’s section above for when it applies):
agentmark.json
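A sketch along those lines, combining the fields documented above ($schema is omitted here, the entry shapes for scores and mcpServers are assumptions, and modelSchemas is left out because its structure is best taken from the schema):

```json
{
  "agentmarkPath": ".",
  "version": "2.0.0",
  "mdxVersion": "1.0",
  "builtInModels": ["openai/gpt-4o"],
  "scores": [
    { "name": "accuracy", "type": "numeric" }
  ],
  "mcpServers": {
    "docs": { "url": "https://example.com/mcp" }
  },
  "handler": "handler.ts"
}
```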
Client configuration
The client configuration file (agentmark.client.ts or agentmark_client.py) defines your runtime setup: which models to use, what tools are available, how to load prompts, and which evaluations to run.
This file is auto-generated by npm create agentmark@latest and can be customized for your project.
The scaffolded client supports two modes:
- Cloud mode
- Self-hosted mode
In Cloud mode, prompts are loaded from the AgentMark API in production and from your local dev server during development:
agentmark.client.ts
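The generated file's exact contents depend on the adapter you chose; the TypeScript sketch below only illustrates the cloud-versus-dev decision it encodes, using placeholder names rather than the real AgentMark client API:

```typescript
// Illustrative sketch only: shows how a Cloud-mode client typically decides
// where prompts are loaded from. These names are placeholders, not the
// AgentMark SDK surface.
const isProduction = process.env.NODE_ENV === "production";

// Documented defaults: the local dev server runs at http://localhost:9418
// and can be overridden with AGENTMARK_BASE_URL.
const devServerUrl = process.env.AGENTMARK_BASE_URL ?? "http://localhost:9418";

// Cloud credentials come from the AgentMark Dashboard settings.
const promptSource = isProduction
  ? {
      kind: "cloud" as const,
      apiKey: process.env.AGENTMARK_API_KEY,
      appId: process.env.AGENTMARK_APP_ID,
    }
  : { kind: "dev" as const, baseUrl: devServerUrl };

export default promptSource;
```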
Environment variables
| Variable | Required | Description |
|---|---|---|
| AGENTMARK_API_KEY | Cloud mode | API key from AgentMark Dashboard settings |
| AGENTMARK_APP_ID | Cloud mode | App ID from AgentMark Dashboard settings |
| AGENTMARK_BASE_URL | No | Override the local dev server URL in scaffolded clients (default: http://localhost:9418) |
| OPENAI_API_KEY | Depends on adapter | OpenAI API key for AI SDK, Mastra, or Pydantic AI adapters |
| ANTHROPIC_API_KEY | Depends on adapter | Anthropic API key for Claude Agent SDK adapter |
Have Questions?
We’re here to help! Choose the best way to reach us:
- Email us at hello@agentmark.co for support
- Schedule an Enterprise Demo to learn about our business solutions