Prerequisites
- Node.js 18+
- An LLM provider API key (OpenAI or Anthropic, depending on your adapter choice)
Create Your AgentMark App
Run the interactive setup command. The CLI will guide you through the following prompts:
| Prompt | Description |
|---|---|
| Project folder | Where to create your project (default: my-agentmark-app) |
| Language | TypeScript or Python |
| Adapter | Your preferred AI framework (AI SDK, Claude Agent SDK, Mastra, or Pydantic AI) |
| API key | Your OpenAI or Anthropic API key (can be skipped and added later) |
| Deployment mode | Choose AgentMark Cloud to sync with the platform |
| IDE | Optionally configure MCP servers for your editor |
TypeScript adapters:
- AI SDK (Vercel) — Recommended for most TypeScript projects
- Claude Agent SDK — For Anthropic-native agent workflows
- Mastra — For Mastra framework users

Python adapters:
- Pydantic AI — For Python projects
Select AgentMark Cloud as the deployment mode. This connects your project to the AgentMark platform for prompt management, datasets, tracing, experiments, and alerts.
Set Up Environment Variables
After setup, your `.env` file will contain your provider API key and your AgentMark Cloud credentials. The exact variables depend on your adapter (AI SDK / Mastra / Pydantic AI vs. Claude Agent SDK).

To get your AgentMark Cloud credentials:

- Sign in at app.agentmark.co
- Create a new organization and app (or select an existing one)
- Navigate to Settings to find your API key and App ID
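As an illustration, a `.env` file for this setup might look like the sketch below. The AgentMark variable names shown are hypothetical placeholders; use whatever names the CLI actually wrote into your generated `.env`.

```shell
# Provider API key (which one you need depends on your adapter choice)
OPENAI_API_KEY=sk-...
# or, for the Claude Agent SDK:
# ANTHROPIC_API_KEY=sk-ant-...

# AgentMark Cloud credentials from Settings (variable names are
# illustrative placeholders; check your generated .env for the real ones)
AGENTMARK_API_KEY=your-api-key
AGENTMARK_APP_ID=your-app-id
```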
Start the Development Server
From your project directory, start the dev server (`npm run dev` for TypeScript projects; Python projects have an equivalent command). This launches three local services:
- API server (port 9418) — serves prompts and collects traces
- Webhook server (port 9417) — executes prompts via your adapter
- Dashboard (port 3000) — view traces, sessions, and requests in your browser
Run Your First Prompt
In a separate terminal:

- Run one of the example prompts: `npm run prompt <file>`
- Run an experiment against a test dataset: `npm run experiment <file>`
- Or build the prompts (`agentmark build`) and run the demo application (`npm run demo`)
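For reference, these map to the project's npm scripts (see the Available Scripts table). The prompt file path below is illustrative only:

```shell
# Run a single prompt with its test props (path is an example)
npm run prompt agentmark/example.prompt.mdx

# Run the same prompt against its test dataset
npm run experiment agentmark/example.prompt.mdx

# Build compiled prompts, then run the demo app
agentmark build
npm run demo
```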
Connect to the Platform
To sync your prompts with the AgentMark platform:

- Commit and push your project to a Git repository
- In the AgentMark platform, navigate to your app
- Connect your repository

Once synced, you can edit prompts in the platform’s visual editor, and changes automatically deploy to your application via the cloud loader.
What’s in Your Project
The layout below is for TypeScript projects; Python projects contain their own equivalents.

| File / Directory | Purpose |
|---|---|
| `agentmark/` | Prompt templates (`.prompt.mdx`) and test datasets (`.jsonl`) |
| `agentmark.client.ts` | Client configuration — models, tools, and loader setup |
| `agentmark.json` | Project configuration (models, evals, schema) |
| `agentmark.types.ts` | Auto-generated TypeScript types for your prompts |
| `dev-entry.ts` | Development server entry point (customizable) |
| `index.ts` | Example application entry point |
| `.env` | Environment variables (API keys, credentials) |
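The test datasets in `agentmark/` are plain JSONL: one JSON object per line. As a minimal sketch of that format, independent of the AgentMark SDK (the file name and the `input` field here are illustrative, not a required schema):

```typescript
// Sketch: parsing a JSONL test dataset. The file name and the `input`
// field are hypothetical examples, not an AgentMark requirement.
import { writeFileSync, readFileSync } from "node:fs";

// Write a tiny example dataset: one JSON object per line.
writeFileSync(
  "example.dataset.jsonl",
  ['{"input": "Hello"}', '{"input": "Goodbye"}'].join("\n")
);

// Parse: split on newlines, skip blank lines, JSON.parse each line.
const cases = readFileSync("example.dataset.jsonl", "utf8")
  .split("\n")
  .filter((line) => line.trim() !== "")
  .map((line) => JSON.parse(line) as { input: string });

console.log(cases.map((c) => c.input)); // [ 'Hello', 'Goodbye' ]
```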
Available Scripts
| Script | Command | Description |
|---|---|---|
| `dev` | `npm run dev` | Start the local development server with dashboard |
| `prompt` | `npm run prompt <file>` | Run a single prompt with test props |
| `experiment` | `npm run experiment <file>` | Run a prompt against its test dataset |
| `build` | `agentmark build` | Compile prompts for standalone use |
| `demo` | `npm run demo` | Run the example application (requires build first) |
IDE Integration
If you selected an IDE during setup, your project includes MCP server configuration that gives your AI assistant access to AgentMark documentation and trace debugging. Supported editors: Claude Code, Cursor, VS Code, and Zed.

Next Steps
- Core Concepts — understand organizations, apps, and branches
- Writing Prompts — learn how to create and configure prompts
- Testing & Evals — test prompts with datasets and evaluations
- Observability — monitor traces, costs, and performance
Have Questions?
We’re here to help! Choose the best way to reach us:
- Join our Discord community for quick answers and discussions
- Email us at [email protected] for support
- Schedule an Enterprise Demo to learn about our business solutions