Documentation Index

Fetch the complete documentation index at: https://docs.agentmark.co/llms.txt

Use this file to discover all available pages before exploring further.

Quickstart

Choose your mode and follow the steps below.

Prerequisites

  • Node.js 18+
  • An LLM provider API key (OpenAI or Anthropic)

Step 1: Create your project

npm create agentmark@latest -- --cloud
The CLI walks you through selecting your language, adapter, and IDE setup. Choose AgentMark Cloud as the deployment mode.
You can skip the interactive prompts by passing flags directly:
npm create agentmark@latest -- --cloud --typescript --adapter ai-sdk --client skip
Available flags:
  • --typescript / --python — language
  • --adapter <name> — TypeScript: ai-sdk, claude-agent-sdk, mastra. Python: pydantic-ai, claude-agent-sdk.
  • --cloud / --self-host — deployment mode (Cloud vs Local)
  • --client <ide> — claude-code, cursor, vscode, zed, or skip
  • --path <dir> — project directory
  • --api-key <key> — LLM provider API key
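For example, the flags above can be combined to scaffold a Python project non-interactively (the path and API key values here are placeholders):

```shell
# Create a Python project with the pydantic-ai adapter and Cursor setup,
# skipping all interactive prompts. Replace the path and key with your own.
npm create agentmark@latest -- --cloud --python --adapter pydantic-ai \
  --client cursor --path my-agent --api-key sk-...
```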

Step 2: Create your app in Cloud

  1. Commit and push your project to a Git repository
  2. In the AgentMark Dashboard, click Create App and select your GitHub repository
  3. Add your LLM provider API key in Settings > Environment Variables
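The commit-and-push step can be done from the terminal; a minimal sketch, assuming a new project directory (the remote URL is a placeholder for your own GitHub repository):

```shell
# Initialize the project as a Git repository and push it to GitHub
# so AgentMark Cloud can sync it. The remote URL is a placeholder.
cd my-agentmark-app
git init
git add .
git commit -m "Initial AgentMark project"
git remote add origin git@github.com:your-org/my-agentmark-app.git
git push -u origin main
```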
The Apps list shows every app in your organization with its name, linked Git repository, and last-sync status. Click Create App in the top right to start a new one; the modal walks you through selecting a GitHub repository and naming the app.

Once your repository is connected, AgentMark Cloud syncs your prompt files and deploys your handler automatically.

Step 3: Run your first prompt

Open a prompt in the Dashboard and click Run. AgentMark Cloud executes it on your deployed handler and streams results back in real time.

The prompt editor shows your .prompt.mdx content, the selected model, input-variable fields, and a streaming output pane.

Step 4: Run an experiment

Experiments test a prompt against a dataset and score the results with evaluators. Your project includes example prompts and datasets ready to go.
  1. Navigate to the party-planner prompt in the Dashboard
  2. Open the Experiments tab
  3. Click Run Experiment
  4. Review the results — scores, pass rates, and individual outputs
The experiment results page shows each dataset row with its input, the AI output, the expected output, and evaluator pass/fail scores — plus aggregate metrics like average score, latency, total cost, and total tokens across the run.

Step 5: View your traces

Every prompt and experiment execution is automatically traced. Navigate to the Traces page to see the full execution timeline — span details, token usage, cost, and latency.

The Traces page lists every prompt and experiment execution with columns for name, status, latency, cost, tokens, spans, tags, and timestamp. Filter by time range from the toolbar, or click a row to drill into the full span tree for that execution.

What’s in your project

  • agentmark/ — Prompt templates (.prompt.mdx) and test datasets (.jsonl)
  • agentmark.client.ts — Client configuration: models, tools, and loader setup
  • agentmark.json — Project configuration (models, scores, schema)
  • agentmark.types.ts — Auto-generated TypeScript types for your prompts
  • handler.ts — Execution handler for AgentMark Cloud (Cloud mode only)
  • dev-entry.ts — Development server entry point (customizable)
  • index.ts — Example application entry point
  • .env — Environment variables (API keys)
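The .env file holds your provider credentials. A sketch with placeholder values, assuming the conventional variable names for each SDK (confirm the exact names your adapter expects):

```shell
# .env -- example provider keys (placeholder values; the variable names
# shown are the conventional ones for each SDK, not confirmed here)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```

Keep this file out of version control; only the key for the provider you selected during setup is required.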

Next steps

Build Prompts

Create prompts with tools, structured output, and components

Evaluate

Test your prompts with datasets and automated evaluators

Observe

Monitor traces, sessions, and costs in production

Integrations

Connect with Vercel AI SDK, Pydantic AI, Mastra, and more

Have Questions?

We’re here to help! Choose the best way to reach us: