The AgentMark CLI automatically detects and runs Python projects with the appropriate dev server configuration.
## Starting the Dev Server

The CLI detects Python projects and spawns the Python webhook server alongside the API server and UI.
### Project Detection

The CLI identifies Python projects by checking for:

- `pyproject.toml` - Python project manifest
- `agentmark_client.py` - AgentMark client configuration
- `.agentmark/dev_server.py` - Auto-generated entry point

If any of these files exist, the CLI runs in Python mode.
### Virtual Environment Detection

The CLI automatically detects and uses virtual environments, in this priority order:

1. `.venv/bin/python` (or `.venv\Scripts\python.exe` on Windows)
2. `venv/bin/python` (or `venv\Scripts\python.exe` on Windows)
3. System `python`

When a virtual environment is found, the CLI prints:

```
Using virtual environment: .venv/
```
### Entry Point Resolution

The dev server entry point is resolved in this order:

| Location | Description |
|---|---|
| `dev_server.py` | Custom dev server (project root) |
| `.agentmark/dev_server.py` | Auto-generated server |
## Custom Dev Server

Create a `dev_server.py` in your project root to customize the webhook server:

```python
"""Custom webhook server for AgentMark development."""
import argparse
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent))

from agentmark_pydantic_ai_v0 import create_webhook_server
from agentmark_client import client

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--webhook-port", type=int, default=9417)
    parser.add_argument("--api-server-port", type=int, default=9418)
    args = parser.parse_args()

    create_webhook_server(client, args.webhook_port, args.api_server_port)
```
## Auto-Generated Server

When you run `npm create agentmark@latest`, an entry point is created at `.agentmark/dev_server.py`:

```python
"""Auto-generated webhook server for AgentMark development."""
import argparse
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent))

from agentmark_pydantic_ai_v0 import create_webhook_server
from agentmark_client import client

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--webhook-port", type=int, default=9417)
    parser.add_argument("--api-server-port", type=int, default=9418)
    args = parser.parse_args()

    create_webhook_server(client, args.webhook_port, args.api_server_port)
```
## Environment Variables

The dev server sets the following environment variables:

| Variable | Value | Description |
|---|---|---|
| `PYTHONDONTWRITEBYTECODE` | `1` | Prevents `__pycache__` creation |
| `PYTHONUNBUFFERED` | `1` | Ensures real-time output |
| `AGENTMARK_BASE_URL` | `http://localhost:{api_port}` | API server URL for telemetry |
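A custom dev server can read these values at runtime via `os.environ`. A minimal sketch; `telemetry_base_url` is a hypothetical helper, and the localhost fallback is an assumption based on the default API port:

```python
import os

def telemetry_base_url(default_port: int = 9418) -> str:
    """Return the API server URL set by the CLI, or a localhost default."""
    return os.environ.get("AGENTMARK_BASE_URL", f"http://localhost:{default_port}")
```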
## Server Architecture

When you run `agentmark dev`, three servers start:

```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   API Server    │────▶│ Webhook Server  │────▶│    UI Server    │
│   (port 9418)   │     │   (port 9417)   │     │   (port 3000)   │
│                 │     │                 │     │                 │
│  Telemetry API  │     │ Python Process  │     │     Next.js     │
│  Trace Storage  │     │ Prompt Executor │     │    Dashboard    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
The Python webhook server:

- Receives prompt execution requests from the CLI
- Uses your `agentmark_client.py` configuration
- Executes prompts via the configured adapter (Pydantic AI or Claude Agent SDK)
- Returns streaming or non-streaming responses
## Port Configuration

Override the default ports with CLI options:

```bash
agentmark dev --webhook-port 8080 --api-port 8081 --app-port 8082
```

| Option | Default | Description |
|---|---|---|
| `--webhook-port` | 9417 | Webhook server port |
| `--api-port` | 9418 | API server port |
| `--app-port` | 3000 | UI server port |
## Webhook Handler

The webhook server implements two event types:

### prompt-run

Executes a single prompt:

```json
{
  "type": "prompt-run",
  "data": {
    "ast": { ... },
    "options": {
      "shouldStream": true
    },
    "customProps": { ... }
  }
}
```
### dataset-run

Executes a prompt across a dataset:

```json
{
  "type": "dataset-run",
  "data": {
    "ast": { ... },
    "experimentId": "exp-123",
    "datasetPath": "./datasets/test.yaml"
  }
}
```
## Running Prompts

With the dev server running, execute prompts from another terminal:

```bash
npm run prompt ./agentmark/party-planner.prompt.mdx
```

Or run experiments:

```bash
npm run experiment ./agentmark/party-planner.prompt.mdx
```
## Troubleshooting

### Virtual Environment Not Found

If you see "python not found" errors:

```bash
# Create a virtual environment
python -m venv .venv

# Activate it
source .venv/bin/activate  # macOS/Linux
.venv\Scripts\activate     # Windows

# Install dependencies
pip install -e ".[dev]"
```
### Module Not Found

Ensure dependencies are installed in the correct virtual environment:

```bash
pip install agentmark-pydantic-ai-v0 agentmark-prompt-core python-dotenv
```
### Port Already in Use

If the default ports are busy, specify alternatives:

```bash
agentmark dev --webhook-port 9500 --api-port 9501
```
### agentmark_client.py Not Found

The CLI requires `agentmark_client.py` in your project root:

```bash
# Create a new project
npm create agentmark@latest

# Or manually create agentmark_client.py
```
## Adapter-Specific Considerations

### Pydantic AI

The Pydantic AI dev server uses aiohttp for async HTTP handling:

```python
from agentmark_pydantic_ai_v0 import create_webhook_server

create_webhook_server(client, webhook_port=9417, api_server_port=9418)
```
### Claude Agent SDK

The Claude Agent SDK dev server handles agentic execution:

```python
from agentmark_claude_agent_sdk import create_webhook_server

create_webhook_server(client, webhook_port=9417, api_server_port=9418)
```
## Next Steps

- **Python Overview** - Python SDK and project setup
- **Pydantic AI** - Type-safe LLM interactions
- **Claude Agent SDK** - Agentic task execution
- **Running Prompts** - Execute prompts from the CLI
## Have Questions?

We're here to help! Choose the best way to reach us: