# MCP Trace Server

The `@agentmark-ai/mcp-server` package exposes your AgentMark traces to AI-powered editors through the Model Context Protocol (MCP). This enables AI assistants to query and debug your agent executions directly.
This is different from the AI Editor Integration, which helps AI editors create AgentMark files. The MCP Trace Server instead provides trace debugging capabilities, allowing AI assistants to inspect execution logs, find errors, and analyze performance.
## Overview
The MCP Trace Server provides two tools:
| Tool | Description |
| --- | --- |
| `list_traces` | List recent traces with metadata (status, latency, cost, tokens) |
| `get_trace` | Get detailed trace data with span filtering and pagination |
## Installation

```bash
npm install @agentmark-ai/mcp-server
```
## Configuration

### Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| `AGENTMARK_URL` | `http://localhost:9418` | URL of the AgentMark API server |
| `AGENTMARK_API_KEY` | - | API key for authentication (required for cloud) |
| `AGENTMARK_TIMEOUT_MS` | `30000` | Request timeout in milliseconds |
### Editor Setup
#### Claude Code

Add to your project's `.mcp.json`:

```json
{
  "mcpServers": {
    "agentmark-traces": {
      "command": "npx",
      "args": ["@agentmark-ai/mcp-server"],
      "env": {
        "AGENTMARK_URL": "http://localhost:9418"
      }
    }
  }
}
```

#### Cursor

Add to your project's `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "agentmark-traces": {
      "command": "npx",
      "args": ["@agentmark-ai/mcp-server"],
      "env": {
        "AGENTMARK_URL": "http://localhost:9418"
      }
    }
  }
}
```

#### Claude Desktop

Add to your Claude Desktop configuration (`claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "agentmark-traces": {
      "command": "npx",
      "args": ["@agentmark-ai/mcp-server"],
      "env": {
        "AGENTMARK_URL": "http://localhost:9418"
      }
    }
  }
}
```
### Cloud Configuration

For AgentMark Cloud integration, add your API key:

```json
{
  "mcpServers": {
    "agentmark-traces": {
      "command": "npx",
      "args": ["@agentmark-ai/mcp-server"],
      "env": {
        "AGENTMARK_URL": "https://api.agentmark.co",
        "AGENTMARK_API_KEY": "your-api-key"
      }
    }
  }
}
```
## Tools Reference

### list_traces
List recent traces with metadata. Returns trace IDs, names, status, latency, cost, and token counts.
Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `limit` | number | `50` | Maximum traces to return (max: 200) |
| `sessionId` | string | - | Filter traces by session ID |
| `datasetRunId` | string | - | Filter traces by dataset run ID |
| `cursor` | string | - | Pagination cursor from previous response |
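For example, a `list_traces` call that fetches the 20 most recent traces for a single session (the session ID here is a placeholder):

```json
{
  "limit": 20,
  "sessionId": "session-123"
}
```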
Response:
```json
{
  "items": [
    {
      "id": "trace-abc123",
      "name": "my-prompt",
      "status": "0",
      "latency": 1234,
      "cost": 0.05,
      "tokens": 500,
      "start": 1704067200000,
      "end": 1704067201234
    }
  ],
  "cursor": "eyJvZmZzZXQiOjUwfQ==",
  "hasMore": true
}
```
Status Values:

- `0` = OK
- `1` = Warning
- `2` = Error
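When `hasMore` is `true`, pass the returned `cursor` into the next `list_traces` call to fetch the following page (cursor value copied from the response example above):

```json
{
  "limit": 50,
  "cursor": "eyJvZmZzZXQiOjUwfQ=="
}
```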
### get_trace
Get trace summary with filtered and paginated spans. Use this to drill into specific trace details.
Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `traceId` | string | - | **Required.** The trace ID to retrieve |
| `filters` | array | - | Filter criteria for spans (see Filter Schema) |
| `limit` | number | `50` | Results per page (max: 200) |
| `cursor` | string | - | Pagination cursor from previous response |
Filter Schema:
Each filter object has three fields:
| Field | Type | Description |
| --- | --- | --- |
| `field` | string | Field to filter on |
| `operator` | string | Comparison operator |
| `value` | string \| number | Value to compare against |
Supported Fields and Operators:
| Field | Operators | Description |
| --- | --- | --- |
| `status` | `eq` | Span status (`"0"` = ok, `"1"` = warning, `"2"` = error) |
| `duration` | `gt`, `gte`, `lt`, `lte` | Span duration in milliseconds |
| `name` | `contains` | Span name substring match |
| `data.type` | `eq` | Span type (`"GENERATION"`, `"SPAN"`, `"EVENT"`) |
| `data.model` | `contains` | Model name substring match |
Example - Find error spans:
```json
{
  "traceId": "trace-abc123",
  "filters": [
    { "field": "status", "operator": "eq", "value": "2" }
  ]
}
```
Example - Find slow LLM generations:
```json
{
  "traceId": "trace-abc123",
  "filters": [
    { "field": "data.type", "operator": "eq", "value": "GENERATION" },
    { "field": "duration", "operator": "gt", "value": 5000 }
  ]
}
```
Response:
```json
{
  "trace": {
    "id": "trace-abc123",
    "name": "my-prompt",
    "spans": [],
    "data": {
      "status": "0",
      "latency": 1234,
      "cost": 0.05,
      "tokens": 500
    }
  },
  "spans": {
    "items": [
      {
        "id": "span-xyz789",
        "name": "llm-call",
        "duration": 892,
        "parentId": null,
        "timestamp": 1704067200000,
        "traceId": "trace-abc123",
        "status": "0",
        "data": {
          "type": "GENERATION",
          "model": "gpt-4o-mini",
          "inputTokens": 150,
          "outputTokens": 350,
          "cost": 0.05
        }
      }
    ],
    "cursor": "eyJvZmZzZXQiOjUwfQ==",
    "hasMore": false
  }
}
```
## Error Handling

All tools return structured errors with error codes:

```json
{
  "error": "Trace not found: trace-abc123",
  "code": "NOT_FOUND",
  "details": { "traceId": "trace-abc123" }
}
```
Error Codes:
| Code | Description |
| --- | --- |
| `CONNECTION_FAILED` | Cannot reach the AgentMark server |
| `INVALID_QUERY` | Malformed filter or unsupported field/operator |
| `NOT_FOUND` | Trace or resource does not exist |
| `TIMEOUT` | Request exceeded time limit |
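As a sketch, a client could branch on `code` to decide whether a failed call is worth retrying. The `ToolError` interface and `isRetryable` helper below are illustrative names, not part of the package:

```typescript
// Shape of the structured error shown above (illustrative, not exported by the package).
interface ToolError {
  error: string;
  code: "CONNECTION_FAILED" | "INVALID_QUERY" | "NOT_FOUND" | "TIMEOUT";
  details?: Record<string, unknown>;
}

// Transient failures (network, timeout) may succeed on retry;
// malformed queries and missing traces are permanent and will not.
function isRetryable(err: ToolError): boolean {
  return err.code === "CONNECTION_FAILED" || err.code === "TIMEOUT";
}

console.log(isRetryable({ error: "Request exceeded time limit", code: "TIMEOUT" })); // true
console.log(isRetryable({ error: "Trace not found", code: "NOT_FOUND" })); // false
```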
## Requirements

For local development, the MCP server connects to the local API server started by the AgentMark CLI:

1. Run `agentmark dev` to start the local development server
2. Execute prompts to generate traces
3. Use your AI editor to query and debug traces
## Programmatic Usage

You can also use the server programmatically:

```typescript
import { createMCPServer, runServer } from '@agentmark-ai/mcp-server';

// Run with stdio transport (for MCP clients)
await runServer();

// Or create a server instance for custom transport
const server = await createMCPServer();
```
## Related Documentation

- **AI Editor Integration** - Help AI editors create AgentMark files
- **Traces and Logs** - Learn about AgentMark tracing
- **CLI Reference** - Start the dev server with `agentmark dev`