Tracing is set up in your application code. See the Development documentation for setup instructions.

Understanding Traces
A trace represents the complete execution of a prompt, including all of its steps, tool calls, and metadata. Each trace contains:
- Execution Timeline - See exactly when each step occurred and how long it took.
- Token Usage - Track input tokens, output tokens, and total tokens consumed.
- Costs - Monitor spending on a per-request basis.
- Tool Calls - View all tool executions, their parameters, and results.
- Custom Metadata - Add context like user IDs, session IDs, and custom attributes.
- Error Information - Detailed error messages and stack traces when issues occur.

Collected Spans
AgentMark records the following OpenTelemetry spans:

| Span Type | Description | Attributes |
|---|---|---|
| ai.inference | Full length of the inference call | operation.name, ai.operationId, ai.prompt, ai.response.text, ai.response.toolCalls, ai.response.finishReason |
| ai.toolCall | Individual tool executions | operation.name, ai.operationId, ai.toolCall.name, ai.toolCall.args, ai.toolCall.result |
| ai.stream | Streaming response data | ai.response.msToFirstChunk, ai.response.msToFinish, ai.response.avgCompletionTokensPerSecond |
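These spans are emitted once telemetry is enabled on each model call. Below is a minimal sketch of the options involved, assuming the AI SDK (Vercel) adapter; the buildTelemetry helper and the id/metadata values are illustrative, while isEnabled, functionId, and metadata mirror the shape of the AI SDK's experimental_telemetry option:

```typescript
// Sketch of the telemetry options passed to an AI SDK call such as
// generateText({ ..., experimental_telemetry: buildTelemetry(...) }).
// buildTelemetry is an illustrative helper, not an AgentMark API.
interface TelemetryOptions {
  isEnabled: boolean;
  functionId: string;                // drives the ai.telemetry.functionId attribute
  metadata: Record<string, string>;  // surfaces as ai.telemetry.metadata.* attributes
}

function buildTelemetry(
  functionId: string,
  metadata: Record<string, string> = {}
): TelemetryOptions {
  return { isEnabled: true, functionId, metadata };
}

const telemetry = buildTelemetry("support-agent", {
  userId: "user-123",
  sessionId: "session-456",
});
```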
LLM Span Attributes
Each LLM span contains attributes that vary slightly depending on the adapter you use. The table below shows common attributes across integrations:

- AI SDK (Vercel)
- Claude Agent SDK
| Attribute | Description |
|---|---|
| ai.model.id | Model identifier |
| ai.model.provider | Model provider name |
| ai.usage.promptTokens | Number of prompt tokens |
| ai.usage.completionTokens | Number of completion tokens |
| ai.settings.maxRetries | Maximum retry attempts |
| ai.telemetry.functionId | Function identifier |
| ai.telemetry.metadata.* | Custom metadata |
| ai.response.text | Response text |
| ai.response.toolCalls | Tool calls array |
| ai.response.finishReason | Finish reason |
Custom metadata is also recorded under agentmark.metadata.* attributes.
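As a mental model, each custom metadata key is flattened onto the span under a dotted prefix. A sketch of that flattening (toSpanAttributes is illustrative, not an AgentMark API; the ai.telemetry.metadata. prefix is the one from the table above):

```typescript
// Sketch: custom metadata keys become dotted span attributes.
// toSpanAttributes is illustrative, not an AgentMark API.
type AttrValue = string | number | boolean;

function toSpanAttributes(metadata: Record<string, AttrValue>): Record<string, AttrValue> {
  const attrs: Record<string, AttrValue> = {};
  for (const [key, value] of Object.entries(metadata)) {
    attrs[`ai.telemetry.metadata.${key}`] = value;
  }
  return attrs;
}

const attrs = toSpanAttributes({ userId: "user-123", tier: "pro" });
// attrs["ai.telemetry.metadata.userId"] === "user-123"
```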
Grouping Traces
Organize related traces together using custom grouping. This is useful for understanding complex workflows that span multiple prompt executions, such as:
- Multi-step agent workflows
- Nested component execution
- Parallel processing pipelines
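One way to group such workflows is to stamp every step with the same generated id via custom metadata. A sketch, assuming a Node runtime; the helper and key names (workflowId, step) are illustrative, not AgentMark conventions:

```typescript
import { randomUUID } from "node:crypto";

// Sketch: tag every step of a workflow with a shared id so their traces
// can be grouped together. Helper and key names are illustrative.
function workflowTagger(workflowName: string) {
  const workflowId = randomUUID(); // one id shared by all steps
  return (step: string) => ({ workflowName, workflowId, step });
}

const tag = workflowTagger("order-pipeline");
const classifyMeta = tag("classify"); // pass as custom metadata on call 1
const respondMeta = tag("respond");   // pass as custom metadata on call 2
// Both objects carry the same workflowId, so their traces group together.
```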
Graph View
For complex AI agent workflows, AgentMark provides an interactive graph visualization that shows the relationships between different components, execution flow, and dependencies. Learn more about Graph View →

Viewing Traces
Access traces in the AgentMark dashboard under the “Traces” tab. Each trace shows:
- Complete prompt execution timeline
- Tool calls and their durations
- Token usage and costs
- Custom metadata and attributes
- Error information (if any)
- Graph visualization (when graph metadata is present)
- Manual annotations for quality assessment
Filtering and Search
Find specific traces using:
- Function ID - Filter by specific prompt or function
- Session ID - View all traces in a session
- User ID - See all activity for a specific user
- Time Range - Narrow results to specific periods
- Status - Filter by success, error, or specific finish reasons
- Model - View traces for specific LLM models
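These filters can be reasoned about as a predicate over a trace summary. A sketch with an illustrative record shape (the TraceSummary fields are assumptions for the example, not the dashboard's actual schema):

```typescript
// Sketch: dashboard-style filtering as a predicate. The TraceSummary
// shape and its field names are illustrative assumptions.
interface TraceSummary {
  functionId: string;
  sessionId: string;
  userId: string;
  status: "success" | "error";
  model: string;
  startedAt: number; // unix epoch ms
}

type TraceFilter = Partial<Omit<TraceSummary, "startedAt">> & {
  after?: number;  // start of time range
  before?: number; // end of time range
};

function matches(trace: TraceSummary, filter: TraceFilter): boolean {
  if (filter.functionId && trace.functionId !== filter.functionId) return false;
  if (filter.sessionId && trace.sessionId !== filter.sessionId) return false;
  if (filter.userId && trace.userId !== filter.userId) return false;
  if (filter.status && trace.status !== filter.status) return false;
  if (filter.model && trace.model !== filter.model) return false;
  if (filter.after !== undefined && trace.startedAt < filter.after) return false;
  if (filter.before !== undefined && trace.startedAt > filter.before) return false;
  return true;
}

const trace: TraceSummary = {
  functionId: "support-agent",
  sessionId: "s-1",
  userId: "u-1",
  status: "error",
  model: "gpt-4o",
  startedAt: 1_700_000_000_000,
};
const hit = matches(trace, { status: "error", model: "gpt-4o" }); // true
```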
Integration
AgentMark works with any application that uses OpenTelemetry. For detailed setup instructions, see the Development Observability documentation.

MCP Trace Server
For debugging traces directly from your IDE, AgentMark provides an MCP server that exposes list_traces and get_trace tools. This lets you query and inspect traces without leaving your development environment.
Best Practices
- Use Meaningful IDs - Choose descriptive function IDs for easy filtering and debugging.
- Add Context - Include relevant metadata like user IDs, session IDs, and business context.
- Monitor Regularly - Check traces frequently to catch issues early.
- Set Up Alerts - Configure alerts for cost, latency, or error thresholds.
- Analyze Patterns - Use the dashboard’s filtering to identify trends and patterns.

Next Steps
Sessions
Group related traces together
Alerts
Get notified of critical issues
Annotations
Manually label and score traces
Development Setup
Integrate observability in your app
Have Questions?
We’re here to help! Choose the best way to reach us:
- Join our Discord community for quick answers and discussions
- Email us at hello@agentmark.co for support
- Schedule an Enterprise Demo to learn about our business solutions