What Gets Tracked
AgentMark automatically collects:

Inference Spans
- Full prompt execution lifecycle
- Token usage and costs
- Response times
- Model information
- Completion status

Tool Spans
- Tool name and parameters
- Execution duration
- Success/failure status
- Return values

Streaming Metrics
- Time to first token
- Tokens per second
- Total streaming duration

Span Types
- LLM, tool, agent, retrieval, embedding, guardrail, and function
- Used for filtering, graph visualization, and analytics grouping
- Set via `observe()` or `ctx.span()`

Sessions
- Organize by user interaction
- Track multi-step workflows
- Monitor batch processing
- Analyze performance patterns
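The span and session fields listed above can be pictured with a minimal sketch. This is an illustration of the concept only, not the AgentMark SDK: the `SpanRecorder` class and its method names are assumptions made for this example.

```typescript
// Minimal sketch (NOT the AgentMark API) of how a recorder might capture
// the span fields listed above: type, session grouping, timing, status,
// and free-form attributes such as token usage and model information.

type SpanType =
  | "llm" | "tool" | "agent" | "retrieval"
  | "embedding" | "guardrail" | "function";

interface Span {
  name: string;
  type: SpanType;
  sessionId?: string; // groups related traces into one session
  startMs: number;
  endMs?: number;
  status?: "success" | "failure";
  attributes: Record<string, unknown>; // token usage, model info, etc.
}

class SpanRecorder {
  spans: Span[] = [];

  // Open a span; the caller attaches attributes as work happens.
  start(name: string, type: SpanType, sessionId?: string): Span {
    const span: Span = { name, type, sessionId, startMs: Date.now(), attributes: {} };
    this.spans.push(span);
    return span;
  }

  // Close a span with a completion status; duration is endMs - startMs.
  end(span: Span, status: "success" | "failure"): void {
    span.endMs = Date.now();
    span.status = status;
  }
}

// Usage: record an LLM span inside a session and attach token metrics.
const recorder = new SpanRecorder();
const span = recorder.start("generate-summary", "llm", "session-123");
span.attributes["model"] = "example-model";
span.attributes["tokens"] = { input: 120, output: 48 };
recorder.end(span, "success");
```

In a real exporter, each completed span would be serialized and shipped to the backend; the point here is only that every field in the lists above maps to a concrete slot on the span.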
Quick Start
Enable telemetry when formatting your prompts:

Setup
Initialize tracing in your application:

When to Use
Development:
- Debug prompt behavior
- Optimize token usage
- Understand execution flow
- Test different approaches

Production:
- Monitor performance
- Track costs
- Debug user issues
- Analyze usage patterns
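The Quick Start and Setup steps above (enable telemetry at format time, initialize tracing once at startup) might look roughly like the following sketch. Everything here is a stub: `initTracing`, `formatPrompt`, and the `telemetry` option shape are assumptions for illustration, not the documented AgentMark API; consult the SDK reference for the real calls.

```typescript
// Hypothetical shape of the Quick Start flow. All names are stubs so the
// sketch is self-contained; they are NOT AgentMark's real exports.

interface TelemetryOptions {
  isEnabled: boolean;
  functionId?: string; // assumed label for grouping traces
}

let tracingEnabled = false;

// Stub: in a real app this would configure the trace exporter once at startup.
function initTracing(): void {
  tracingEnabled = true;
}

// Stub: a prompt-format call that threads telemetry options through.
function formatPrompt(props: Record<string, string>, telemetry: TelemetryOptions) {
  return {
    messages: [{ role: "user" as const, content: `Summarize: ${props.topic}` }],
    telemetryEnabled: tracingEnabled && telemetry.isEnabled,
  };
}

// Initialize once, then opt each prompt into telemetry when formatting it.
initTracing();
const formatted = formatPrompt(
  { topic: "observability" },
  { isEnabled: true, functionId: "summarize" },
);
```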
PII masking
If your traces contain sensitive data (emails, SSNs, credit card numbers), you can redact it before it leaves your application. Masking runs client-side in your process — no unmasked data is ever exported. Alternatively, set AGENTMARK_HIDE_INPUTS=true or AGENTMARK_HIDE_OUTPUTS=true to suppress all input or output attributes.
Learn more about PII masking
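A minimal sketch of the client-side idea: scrub known PII patterns from an attribute value before the exporter ever sees it. This is illustrative only, not AgentMark's built-in masker; the pattern list and `maskPII` name are assumptions for this example.

```typescript
// Illustrative client-side masking: replace common PII patterns with
// placeholder tokens before a span attribute is exported.

const PII_PATTERNS: [RegExp, string][] = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "<email>"], // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "<ssn>"],     // US SSNs
  [/\b(?:\d[ -]?){13,16}\b/g, "<card>"],   // card-like digit runs
];

function maskPII(text: string): string {
  // Apply each pattern in order; output contains placeholders, never raw PII.
  return PII_PATTERNS.reduce((acc, [pattern, token]) => acc.replace(pattern, token), text);
}

console.log(maskPII("Contact jane@example.com, SSN 123-45-6789"));
// → "Contact <email>, SSN <ssn>"
```

Because the function runs inside your process, the exporter only ever receives the placeholder tokens, matching the client-side guarantee described above.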
Next Steps
Traces and Logs
Track execution and debug issues
Sessions
Group related traces together
PII Masking
Redact sensitive data from traces before export