
What We Track
AgentMark automatically collects:
Inference Spans: Full lifecycle of prompt execution
- Token usage
- Response times
- Model information
- Completion status
- Cost
- Response status
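Cost, for example, is derived from token usage and per-model pricing. A minimal sketch of that arithmetic (the rate table and field names below are illustrative, not AgentMark's actual pricing data or span schema):

```typescript
// Sketch: deriving cost from token usage, as an inference span might record it.
// RATES_PER_1K is an illustrative example table, not real pricing data.
type Usage = { promptTokens: number; completionTokens: number };

const RATES_PER_1K: Record<string, { prompt: number; completion: number }> = {
  "example-model": { prompt: 0.0025, completion: 0.01 }, // assumed example rates
};

function estimateCost(model: string, usage: Usage): number {
  const rate = RATES_PER_1K[model];
  if (!rate) throw new Error(`unknown model: ${model}`);
  return (
    (usage.promptTokens / 1000) * rate.prompt +
    (usage.completionTokens / 1000) * rate.completion
  );
}
```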
Tool Calls: When your prompts use tools
- Tool name and parameters
- Execution duration
- Success/failure status
- Return values
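Conceptually, a tool-call record is what you get from wrapping the call in a timer and capturing its outcome. A minimal sketch; the type and function names here are illustrative, not AgentMark's schema:

```typescript
// Sketch: the shape of data a tool-call record might carry.
// Field names are illustrative assumptions, not AgentMark's actual schema.
interface ToolCallRecord {
  name: string;
  parameters: unknown;
  durationMs: number;
  status: "success" | "error";
  returnValue?: unknown;
  error?: string;
}

// Wrap a tool invocation, timing it and recording success or failure.
async function withToolTracking<T>(
  name: string,
  parameters: unknown,
  fn: () => Promise<T>
): Promise<{ record: ToolCallRecord; value?: T }> {
  const start = Date.now();
  try {
    const value = await fn();
    const record: ToolCallRecord = {
      name,
      parameters,
      durationMs: Date.now() - start,
      status: "success",
      returnValue: value,
    };
    return { record, value };
  } catch (err) {
    const record: ToolCallRecord = {
      name,
      parameters,
      durationMs: Date.now() - start,
      status: "error",
      error: String(err),
    };
    return { record };
  }
}
```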
Streaming Metrics: For streaming responses
- Time to first token
- Tokens per second
- Total streaming duration
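The arithmetic behind these metrics is straightforward. A minimal sketch, assuming a generic async iterable of token chunks (AgentMark computes this for you automatically):

```typescript
// Sketch: measuring time-to-first-token and tokens/sec over a token stream.
// The clock is injectable so the arithmetic is easy to verify.
interface StreamingMetrics {
  timeToFirstTokenMs: number;
  tokensPerSecond: number;
  totalDurationMs: number;
}

async function measureStream(
  tokens: AsyncIterable<string>,
  now: () => number = Date.now
): Promise<StreamingMetrics> {
  const start = now();
  let firstTokenAt = -1;
  let count = 0;
  for await (const _token of tokens) {
    if (firstTokenAt < 0) firstTokenAt = now(); // first chunk observed
    count++;
  }
  const totalDurationMs = now() - start;
  return {
    timeToFirstTokenMs: firstTokenAt < 0 ? 0 : firstTokenAt - start,
    tokensPerSecond: totalDurationMs > 0 ? (count * 1000) / totalDurationMs : 0,
    totalDurationMs,
  };
}
```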
Sessions: Group related traces together
- Organize traces by user interaction
- Track multi-step workflows
- Monitor batch processing jobs
- Analyze performance across related operations
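Conceptually, a session is just a shared identifier stamped on related traces, which lets you aggregate them later. A minimal sketch, with `sessionId` as an assumed field name rather than a confirmed schema:

```typescript
// Sketch: grouping trace records by a shared session identifier.
// "sessionId" is an illustrative field name, not AgentMark's actual schema.
interface TraceRecord {
  sessionId: string;
  name: string;
  durationMs: number;
}

function groupBySession(traces: TraceRecord[]): Map<string, TraceRecord[]> {
  const sessions = new Map<string, TraceRecord[]>();
  for (const trace of traces) {
    const group = sessions.get(trace.sessionId) ?? [];
    group.push(trace);
    sessions.set(trace.sessionId, group);
  }
  return sessions;
}
```

With traces grouped this way, per-session totals (duration, cost, error counts) are a simple reduce over each group.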
Alerts: Monitor critical metrics and get notified
- Cost thresholds
- Latency monitoring
- Error rate tracking
- Notifications via Slack or webhooks
- Alert history for pattern analysis
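Each alert reduces to a predicate over aggregated metrics. A minimal sketch with illustrative rule names and thresholds (not AgentMark's actual alert configuration):

```typescript
// Sketch: evaluating alert rules against aggregated metrics.
// Rule names and thresholds are illustrative assumptions.
interface Metrics {
  costUsd: number;
  p95LatencyMs: number;
  errorRate: number; // fraction of failed requests, 0..1
}

interface AlertRule {
  name: string;
  check: (m: Metrics) => boolean;
}

const rules: AlertRule[] = [
  { name: "cost-threshold", check: (m) => m.costUsd > 100 },
  { name: "high-latency", check: (m) => m.p95LatencyMs > 2000 },
  { name: "error-rate", check: (m) => m.errorRate > 0.05 },
];

// Return the names of all rules that fire for the given metrics;
// in practice each firing rule would trigger a Slack or webhook notification.
function firedAlerts(metrics: Metrics, activeRules: AlertRule[]): string[] {
  return activeRules.filter((r) => r.check(metrics)).map((r) => r.name);
}
```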
Basic Usage
Enable telemetry in your AgentMark client:
Learn More
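A hedged sketch of what enabling telemetry on the client might look like; the package name, factory function, and option names below are assumptions, not AgentMark's confirmed API, so consult the official documentation for the actual configuration:

```typescript
// Hypothetical sketch only: package name, factory function, and the
// `telemetry` option are assumptions, not AgentMark's confirmed API.
import { createAgentMarkClient } from "@agentmark/agentmark-core"; // assumed package

const client = createAgentMarkClient({
  // ...your existing loader/adapter configuration...
  telemetry: {
    enabled: true, // assumed flag for turning on trace collection
  },
});
```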
For detailed information about spans, metrics, and custom configuration, see:
Have Questions?
We’re here to help! Choose the best way to reach us:
- Join our Discord community for quick answers and discussions
- Email us at hello@agentmark.co for support
- Schedule an Enterprise Demo to learn about our business solutions