AgentMark provides built-in observability powered by OpenTelemetry, allowing you to track, monitor, and debug your prompts throughout their lifecycle.

What Gets Tracked

AgentMark automatically collects:
Inference Spans - Full prompt execution lifecycle
  • Token usage and costs
  • Response times
  • Model information
  • Completion status
Tool Calls - When prompts use tools
  • Tool name and parameters
  • Execution duration
  • Success/failure status
  • Return values
Streaming Metrics - For streaming responses (see the streaming example under Quick Start)
  • Time to first token
  • Tokens per second
  • Total streaming duration
Sessions - Group related traces (see the sketch after this list)
  • Organize by user interaction
  • Track multi-step workflows
  • Monitor batch processing
  • Analyze performance patterns
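
For example, a two-step workflow can be grouped into one session by reusing the same sessionId in the telemetry metadata of each call. A minimal sketch using the format API shown under Quick Start below; the prompt files, props, and session values are hypothetical:
import { client } from './agentmark.client';
import { generateText } from 'ai';

// Both traces carry the same sessionId, so they appear as one session
const session = { sessionId: 'session-abc', sessionName: 'Refund Request' };

const classify = await client.loadTextPrompt('classify-intent.prompt.mdx');
const intent = await generateText(await classify.format({
  props: { message: 'I want a refund' },
  telemetry: { isEnabled: true, functionId: 'classify-intent', metadata: session }
}));

const reply = await client.loadTextPrompt('draft-reply.prompt.mdx');
const answer = await generateText(await reply.format({
  props: { intent: intent.text },
  telemetry: { isEnabled: true, functionId: 'draft-reply', metadata: session }
}));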

Quick Start

Enable telemetry when formatting your prompts:
import { client } from './agentmark.client';
import { generateText } from 'ai';

// Load the prompt, then format it with props and telemetry options
const prompt = await client.loadTextPrompt('greeting.prompt.mdx');
const input = await prompt.format({
  props: { name: 'Alice' },
  telemetry: {
    isEnabled: true,
    functionId: 'greeting-handler', // labels this call site in traces
    metadata: {
      userId: 'user-123',
      sessionId: 'session-abc', // groups related traces into one session
      sessionName: 'Customer Support Chat'
    }
  }
});

// Token usage, latency, and completion status are recorded automatically
const result = await generateText(input);
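
Streaming works the same way. A sketch assuming the formatted input is also compatible with the AI SDK's streamText:
import { client } from './agentmark.client';
import { streamText } from 'ai';

const prompt = await client.loadTextPrompt('greeting.prompt.mdx');
const input = await prompt.format({
  props: { name: 'Alice' },
  telemetry: { isEnabled: true, functionId: 'greeting-stream' }
});

// Time to first token and tokens per second are captured as the stream is consumed
const result = streamText(input);
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}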

Setup

Initialize tracing in your application:
import { AgentMarkSDK } from "@agentmark/sdk";

const sdk = new AgentMarkSDK({
  baseUrl: process.env.AGENTMARK_BASE_URL,
  apiKey: process.env.AGENTMARK_API_KEY,
  appId: process.env.AGENTMARK_APP_ID
});

// Initialize tracing
const tracer = sdk.initTracing();

// Your application code...

// Flush pending spans and shut down the tracer (needed in short-lived scripts)
await tracer.shutdown();
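
For a long-running service, you will typically flush on shutdown signals instead. A sketch reusing the tracer from above; adapt to your process manager:
// Flush spans before the process exits
const shutdown = async () => {
  await tracer.shutdown();
  process.exit(0);
};

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);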

When to Use

Development:
  • Debug prompt behavior
  • Optimize token usage
  • Understand execution flow
  • Test different approaches
Production:
  • Monitor performance
  • Track costs
  • Debug user issues
  • Analyze usage patterns
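
Because telemetry is opt-in per format call, one pattern is to gate it with an environment variable and tag each trace with its environment. A sketch; the AGENTMARK_TELEMETRY flag and telemetryFor helper are hypothetical:
import { client } from './agentmark.client';
import { generateText } from 'ai';

// Hypothetical helper: disable telemetry via an env flag, tag traces with the env
const telemetryFor = (functionId: string) => ({
  isEnabled: process.env.AGENTMARK_TELEMETRY !== 'off',
  functionId,
  metadata: { environment: process.env.NODE_ENV ?? 'development' }
});

const prompt = await client.loadTextPrompt('greeting.prompt.mdx');
const result = await generateText(await prompt.format({
  props: { name: 'Alice' },
  telemetry: telemetryFor('greeting-handler')
}));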

Next Steps