AgentMark provides built-in observability powered by OpenTelemetry, allowing you to track, monitor, and debug your prompts throughout their lifecycle.

What Gets Tracked

AgentMark automatically collects:
Inference Spans - Full prompt execution lifecycle
  • Token usage and costs
  • Response times
  • Model information
  • Completion status
Tool Calls - When prompts use tools
  • Tool name and parameters
  • Execution duration
  • Success/failure status
  • Return values
Streaming Metrics - For streaming responses
  • Time to first token
  • Tokens per second
  • Total streaming duration
Span Kinds - Categorize operations by type
  • LLM, tool, agent, retrieval, embedding, guardrail, and function
  • Used for filtering, graph visualization, and analytics grouping
  • Set via observe() or ctx.span()
Sessions - Group related traces
  • Organize by user interaction
  • Track multi-step workflows
  • Monitor batch processing
  • Analyze performance patterns
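Most of the streaming metrics above are simple arithmetic over span timestamps. The sketch below shows how they could be derived; the `InferenceSpan` shape and its field names are illustrative, not the SDK's actual span schema.

```typescript
// Illustrative shape of a captured inference span. Field names are
// hypothetical -- the real span attributes are defined by the AgentMark SDK.
interface InferenceSpan {
  model: string;
  promptTokens: number;
  completionTokens: number;
  startedAt: number;    // ms epoch
  firstTokenAt: number; // ms epoch (streaming only)
  finishedAt: number;   // ms epoch
}

// Derive the streaming metrics listed above from raw timestamps.
function streamingMetrics(span: InferenceSpan) {
  const timeToFirstTokenMs = span.firstTokenAt - span.startedAt;
  const streamingDurationMs = span.finishedAt - span.firstTokenAt;
  const tokensPerSecond = span.completionTokens / (streamingDurationMs / 1000);
  return { timeToFirstTokenMs, streamingDurationMs, tokensPerSecond };
}

const m = streamingMetrics({
  model: 'gpt-4o-mini',
  promptTokens: 120,
  completionTokens: 200,
  startedAt: 0,
  firstTokenAt: 400,
  finishedAt: 2400,
});
// m: { timeToFirstTokenMs: 400, streamingDurationMs: 2000, tokensPerSecond: 100 }
```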

Quick Start

Enable telemetry when formatting your prompts:
import { client } from './agentmark.client';
import { generateText } from 'ai';

const prompt = await client.loadTextPrompt('greeting.prompt.mdx');
const input = await prompt.format({
  props: { name: 'Alice' },
  telemetry: {
    isEnabled: true,
    functionId: 'greeting-handler',
    metadata: {
      userId: 'user-123',
      sessionId: 'session-abc',
      sessionName: 'Customer Support Chat'
    }
  }
});

const result = await generateText(input);
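Because the telemetry block is plain data, a small helper can keep `functionId` and session metadata consistent across calls. The `telemetryFor` helper below is a hypothetical convenience, not part of the SDK:

```typescript
// Shape of the telemetry block from the example above.
interface Telemetry {
  isEnabled: boolean;
  functionId: string;
  metadata: { userId: string; sessionId: string; sessionName?: string };
}

// Hypothetical helper: builds a consistent telemetry block per request.
function telemetryFor(
  functionId: string,
  userId: string,
  sessionId: string,
  sessionName?: string,
): Telemetry {
  return {
    isEnabled: true,
    functionId,
    metadata: { userId, sessionId, ...(sessionName ? { sessionName } : {}) },
  };
}

// Usage: pass the result as the `telemetry` option to prompt.format().
const telemetry = telemetryFor(
  'greeting-handler',
  'user-123',
  'session-abc',
  'Customer Support Chat',
);
```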

Setup

Initialize tracing in your application:
import { AgentMarkSDK } from "@agentmark-ai/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY,
  appId: process.env.AGENTMARK_APP_ID,
  baseUrl: process.env.AGENTMARK_BASE_URL  // defaults to https://api.agentmark.co
});

// Initialize tracing
const tracer = sdk.initTracing();

// Your application code...

// Shutdown tracer (for short-running scripts)
await tracer.shutdown();
For local development with agentmark dev, traces are sent to http://localhost:9418 automatically. Set disableBatch if you need traces flushed immediately (useful for short scripts):
const tracer = sdk.initTracing({ disableBatch: true });

When to Use

Development:
  • Debug prompt behavior
  • Optimize token usage
  • Understand execution flow
  • Test different approaches
Production:
  • Monitor performance
  • Track costs
  • Debug user issues
  • Analyze usage patterns

PII masking

If your traces contain sensitive data (emails, SSNs, credit card numbers), you can redact it before it leaves your application. Masking runs client-side in your process — no unmasked data is ever exported.
import { AgentMarkSDK, createPiiMasker } from '@agentmark-ai/sdk';

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
  mask: createPiiMasker({ email: true, ssn: true }),
});
sdk.initTracing();
For a zero-code option, set AGENTMARK_HIDE_INPUTS=true or AGENTMARK_HIDE_OUTPUTS=true to suppress all input or output attributes. Learn more about PII masking
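Conceptually, a masker is just a function from attribute values to redacted values. A minimal sketch of that idea, assuming regex-based redaction; the patterns and replacement tokens here are illustrative, not the SDK's actual `createPiiMasker` implementation:

```typescript
// Illustrative patterns -- real-world email/SSN detection is more involved.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const SSN = /\b\d{3}-\d{2}-\d{4}\b/g;

// Redact PII in a span attribute value before it is exported.
function maskPii(value: string): string {
  return value.replace(EMAIL, '[EMAIL]').replace(SSN, '[SSN]');
}

console.log(maskPii('Contact alice@example.com, SSN 123-45-6789'));
// -> Contact [EMAIL], SSN [SSN]
```

Because masking runs before export, the raw values never leave your process regardless of what the backend stores.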

Next Steps

Traces and Logs

Track execution and debug issues

Sessions

Group related traces together

PII Masking

Redact sensitive data from traces before export