AgentMark uses OpenTelemetry to collect telemetry data, providing a vendor-agnostic way to capture distributed traces and metrics for your prompt executions.

Enabling Tracing

Enable tracing in your AgentMark client:

import { AgentMarkSDK } from "@agentmark/sdk";
import {
  createAgentMarkClient,
  VercelAIModelRegistry
} from "@agentmark/vercel-ai-v4-adapter";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";


const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!
});

// Initialize tracing
const tracer = sdk.initTracing();

// Configure AgentMark with the Vercel AI v4 adapter
const modelRegistry = new VercelAIModelRegistry();
modelRegistry.registerModels("gpt-4o-mini", (name: string) => {
  return openai(name);
});

const agentmark = createAgentMarkClient({
  loader: sdk.getFileLoader(),
  modelRegistry,
});

const myPrompt = await agentmark.loadTextPrompt("my-prompt.prompt.mdx");

// Example props for your prompt's input variables
const props = { userName: "Alice" };

// Format the prompt with props and telemetry
const vercelInput = await myPrompt.format({
  props,
  telemetry: {
    isEnabled: true,
    functionId: "my-function",
    metadata: {
      userId: "123",
    },
  },
});

const result = await generateText(vercelInput);

// Shut down the tracer (only needed for short-running scripts and local testing)
await tracer.shutdown();

Grouping Traces

You can group traces by using the trace function. This creates a new trace with the name you pass as the first argument.

import { trace } from "@agentmark/sdk";
...
trace('my-trace', async () => {
  // Your code here
});

You can create sub-groups by using the component function. This creates a new sub-group within the parent trace.

import { component } from "@agentmark/sdk";
...
trace('my-trace', async () => {
  component('my-component', async () => {
    // Your code here
  });
});
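
For example, you can wrap a full prompt run in a trace and nest the LLM call inside a component so its spans appear grouped in the dashboard. The sketch below reuses the agentmark client and telemetry setup from the tracing example above; the trace, component, and function ID names are illustrative.

import { trace, component } from "@agentmark/sdk";
...
trace('onboarding-flow', async () => {
  component('welcome-email', async () => {
    // agentmark client created as in the tracing example above
    const prompt = await agentmark.loadTextPrompt("my-prompt.prompt.mdx");

    const input = await prompt.format({
      props: { userName: "Alice" }, // example props
      telemetry: { isEnabled: true, functionId: "welcome-email" },
    });

    const result = await generateText(input);
    console.log(result.text);
  });
});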

Collected Spans

AgentMark records the following OpenTelemetry spans:

Span Type     | Description                       | Attributes
ai.inference  | Full length of the inference call | operation.name, ai.operationId, ai.prompt, ai.response.text, ai.response.toolCalls, ai.response.finishReason
ai.toolCall   | Individual tool executions        | operation.name, ai.operationId, ai.toolCall.name, ai.toolCall.args, ai.toolCall.result
ai.stream     | Streaming response data           | ai.response.msToFirstChunk, ai.response.msToFinish, ai.response.avgCompletionTokensPerSecond

Basic LLM Span Information

Each LLM span contains:

Attribute                  | Description
ai.model.id                | Model identifier
ai.model.provider          | Model provider name
ai.usage.promptTokens      | Number of prompt tokens
ai.usage.completionTokens  | Number of completion tokens
ai.settings.maxRetries     | Maximum retry attempts
ai.telemetry.functionId    | Function identifier
ai.telemetry.metadata.*    | Custom metadata
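
For example, the functionId and each custom metadata key you pass through the telemetry options surface as span attributes under the prefixes shown above. The sessionId key below is illustrative.

const vercelInput = await myPrompt.format({
  props,
  telemetry: {
    isEnabled: true,
    functionId: "my-function", // recorded as ai.telemetry.functionId
    metadata: {
      userId: "123",           // recorded as ai.telemetry.metadata.userId
      sessionId: "abc-456",    // illustrative; recorded as ai.telemetry.metadata.sessionId
    },
  },
});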

Viewing Traces

Traces can be viewed in the AgentMark dashboard under the “Traces” tab. Each trace shows:

  • Complete prompt execution timeline
  • Tool calls and their durations
  • Token usage and costs
  • Custom metadata and attributes
  • Error information (if any)

Best Practices

  1. Use meaningful function IDs for easy filtering
  2. Add relevant metadata for debugging context (see the sketch after this list)
  3. Monitor token usage and costs regularly
  4. Enable tracing in production environments
  5. Use the dashboard’s filtering capabilities to debug specific issues
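
As a sketch of practices 1 and 2, the telemetry block below uses a descriptive function ID and request-scoped metadata; the specific ID and metadata keys are illustrative.

const vercelInput = await myPrompt.format({
  props,
  telemetry: {
    isEnabled: true,
    // A descriptive function ID makes traces easy to filter in the dashboard
    functionId: "support-ticket-summarizer",
    // Request-scoped metadata gives each trace debugging context
    metadata: {
      userId: "123",
      environment: process.env.NODE_ENV ?? "development",
      requestId: "req-abc123", // illustrative
    },
  },
});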
