AgentMark uses OpenTelemetry to collect distributed traces and metrics for your prompt executions. This provides detailed visibility into how your prompts perform in development and production.

Setup

Initialize tracing in your application:
import { AgentMarkSDK } from "@agentmark/sdk";
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark/ai-sdk-v4-adapter";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const sdk = new AgentMarkSDK({
  baseUrl: process.env.AGENTMARK_BASE_URL,
  apiKey: process.env.AGENTMARK_API_KEY,
  appId: process.env.AGENTMARK_APP_ID
});

// Initialize tracing
const tracer = sdk.initTracing();

// Configure client
const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o-mini"], (name) => openai(name));

const client = createAgentMarkClient({
  loader: sdk.getFileLoader(),
  modelRegistry
});

// Load and run prompt with telemetry
const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: 'Alice' },
  telemetry: {
    isEnabled: true,
    functionId: "greeting-function",
    metadata: {
      userId: "123",
      environment: "production"
    }
  }
});

const result = await generateText(input);

// Shutdown tracer (only for short-running scripts)
await tracer.shutdown();

Collected Spans

AgentMark records these OpenTelemetry spans:
Span Type | Description | Key Attributes
ai.inference | Full inference call lifecycle | ai.model.id, ai.prompt, ai.response.text, ai.usage.promptTokens, ai.usage.completionTokens
ai.toolCall | Individual tool executions | ai.toolCall.name, ai.toolCall.args, ai.toolCall.result
ai.stream | Streaming response metrics | ai.response.msToFirstChunk, ai.response.msToFinish, ai.response.avgCompletionTokensPerSecond
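The streaming attributes are related to each other in a straightforward way. As an illustration only (inferred from the attribute names, not from the SDK source), the average throughput is completion tokens divided by total generation time:

```typescript
// Illustrative: how avgCompletionTokensPerSecond plausibly relates to the
// other stream attributes. This is an assumption based on the attribute
// names, not a guarantee about how the SDK computes it.
function avgCompletionTokensPerSecond(
  completionTokens: number,
  msToFinish: number
): number {
  return completionTokens / (msToFinish / 1000);
}
```

For example, 120 completion tokens generated over 3000 ms works out to 40 tokens per second.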

Span Attributes

Each span contains detailed attributes.

Model Information:
  • ai.model.id - Model identifier (e.g., “gpt-4o-mini”)
  • ai.model.provider - Provider name (e.g., “openai”)
Token Usage:
  • ai.usage.promptTokens - Input tokens
  • ai.usage.completionTokens - Output tokens
Telemetry Metadata:
  • ai.telemetry.functionId - Your function identifier
  • ai.telemetry.metadata.* - Custom metadata fields
Response Details:
  • ai.response.text - Generated text
  • ai.response.toolCalls - Tool call information
  • ai.response.finishReason - Why generation stopped
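If you consume these spans with your own tooling, the attributes are plain key/value pairs. A minimal sketch of reading the usage attributes off a finished span's attribute map (the record shape here is illustrative; the keys are the ones listed above):

```typescript
// Sketch: summarizing token usage from a span's attribute map.
// The attribute keys match the AgentMark spans described above; the
// SpanAttributes type is a simplified stand-in for the real span object.
type SpanAttributes = Record<string, string | number | undefined>;

function summarizeUsage(attrs: SpanAttributes): string {
  const model = attrs["ai.model.id"] ?? "unknown-model";
  const prompt = Number(attrs["ai.usage.promptTokens"] ?? 0);
  const completion = Number(attrs["ai.usage.completionTokens"] ?? 0);
  return `${model}: ${prompt} prompt + ${completion} completion = ${prompt + completion} tokens`;
}
```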

Grouping Traces

Group related operations using the trace function:
import { trace } from "@agentmark/sdk";

await trace('user-request-handler', async () => {
  // All spans created here are grouped under this trace
  const prompt = await client.loadTextPrompt('handler.prompt.mdx');
  const input = await prompt.format({
    props: { query: 'What is AgentMark?' },
    telemetry: { isEnabled: true }
  });

  const result = await generateText(input);
  return result;
});
Create sub-groups with the component function:
import { trace, component } from "@agentmark/sdk";

await trace('multi-step-workflow', async () => {

  await component('validate-input', async () => {
    // Validation logic
  });

  await component('process-request', async () => {
    // Main processing
    const prompt = await client.loadTextPrompt('process.prompt.mdx');
    // ...
  });

  await component('format-response', async () => {
    // Response formatting
  });
});

Graph View for Complex Workflows

For complex AI agent workflows, use graph metadata to visualize execution flow:
import { trace, component } from "@agentmark/sdk";

await trace(
  {
    name: "ai-agent-workflow",
    metadata: {
      "graph.node.id": "orchestrator",
      "graph.node.display_name": "Orchestrator",
      "graph.node.type": "router"
    }
  },
  async () => {

    // Step 1: Process input
    await component(
      {
        name: "input-processor",
        metadata: {
          "graph.node.id": "input-processor",
          "graph.node.parent_id": "orchestrator",
          "graph.node.display_name": "Input Processor",
          "graph.node.type": "agent"
        }
      },
      async () => {
        // Process and validate input
      }
    );

    // Step 2: Retrieve context
    await component(
      {
        name: "context-retrieval",
        metadata: {
          "graph.node.id": "context-retrieval",
          "graph.node.parent_id": "orchestrator",
          "graph.node.display_name": "Context Retrieval",
          "graph.node.type": "retrieval"
        }
      },
      async () => {
        // Fetch relevant context
      }
    );

    // Step 3: LLM reasoning (depends on both previous steps)
    await component(
      {
        name: "llm-reasoning",
        metadata: {
          "graph.node.id": "llm-reasoning",
          "graph.node.parent_ids": JSON.stringify(["input-processor", "context-retrieval"]),
          "graph.node.display_name": "LLM Reasoning",
          "graph.node.type": "llm"
        }
      },
      async () => {
        const prompt = await client.loadTextPrompt('reason.prompt.mdx');
        // ... reasoning logic
      }
    );
  }
);

Graph Metadata

Property | Description | Required
graph.node.id | Unique node identifier | Yes
graph.node.display_name | Human-readable name | Yes
graph.node.type | Visual type: router, llm, tool, retrieval, memory, agent | Yes
graph.node.parent_id | Single parent node ID | No*
graph.node.parent_ids | JSON array of parent IDs | No*

*Either parent_id or parent_ids is required (except for the root node).

Node Types

  • router - Orchestration and routing
  • llm - LLM inference and reasoning
  • tool - External tool execution
  • retrieval - Information retrieval
  • memory - Memory operations
  • agent - Agent-specific processing
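Since the parent rules differ for root, single-parent, and multi-parent nodes, it can help to centralize the metadata construction. Here is a hypothetical helper (not part of @agentmark/sdk) that assembles graph metadata following the rules above:

```typescript
// Hypothetical helper (not part of @agentmark/sdk) that builds graph node
// metadata: parent_id for exactly one parent, parent_ids (a JSON array)
// for several, and neither for the root node.
type NodeType = "router" | "llm" | "tool" | "retrieval" | "memory" | "agent";

function graphNodeMetadata(
  id: string,
  displayName: string,
  type: NodeType,
  parents: string[] = []
): Record<string, string> {
  const meta: Record<string, string> = {
    "graph.node.id": id,
    "graph.node.display_name": displayName,
    "graph.node.type": type,
  };
  if (parents.length === 1) {
    meta["graph.node.parent_id"] = parents[0];
  } else if (parents.length > 1) {
    meta["graph.node.parent_ids"] = JSON.stringify(parents);
  }
  return meta;
}
```

With this, the llm-reasoning component from the example above would pass `graphNodeMetadata("llm-reasoning", "LLM Reasoning", "llm", ["input-processor", "context-retrieval"])` as its metadata.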

Best Practices

Use meaningful function IDs:
// ❌ Bad
telemetry: { functionId: "func1" }

// ✅ Good
telemetry: { functionId: "customer-support-greeting" }
Add relevant metadata:
telemetry: {
  isEnabled: true,
  functionId: "search-handler",
  metadata: {
    userId: user.id,
    environment: process.env.NODE_ENV,
    query: searchQuery,
    resultCount: results.length
  }
}
Monitor in production:
  • Always enable telemetry in production
  • Use the AgentMark dashboard to monitor performance
  • Set up alerts for anomalies
  • Review traces when debugging user issues
Clean up resources:
// For short-running scripts (like CLI tools)
await tracer.shutdown();

// For long-running servers, shutdown on process exit
process.on('SIGTERM', async () => {
  await tracer.shutdown();
  process.exit(0);
});

Viewing Traces

Traces are viewable in the AgentMark platform dashboard, which shows:
  • Complete execution timeline
  • Token usage and costs
  • Custom metadata
  • Error information
  • Graph visualization (when enabled)

Next Steps