AgentMark uses OpenTelemetry to collect telemetry data, providing a vendor-agnostic way to capture distributed traces and metrics for your prompt executions.

Enabling Tracing

Enable tracing in your AgentMark client:
import { AgentMarkSDK } from "@agentmark/sdk";
import {
  createAgentMarkClient,
  VercelAIModelRegistry
} from "@agentmark/vercel-ai-v4-adapter";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";


const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!
});

// Initialize tracing
const tracer = sdk.initTracing();

// Configure AgentMark with the Vercel AI v4 adapter
const modelRegistry = new VercelAIModelRegistry();
modelRegistry.registerModels("gpt-4o-mini", (name: string) => {
  return openai(name);
});

const agentmark = createAgentMarkClient({
  loader: sdk.getFileLoader(),
  modelRegistry,
});

const myPrompt = await agentmark.loadTextPrompt("my-prompt.prompt.mdx");

// Example props (hypothetical shape; must match your prompt's input schema)
const props = { userMessage: "Hello, world!" };

// Format the prompt with props and telemetry
const vercelInput = await myPrompt.format({
  props,
  telemetry: { 
    isEnabled: true,
    functionId: "my-function",
    metadata: {
      userId: "123",
    }
  } 
});

const result = await generateText(vercelInput);

// Shutdown tracer (only needed for short-running scripts, local testing)
await tracer.shutdown();

Grouping Traces

You can group related operations by wrapping them in the trace function. This creates a new trace with the name you provide.
import { trace } from "@agentmark/sdk";
...
trace('my-trace', async () => {
  // Your code here
});
You can create sub-groups within a trace by using the component function. This creates a new sub-group under the parent trace.
import { component } from "@agentmark/sdk";
...
trace('my-trace', async () => {
  component('my-component', async () => {
    // Your code here
  });
});
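
For example, running a prompt inside a component groups its inference spans under that component in the trace. The following is a minimal sketch that reuses the formatted vercelInput from the Enabling Tracing example; the trace and component names are illustrative:
import { trace, component } from "@agentmark/sdk";
import { generateText } from "ai";
...
trace('my-trace', async () => {
  component('generate-answer', async () => {
    // Reuses `vercelInput` from the Enabling Tracing example above;
    // the resulting ai.inference span is grouped under this component.
    await generateText(vercelInput);
  });
});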

Collected Spans

AgentMark records the following OpenTelemetry spans:
| Span Type | Description | Attributes |
| --- | --- | --- |
| ai.inference | Full length of the inference call | operation.name, ai.operationId, ai.prompt, ai.response.text, ai.response.toolCalls, ai.response.finishReason |
| ai.toolCall | Individual tool executions | operation.name, ai.operationId, ai.toolCall.name, ai.toolCall.args, ai.toolCall.result |
| ai.stream | Streaming response data | ai.response.msToFirstChunk, ai.response.msToFinish, ai.response.avgCompletionTokensPerSecond |
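
The ai.stream attributes are recorded for streaming calls. As a minimal sketch, assuming the formatted input from the Enabling Tracing example can also be passed to the AI SDK's streamText, a streaming call would additionally produce these timing attributes:
import { streamText } from "ai";
...
// streamText returns immediately in AI SDK v4; iterating the stream
// produces ai.stream span data such as ai.response.msToFirstChunk.
const result = streamText(vercelInput);
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}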

Basic LLM Span Information

Each LLM span contains:
| Attribute | Description |
| --- | --- |
| ai.model.id | Model identifier |
| ai.model.provider | Model provider name |
| ai.usage.promptTokens | Number of prompt tokens |
| ai.usage.completionTokens | Number of completion tokens |
| ai.settings.maxRetries | Maximum retry attempts |
| ai.telemetry.functionId | Function identifier |
| ai.telemetry.metadata.* | Custom metadata |
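
For instance, the telemetry options passed to format in the Enabling Tracing example surface on the LLM span roughly as follows (attribute values shown here are illustrative):
// telemetry: { isEnabled: true, functionId: "my-function", metadata: { userId: "123" } }
// results in span attributes such as:
//   ai.telemetry.functionId      = "my-function"
//   ai.telemetry.metadata.userId = "123"
//   ai.model.id                  = "gpt-4o-mini"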

Graph View for Complex Traces

AgentMark provides a powerful graph visualization for complex traces with multiple components and dependencies. This is especially useful for AI agent workflows that involve multiple steps, parallel processing, and conditional logic.

Setting Up Graph View

To enable graph visualization, add graph metadata to your trace and component calls:
import { trace, component } from "@agentmark/sdk";

trace(
  {
    name: "ai-agent-workflow",
    metadata: {
      "graph.node.id": "orchestrator",
      "graph.node.display_name": "AI Agent Orchestrator",
      "graph.node.type": "router",
    },
  },
  async () => {
    // Process and validate user input
    component(
      {
        name: "input-processor",
        metadata: {
          "graph.node.id": "input-processor",
          "graph.node.parent_id": "orchestrator",
          "graph.node.display_name": "Input Processor",
          "graph.node.type": "agent",
        },
      },
      async () => {
        // Validate and clean user input, extract intent
        console.log("Processing user input...");
      }
    );

    // Retrieve relevant context from memory/database
    component(
      {
        name: "context-retrieval",
        metadata: {
          "graph.node.id": "context-retrieval", 
          "graph.node.parent_id": "orchestrator",
          "graph.node.display_name": "Context Retrieval",
          "graph.node.type": "retrieval",
        },
      },
      async () => {
        // Fetch relevant context, conversation history, user preferences
        console.log("Retrieving context...");
      }
    );

    // Main LLM reasoning combining input and context
    component(
      {
        name: "llm-reasoning",
        metadata: {
          "graph.node.id": "llm-reasoning",
          "graph.node.parent_ids": JSON.stringify(["input-processor", "context-retrieval"]),
          "graph.node.display_name": "LLM Reasoning",
          "graph.node.type": "llm",
        },
      },
      async () => {
        // Perform main AI reasoning with processed input and retrieved context
        console.log("AI reasoning...");
      }
    );

    // Execute tools if needed based on LLM decision
    component(
      {
        name: "tool-executor",
        metadata: {
          "graph.node.id": "tool-executor",
          "graph.node.parent_id": "llm-reasoning",
          "graph.node.display_name": "Tool Executor",
          "graph.node.type": "tool",
        },
      },
      async () => {
        // Execute external tools (API calls, calculations, etc.)
        console.log("Executing tools...");
      }
    );

    // Format the final response
    component(
      {
        name: "response-formatter",
        metadata: {
          "graph.node.id": "response-formatter",
          "graph.node.parent_ids": JSON.stringify(["llm-reasoning", "tool-executor"]),
          "graph.node.display_name": "Response Formatter",
          "graph.node.type": "agent",
        },
      },
      async () => {
        // Format the response for the user
        console.log("Formatting response...");
      }
    );

    // Store interaction in memory for future reference
    component(
      {
        name: "memory-storage",
        metadata: {
          "graph.node.id": "memory-storage",
          "graph.node.parent_id": "response-formatter",
          "graph.node.display_name": "Memory Storage",
          "graph.node.type": "memory",
        },
      },
      async () => {
        // Store the interaction, learnings, and outcomes
        console.log("Storing in memory...");
      }
    );
  }
);

Graph Metadata Properties

| Property | Description | Required |
| --- | --- | --- |
| graph.node.id | Unique identifier for the node | Yes |
| graph.node.display_name | Human-readable name shown in the graph | Yes |
| graph.node.type | Visual node type: router, llm, tool, retrieval, memory, agent | Yes |
| graph.node.parent_id | Single parent node ID | No* |
| graph.node.parent_ids | JSON array of parent node IDs for multi-parent nodes | No* |
*Either parent_id or parent_ids is required for all nodes except the root trace.

Node Types

The graph view supports different visual representations for different node types:
  • router: Orchestration and decision-making nodes
  • llm: LLM inference and reasoning nodes
  • tool: External tool execution nodes
  • retrieval: Information retrieval and search nodes
  • memory: Memory and storage operation nodes
  • agent: Agent-specific processing nodes

Multi-Parent Dependencies

For nodes that depend on multiple parent nodes, use the graph.node.parent_ids property with a JSON-stringified array:
component({
  name: "multi-parent-node",
  metadata: {
    "graph.node.id": "aggregator",
    "graph.node.parent_ids": JSON.stringify(["node1", "node2", "node3"]),
    "graph.node.display_name": "Data Aggregator",
    "graph.node.type": "router",
  },
}, async () => {
  // This node processes outputs from multiple parent nodes
});

Viewing Traces

Traces can be viewed in the AgentMark dashboard under the “Traces” tab. Each trace shows:
  • Complete prompt execution timeline
  • Tool calls and their durations
  • Token usage and costs
  • Custom metadata and attributes
  • Error information (if any)
  • Graph visualization (when graph metadata is present)

Best Practices

  1. Use meaningful function IDs for easy filtering (see the sketch after this list)
  2. Add relevant metadata for debugging context
  3. Monitor token usage and costs regularly
  4. Enable tracing in production environments
  5. Use the dashboard’s filtering capabilities to debug specific issues
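
For example, a descriptive functionId plus request-scoped metadata makes traces much easier to filter in the dashboard. The following is a minimal sketch reusing myPrompt and props from the Enabling Tracing example; the metadata keys (requestId, environment) are illustrative and appear as ai.telemetry.metadata.* attributes:
const vercelInput = await myPrompt.format({
  props,
  telemetry: {
    isEnabled: true,
    // Stable, descriptive identifier for this call site
    functionId: "support.answer-generation",
    // Illustrative keys; each appears as an ai.telemetry.metadata.* attribute
    metadata: {
      userId: "123",
      requestId: "req_abc123",
      environment: process.env.NODE_ENV ?? "development",
    },
  },
});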
