This guide walks you through setting up observability and viewing your first trace in the AgentMark dashboard.
Prerequisites
Before you begin, make sure you have:
An AgentMark account
An app created in the AgentMark dashboard
An API key from Settings in your app
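With those in hand, you can expose your credentials to the SDK through environment variables. The variable names below match the ones read by the initialization snippets in this guide; the placeholder values are examples, not real credentials:

```shell
# Replace the placeholders with the API key and app ID
# from your app's Settings page in the AgentMark dashboard.
export AGENTMARK_API_KEY="your-api-key"
export AGENTMARK_APP_ID="your-app-id"
```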
Install the SDK
```shell
# TypeScript
npm install @agentmark-ai/sdk
```

```shell
# Python
pip install agentmark-sdk
```
Initialize tracing
```typescript
import { AgentMarkSDK } from "@agentmark-ai/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY,
  appId: process.env.AGENTMARK_APP_ID,
});

sdk.initTracing();
```

```python
import os

from agentmark_sdk import AgentMarkSDK

sdk = AgentMarkSDK(
    api_key=os.environ["AGENTMARK_API_KEY"],
    app_id=os.environ["AGENTMARK_APP_ID"],
)

sdk.init_tracing()
```
Run a prompt
Execute a prompt to generate your first trace.
```typescript
import { generateText } from "ai";
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { openai } from "@ai-sdk/openai";

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o-mini"], (name) => openai(name));

const client = createAgentMarkClient({
  loader: sdk.getApiLoader(),
  modelRegistry,
});

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({
  props: { name: "Alice" },
  telemetry: { isEnabled: true, functionId: "greeting" },
});

const result = await generateText(input);
console.log(result.text);
```
```python
from agentmark_client import client
from agentmark_pydantic_ai_v0 import run_text_prompt

prompt = await client.load_text_prompt("greeting.prompt.mdx")
params = await prompt.format(
    props={"name": "Alice"},
    telemetry={"isEnabled": True, "functionId": "greeting"},
)

result = await run_text_prompt(params)
print(result.text)
```
View your trace
Open the AgentMark dashboard, navigate to your app, and click Traces. You should see your trace appear within seconds, showing the execution timeline, token usage, cost, and model information.
For short-running scripts, call await tracer.shutdown() before the process exits (in both the TypeScript and Python SDKs) to ensure all traces are flushed.
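The flush-before-exit pattern can be sketched as follows. The `Tracer` interface and stub object here are hypothetical stand-ins for the tracer handle the AgentMark SDK provides; only the try/finally structure is the point:

```typescript
// Sketch of flushing traces before a short-lived script exits.
// Tracer and its stub below are hypothetical stand-ins for the
// real tracer from the AgentMark SDK.
interface Tracer {
  shutdown(): Promise<void>;
}

let flushed = false;

const tracer: Tracer = {
  // The real shutdown() flushes any buffered spans to AgentMark.
  shutdown: async () => {
    flushed = true;
  },
};

async function runScript(): Promise<void> {
  try {
    // ... load prompts and run generations here ...
  } finally {
    // Always flush, even if the script throws midway.
    await tracer.shutdown();
  }
}

await runScript();
```

Wrapping the work in try/finally guarantees the flush runs on both success and failure paths, so no spans are dropped when the process exits.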
Next steps
Traces and Logs Explore trace details and span attributes
Sessions Group related traces together
Metadata Add custom context to traces
Filtering and Search Find specific traces across dimensions
Have Questions? We’re here to help!