# Vercel AI Integration
AgentMark can be used with the Vercel AI SDK to create AI agents with built-in observability support. The integration provides a seamless way to run prompts through Vercel's AI infrastructure.

## Installation

```bash
npm install @agentmark/vercel-ai-v4-adapter
```

## Core Concepts

### 1. Initialize File Loader

Prompts are loaded from files using the `FileLoader`:

```typescript
import { FileLoader } from "@agentmark/agentmark-core";

const loader = new FileLoader('./path/to/prompts');
```

### 2. Register Models

Configure your models using the Vercel AI model registry:

```typescript
import { VercelAIModelRegistry } from "@agentmark/vercel-ai-v4-adapter";
import { openai } from "@ai-sdk/openai";

const modelRegistry = new VercelAIModelRegistry(['gpt-4o'], (name: string) => {
  return openai(name);
});
```
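
If your prompts use models from more than one provider, the same callback can branch on the requested model name. A minimal sketch, assuming the registry accepts any list of model names and that `@ai-sdk/anthropic` is installed (the model names here are illustrative):

```typescript
import { VercelAIModelRegistry } from "@agentmark/vercel-ai-v4-adapter";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

// The callback receives the model name requested by the prompt and
// returns the matching provider model instance.
const modelRegistry = new VercelAIModelRegistry(
  ['gpt-4o', 'claude-3-5-sonnet-latest'],
  (name: string) => {
    if (name.startsWith('claude')) {
      return anthropic(name);
    }
    return openai(name);
  }
);
```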

### 3. Initialize Client

Create an AgentMark client with your loader and model registry:

```typescript
import { createAgentMarkClient } from "@agentmark/vercel-ai-v4-adapter";

const agentmark = createAgentMarkClient({
  loader,
  modelRegistry,
});
```

## Running Prompts

### Basic Usage

```typescript
import { generateText } from "ai";

// Load a prompt
const prompt = await agentmark.loadTextPrompt('./path/to/prompt.mdx');

// Format with props
const input = await prompt.format({
  props: {
    name: "Alice",
    items: ["apple", "banana"]
  }
});

// Generate text using the Vercel AI SDK
const result = await generateText(input);
console.log(result.text);
```
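
For long-running prompts, the Best Practices section below recommends streaming. Assuming the formatted input is compatible with `streamText` the same way it is with `generateText` above, a streaming variant might look like this:

```typescript
import { streamText } from "ai";

// Stream the response instead of waiting for the full completion.
const stream = streamText(input);

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```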

### Advanced Configuration

The `format` method accepts several configuration options:

```typescript
const input = await prompt.format({
  // Props to pass to the prompt
  props: {
    name: "Alice"
  },
  // Tool context for prompt tools
  toolContext: {
    // Tool-specific context
  },
  // Telemetry configuration
  telemetry: {
    isEnabled: true,
    functionId: "example-function",
    metadata: {
      userId: "123",
      environment: "production"
    }
  },
  // API configuration
  apiKey: "sk-1234567890",
  baseURL: "https://api.openai.com/v1"
});
```
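
In practice, avoid hard-coding credentials like the `apiKey` above; read them from the environment instead (see Best Practices below). For example:

```typescript
// Illustrative only: the environment variable names are placeholders.
const input = await prompt.format({
  props: {
    name: "Alice"
  },
  apiKey: process.env.OPENAI_API_KEY!,
  baseURL: process.env.OPENAI_BASE_URL // optional override
});
```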

## Observability
AgentMark uses OpenTelemetry for collecting telemetry data. The Vercel AI integration includes built-in support for observability.

### Enabling Telemetry

Enable telemetry when calling `format`:

```typescript
const input = await prompt.format({
  props: {
    // prompt props
  },
  telemetry: {
    isEnabled: true,
    functionId: "calculate-price",
    metadata: {
      userId: "123",
      environment: "production"
    }
  }
});
```

### Telemetry with AgentMark Cloud

The easiest way to get started with observability is AgentMark Cloud, which automatically collects and visualizes telemetry data from your prompts:

```typescript
import { AgentMarkSDK } from "@agentmark/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!
});

sdk.initTracing({
  // Disable batching only in local environments
  disableBatch: true
});
```

### Collected Spans
AgentMark records the following span types:

| Span Type | Description | Attributes |
| --- | --- | --- |
| `ai.inference` | Full length of the inference call | `operation.name`, `ai.operationId`, `ai.prompt`, `ai.response.text`, `ai.response.toolCalls`, `ai.response.finishReason` |
| `ai.toolCall` | Individual tool executions | `operation.name`, `ai.operationId`, `ai.toolCall.name`, `ai.toolCall.args`, `ai.toolCall.result` |
| `ai.stream` | Streaming response data | `ai.response.msToFirstChunk`, `ai.response.msToFinish`, `ai.response.avgCompletionTokensPerSecond` |

Each LLM span contains:

| Attribute | Description |
| --- | --- |
| `ai.model.id` | Model identifier |
| `ai.model.provider` | Model provider name |
| `ai.usage.promptTokens` | Number of prompt tokens |
| `ai.usage.completionTokens` | Number of completion tokens |
| `ai.settings.maxRetries` | Maximum retry attempts |
| `ai.telemetry.functionId` | Function identifier |
| `ai.telemetry.metadata.*` | Custom metadata |
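
Each key in the `metadata` object you pass to `format` is recorded under the `ai.telemetry.metadata.*` prefix. For example, with the telemetry configuration shown earlier:

```typescript
// metadata keys map to span attributes:
//   userId: "123"             -> ai.telemetry.metadata.userId
//   environment: "production" -> ai.telemetry.metadata.environment
const input = await prompt.format({
  props: { name: "Alice" },
  telemetry: {
    isEnabled: true,
    functionId: "calculate-price",
    metadata: { userId: "123", environment: "production" }
  }
});
```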

### Custom OpenTelemetry Setup
For custom OpenTelemetry configuration:

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';

const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
  serviceName: 'my-agentmark-app',
});

sdk.start();
```
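
To avoid losing buffered spans when the process exits, shut the SDK down gracefully. A minimal sketch using the standard `NodeSDK` API:

```typescript
// Flush any pending spans before the process exits.
process.on('SIGTERM', async () => {
  await sdk.shutdown();
  process.exit(0);
});
```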

## Best Practices

- **Error Handling**
  - Always handle potential errors in your prompt execution (see the sketch after this list)
  - Use try-catch blocks for API calls
  - Implement proper error logging
- **Type Safety**
  - Use appropriate types for props
  - Enable TypeScript strict mode
  - Validate input data before processing
- **Configuration**
  - Store API keys in environment variables
  - Use different configurations for development and production
  - Configure telemetry appropriately for your environment
- **Performance**
  - Cache prompt results when appropriate
  - Monitor token usage and costs
  - Use streaming for long-running prompts
- **Security**
  - Never expose API keys in client-side code
  - Validate and sanitize all inputs
  - Use appropriate authentication methods
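
To illustrate the error-handling guidance, here is a sketch that wraps the earlier basic-usage example in a try-catch block (the fallback behavior is up to you):

```typescript
import { generateText } from "ai";

async function runPrompt(): Promise<string> {
  try {
    const prompt = await agentmark.loadTextPrompt('./path/to/prompt.mdx');
    const input = await prompt.format({ props: { name: "Alice" } });
    const result = await generateText(input);
    return result.text;
  } catch (error) {
    // Log and rethrow (or return a fallback) depending on your needs.
    console.error("Prompt execution failed:", error);
    throw error;
  }
}
```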

## Have Questions?

We're here to help! Reach out through our support channels.