# Vercel AI Integration

Use AgentMark with the Vercel AI SDK to create AI agents with observability.

AgentMark can be used with the Vercel AI SDK to create AI agents with built-in observability support. This integration provides a seamless way to run prompts on Vercel's AI infrastructure.
## Installation
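Install the AgentMark packages alongside the Vercel AI SDK. The package names below are assumptions; confirm the exact names for your SDK version in the AgentMark docs:

```bash
# Package names are illustrative; check the AgentMark docs for the exact ones.
npm install @agentmark/agentmark-core @agentmark/vercel-ai-v4-adapter ai
```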
## Core Concepts
### 1. Initialize File Loader

Prompts are loaded from files using the `FileLoader`:
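A minimal sketch, assuming `FileLoader` is exported from the core AgentMark package and takes the root directory of your prompt files:

```typescript
import { FileLoader } from "@agentmark/agentmark-core"; // import path assumed

// Point the loader at the directory that contains your prompt files.
const fileLoader = new FileLoader("./prompts");
```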
### 2. Register Models

Configure your models using the Vercel AI model registry:
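A sketch of registering models, assuming a `VercelAIModelRegistry` that maps the model names used in your prompt files to Vercel AI SDK provider instances (registry and method names are assumptions):

```typescript
import { openai } from "@ai-sdk/openai";
import { VercelAIModelRegistry } from "@agentmark/vercel-ai-v4-adapter"; // import path assumed

const modelRegistry = new VercelAIModelRegistry();

// Map model names from your prompt files to Vercel AI SDK models.
modelRegistry.registerModels(["gpt-4o", "gpt-4o-mini"], (name: string) =>
  openai(name)
);
```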
### 3. Initialize Client

Create an AgentMark client with your loader and model registry:
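A sketch, assuming a `createAgentMarkClient` factory that wires the loader and registry together (factory name assumed):

```typescript
import { createAgentMarkClient } from "@agentmark/vercel-ai-v4-adapter"; // factory name assumed

const agentMark = createAgentMarkClient({
  loader: fileLoader,
  modelRegistry,
});
```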
## Running Prompts

### Basic Usage
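A minimal run, assuming the client exposes a text-prompt loader and the prompt's `format` method returns input compatible with the Vercel AI SDK's `generateText`; the prompt path, method names, and props are illustrative:

```typescript
import { generateText } from "ai";

// Load a prompt file and format it with input props (names assumed).
const prompt = await agentMark.loadTextPrompt("example.prompt.mdx");
const vercelInput = await prompt.format({
  props: { userMessage: "What is the capital of France?" },
});

// Hand the formatted input to the Vercel AI SDK.
const result = await generateText(vercelInput);
console.log(result.text);
```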
### Advanced Configuration

The `format` method accepts several configuration options:
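The option names below are assumptions based on the sections that follow (provider keys, telemetry); consult the AgentMark reference for the full list:

```typescript
const vercelInput = await prompt.format({
  props: { userMessage: "Hello!" },
  // Provider API key supplied at format time (option name assumed).
  apiKey: process.env.OPENAI_API_KEY,
  // Enable OpenTelemetry tracing for this call (see Observability below).
  telemetry: { isEnabled: true },
});
```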
## Observability

AgentMark uses OpenTelemetry for collecting telemetry data. The Vercel AI integration includes built-in support for observability.
### Enabling Telemetry

Enable telemetry when calling `format`:
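A sketch, assuming `format` accepts a `telemetry` option whose fields mirror the Vercel AI SDK's `experimental_telemetry` settings (`isEnabled`, `functionId`, `metadata`):

```typescript
const vercelInput = await prompt.format({
  props: { userMessage: "Hello!" },
  telemetry: {
    isEnabled: true,
    functionId: "customer-support-prompt", // surfaces as ai.telemetry.functionId
    metadata: { userId: "user-123" },      // surfaces as ai.telemetry.metadata.*
  },
});
```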
### Telemetry with AgentMark Cloud

The easiest way to get started with observability is to use AgentMark Cloud, which automatically collects and visualizes telemetry data from your prompts:
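One way to wire this up is a standard OTLP exporter pointed at AgentMark Cloud. The endpoint URL and header name below are assumptions; copy the real values from your AgentMark Cloud dashboard:

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Endpoint and auth header are placeholders; use the values from your dashboard.
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "https://api.agentmark.co/otel/v1/traces",
    headers: { "x-api-key": process.env.AGENTMARK_API_KEY ?? "" },
  }),
});

sdk.start();
```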
### Collected Spans

AgentMark records the following span types:

| Span Type | Description | Attributes |
| --- | --- | --- |
| `ai.inference` | Full length of the inference call | `operation.name`, `ai.operationId`, `ai.prompt`, `ai.response.text`, `ai.response.toolCalls`, `ai.response.finishReason` |
| `ai.toolCall` | Individual tool executions | `operation.name`, `ai.operationId`, `ai.toolCall.name`, `ai.toolCall.args`, `ai.toolCall.result` |
| `ai.stream` | Streaming response data | `ai.response.msToFirstChunk`, `ai.response.msToFinish`, `ai.response.avgCompletionTokensPerSecond` |
### Basic LLM Span Information

Each LLM span contains:

| Attribute | Description |
| --- | --- |
| `ai.model.id` | Model identifier |
| `ai.model.provider` | Model provider name |
| `ai.usage.promptTokens` | Number of prompt tokens |
| `ai.usage.completionTokens` | Number of completion tokens |
| `ai.settings.maxRetries` | Maximum retry attempts |
| `ai.telemetry.functionId` | Function identifier |
| `ai.telemetry.metadata.*` | Custom metadata |
### Custom OpenTelemetry Setup

For custom OpenTelemetry configuration:
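If you run your own collector or exporter, register it before your prompts execute. A minimal Node.js sketch using the standard OpenTelemetry SDK:

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node";

// Swap ConsoleSpanExporter for any exporter your backend supports.
const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
});

sdk.start();

// Flush and shut down cleanly on exit.
process.on("SIGTERM", () => {
  sdk.shutdown().catch(console.error);
});
```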
## Best Practices

1. **Error Handling**
   - Always handle potential errors in your prompt execution (see the sketch after this list)
   - Use try-catch blocks for API calls
   - Implement proper error logging
2. **Type Safety**
   - Use appropriate types for props
   - Enable TypeScript strict mode
   - Validate input data before processing
3. **Configuration**
   - Store API keys in environment variables
   - Use different configurations for development and production
   - Configure telemetry appropriately for your environment
4. **Performance**
   - Cache prompt results when appropriate
   - Monitor token usage and costs
   - Use streaming for long-running prompts
5. **Security**
   - Never expose API keys in client-side code
   - Validate and sanitize all inputs
   - Use appropriate authentication methods
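For the error-handling point above, a minimal try-catch sketch around a prompt run (prompt path and props are illustrative, reusing the client from earlier):

```typescript
import { generateText } from "ai";

try {
  const prompt = await agentMark.loadTextPrompt("example.prompt.mdx");
  const vercelInput = await prompt.format({
    props: { userMessage: "Hello!" },
  });
  const result = await generateText(vercelInput);
  console.log(result.text);
} catch (error) {
  // Log enough context to diagnose failures without leaking secrets.
  console.error("Prompt execution failed:", error);
  throw error;
}
```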
## Have Questions?

We’re here to help! Choose the best way to reach us:

- Join our Discord community for quick answers and discussions
- Email us at hello@agentmark.co for support
- Schedule an Enterprise Demo to learn about our business solutions