Vercel AI Integration

AgentMark can be used with the Vercel AI SDK to create AI agents with built-in observability support. This integration provides a seamless way to run prompts on Vercel's AI infrastructure.

Installation

npm install @agentmark/vercel-ai-v4-adapter

Core Concepts

1. Initialize File Loader

Prompts are loaded from files using the FileLoader:

import { FileLoader } from "@agentmark/agentmark-core";

const loader = new FileLoader('./path/to/prompts');

2. Register Models

Configure your models using the Vercel AI model registry:

import { VercelAIModelRegistry } from "@agentmark/vercel-ai-v4-adapter";
import { openai } from "@ai-sdk/openai";

const modelRegistry = new VercelAIModelRegistry(['gpt-4o'], (name: string) => {
  return openai(name);
});

3. Initialize Client

Create an Agentmark client with your loader and model registry:

import { createAgentMarkClient } from "@agentmark/vercel-ai-v4-adapter";

const agentmark = createAgentMarkClient({
  loader,
  modelRegistry,
});

Running Prompts

Basic Usage

// Load a prompt
const prompt = await agentmark.loadTextPrompt('./path/to/prompt.mdx');

// Format with props
const input = await prompt.format({
  props: {
    name: "Alice",
    items: ["apple", "banana"]
  }
});

// Generate text using Vercel AI
import { generateText } from "ai";
const result = await generateText(input);

console.log(result.text);

Advanced Configuration

The format method accepts several configuration options:

const input = await prompt.format({
  // Props to pass to the prompt
  props: {
    name: "Alice"
  },
  
  // Tool context for prompt tools
  toolContext: {
    // Tool-specific context
  },
  
  // Telemetry configuration
  telemetry: {
    isEnabled: true,
    functionId: "example-function",
    metadata: {
      userId: "123",
      environment: "production"
    }
  },
  
  // API configuration
  apiKey: "sk-1234567890",
  baseURL: "https://api.openai.com/v1"
});

Observability

AgentMark uses OpenTelemetry for collecting telemetry data. The Vercel AI integration includes built-in support for observability.

Enabling Telemetry

Enable telemetry when calling format:

const input = await prompt.format({
  props: {
    // prompt props
  },
  telemetry: {
    isEnabled: true,
    functionId: "calculate-price",
    metadata: {
      userId: "123",
      environment: "production"
    }
  }
});
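To keep telemetry settings consistent across prompts, one option is to centralize them in a small helper that derives the configuration from the environment. The `buildTelemetry` function and the `TELEMETRY_ENABLED` variable below are illustrative names, not part of the AgentMark API:

```typescript
// Illustrative helper for producing a consistent telemetry object.
// Shape mirrors the `telemetry` option accepted by `format` above.
export interface TelemetryConfig {
  isEnabled: boolean;
  functionId: string;
  metadata: Record<string, string>;
}

export function buildTelemetry(
  functionId: string,
  metadata: Record<string, string> = {}
): TelemetryConfig {
  return {
    // Enable telemetry in production, or when explicitly opted in.
    isEnabled:
      process.env.TELEMETRY_ENABLED === "true" ||
      process.env.NODE_ENV === "production",
    functionId,
    // Always record the environment; callers can merge in extra metadata.
    metadata: {
      environment: process.env.NODE_ENV ?? "development",
      ...metadata,
    },
  };
}
```

You would then pass `telemetry: buildTelemetry("calculate-price", { userId: "123" })` to `format`, so every call site tags spans the same way.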

Telemetry with AgentMark Cloud

The easiest way to get started with observability is AgentMark Cloud, which automatically collects and visualizes telemetry data from your prompts:

import { AgentMarkSDK } from "@agentmark/sdk";

const sdk = new AgentMarkSDK({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!
});

sdk.initTracing({
  // disable batching only in local development
  disableBatch: true
});

Collected Spans

AgentMark records the following span types:

Span Type | Description | Attributes
ai.inference | Full length of the inference call | operation.name, ai.operationId, ai.prompt, ai.response.text, ai.response.toolCalls, ai.response.finishReason
ai.toolCall | Individual tool executions | operation.name, ai.operationId, ai.toolCall.name, ai.toolCall.args, ai.toolCall.result
ai.stream | Streaming response data | ai.response.msToFirstChunk, ai.response.msToFinish, ai.response.avgCompletionTokensPerSecond

Basic LLM Span Information

Each LLM span contains:

Attribute | Description
ai.model.id | Model identifier
ai.model.provider | Model provider name
ai.usage.promptTokens | Number of prompt tokens
ai.usage.completionTokens | Number of completion tokens
ai.settings.maxRetries | Maximum retry attempts
ai.telemetry.functionId | Function identifier
ai.telemetry.metadata.* | Custom metadata

Custom OpenTelemetry Setup

For custom OpenTelemetry configuration:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';

const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
  serviceName: 'my-agentmark-app',
});

sdk.start();

Best Practices

  1. Error Handling

    • Always handle potential errors in your prompt execution
    • Use try-catch blocks for API calls
    • Implement proper error logging
  2. Type Safety

    • Use appropriate types for props
    • Enable TypeScript strict mode
    • Validate input data before processing
  3. Configuration

    • Store API keys in environment variables
    • Use different configurations for development and production
    • Configure telemetry appropriately for your environment
  4. Performance

    • Cache prompt results when appropriate
    • Monitor token usage and costs
    • Use streaming for long-running prompts
  5. Security

    • Never expose API keys in client-side code
    • Validate and sanitize all inputs
    • Use appropriate authentication methods
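A few of these practices can be sketched as small TypeScript helpers. `requireEnv` and `validateProps` are illustrative names, not part of the AgentMark API:

```typescript
// Fail fast with a clear message when a required secret is missing,
// rather than letting an undefined API key surface as a cryptic
// downstream error.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Validate that the expected prop keys are present before calling
// `prompt.format`, so bad inputs are caught before any tokens are spent.
export function validateProps<T extends Record<string, unknown>>(
  props: T,
  required: (keyof T)[]
): T {
  for (const key of required) {
    if (props[key] === undefined || props[key] === null) {
      throw new Error(`Missing required prop: ${String(key)}`);
    }
  }
  return props;
}
```

In practice you would call `validateProps` on user-supplied props before `prompt.format`, read keys such as `OPENAI_API_KEY` via `requireEnv` at startup, and wrap the `generateText` call in a try/catch that logs the failure before rethrowing.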

Have Questions?

We’re here to help! Choose the best way to reach us: