AgentMark works with multiple AI SDKs through adapters. Choose the adapter that fits your tech stack, or build your own.
## What Are Adapters?
Adapters connect AgentMark prompts to AI SDKs. They translate AgentMark’s prompt format into the format your AI SDK expects.
The pattern is always the same:

- Load a prompt: `client.loadTextPrompt()` / `client.loadObjectPrompt()`
- Format it with props: `await prompt.format({ props: {...} })`
- Pass the result to your AI SDK's generation function
## Available Adapters
### AI SDK (Recommended)
The most popular adapter for Next.js and Node.js applications. Supports text, object, image, and speech generation with streaming.
```typescript
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v5-adapter";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o"], (name) => openai(name));

// fileLoader is your configured prompt loader
const client = createAgentMarkClient({ loader: fileLoader, modelRegistry });

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const input = await prompt.format({ props: { name: "Alice" } });
const result = await generateText(input);
```
Learn more →
### Claude Agent SDK
Run AgentMark prompts as agentic tasks with Anthropic’s Claude Agent SDK. Supports tool use, budget controls, and tracing.
```typescript
import { createAgentMarkClient, ClaudeAgentModelRegistry } from "@agentmark-ai/claude-agent-sdk-adapter";

const modelRegistry = new ClaudeAgentModelRegistry()
  .registerModels(["claude-sonnet-4-20250514"]);

// fileLoader is your configured prompt loader
const client = createAgentMarkClient({ loader: fileLoader, modelRegistry });

const prompt = await client.loadTextPrompt("task.prompt.mdx");
const input = await prompt.format({ props: { task: "Refactor auth module" } });
const result = await runAgent(input);
```
Learn more →
### Mastra
Built for agentic workflows and multi-step LLM applications with Mastra’s framework.
```typescript
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";

const modelRegistry = new MastraModelRegistry()
  .registerModels(["claude-3-5-sonnet-20241022"], (name) => createModel(name));

// fileLoader is your configured prompt loader; createModel is your model factory
const client = createAgentMarkClient({ loader: fileLoader, modelRegistry });

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const agent = await prompt.formatAgent({ props: { name: "Alice" } });
const result = await agent.generate(...agent.formatMessages());
```
Learn more →
### Pydantic AI
The recommended adapter for Python applications. Supports text, object generation, streaming, and type-safe outputs via Pydantic models.
```python
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client, create_default_model_registry, run_text_prompt
from agentmark.prompt_core import FileLoader

loader = FileLoader(base_dir="./")
client = create_pydantic_ai_client(model_registry=create_default_model_registry(), loader=loader)

prompt = await client.load_text_prompt("greeting.prompt.mdx")
params = await prompt.format(props={"name": "Alice"})
result = await run_text_prompt(params)
```
Learn more →
### LlamaIndex (Coming Soon)
Use AgentMark prompts with LlamaIndex’s data framework and agent capabilities.
Learn more →
### Default (Fallback)
Returns raw prompt configuration without SDK-specific formatting. Useful for mapping to any provider directly.
```typescript
import { createAgentMarkClient } from "@agentmark-ai/fallback-adapter";

const client = createAgentMarkClient({ loader: fileLoader });
const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const result = await prompt.format({ props: { name: "Alice" } });
// Returns raw config — pass to your own provider
```
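The fallback adapter leaves the provider call to you. As an illustration only (the actual shape of the raw config depends on your prompt's frontmatter, and the field names below are assumptions, not the real AgentMark schema), mapping such a config onto an OpenAI-style chat-completions request body might look like:

```typescript
// Hypothetical raw-config shape; field names are illustrative, not AgentMark's actual schema.
interface RawPromptConfig {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
}

// Shape expected by an OpenAI-style chat-completions endpoint.
interface ChatCompletionRequest {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
}

// Map the raw config onto the provider request, forwarding optional fields only when set.
function toChatCompletionRequest(config: RawPromptConfig): ChatCompletionRequest {
  const body: ChatCompletionRequest = {
    model: config.model,
    messages: config.messages,
  };
  if (config.temperature !== undefined) {
    body.temperature = config.temperature;
  }
  return body;
}

const raw: RawPromptConfig = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello, Alice!" }],
  temperature: 0.7,
};

const body = toChatCompletionRequest(raw);
```

From there, `body` can be posted to the provider with your HTTP client of choice.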
Learn more →
### Custom Adapter
Build your own adapter for any AI SDK by implementing the model registry interface.
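The real model-registry interface is defined by AgentMark; see the adapter docs for its exact shape. As a sketch only, a registry that mirrors the `registerModels(names, factory)` pattern used by the built-in registries might look like:

```typescript
// Illustrative only: assumes a registry maps model names to factory functions,
// following the registerModels(names, factory) pattern of the built-in registries.
type ModelFactory<M> = (name: string) => M;

class MyModelRegistry<M> {
  private factories = new Map<string, ModelFactory<M>>();

  // Register a factory for one or more model names; chainable like the built-ins.
  registerModels(names: string[], factory: ModelFactory<M>): this {
    for (const name of names) this.factories.set(name, factory);
    return this;
  }

  // Resolve a registered name to a model instance, failing loudly on unknown names.
  getModel(name: string): M {
    const factory = this.factories.get(name);
    if (!factory) throw new Error(`Model not registered: ${name}`);
    return factory(name);
  }
}

const registry = new MyModelRegistry<string>()
  .registerModels(["my-model"], (name) => `client-for-${name}`);
```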
Learn more →
## How to Choose
| Adapter | Best For | Language | Streaming | Image/Speech |
|---|---|---|---|---|
| AI SDK | Next.js, Node.js apps with broad model support | TypeScript | Yes | Yes |
| Claude Agent SDK | Agentic tasks with Claude (tool use, budget controls) | TypeScript | No | No |
| Mastra | Complex agentic workflows and orchestration | TypeScript | Yes | No |
| Pydantic AI | Python applications with type-safe outputs | Python | Yes | No |
| LlamaIndex | Data-heavy apps with indexing and retrieval | Python | TBD | TBD |
| Default | Direct provider mapping or unsupported SDKs | TypeScript | N/A | N/A |
| Custom | Any SDK with specific requirements | TypeScript | You decide | You decide |
## For Python Developers
**Recommended: Pydantic AI.** The best fit for most Python projects. It provides:
- Type-safe outputs via Pydantic models
- Streaming support
- Sync and async tool functions
- All major LLM providers (OpenAI, Anthropic, Google)
**Alternative: Claude Agent SDK.** Use it when you need agentic capabilities with Claude:
- Multi-turn tool use
- Budget controls
- Permission management
Image and speech generation are not available in Python adapters. If you need these features, consider using the AI SDK adapter via a Node.js service, or use provider SDKs directly.
## Switching Adapters
Switch between adapters without changing your prompts. Only your client configuration changes:
```typescript
// Switch from AI SDK to Mastra — your prompts stay exactly the same
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";
```

```python
# Switch from Pydantic AI to Claude Agent SDK — your prompts stay exactly the same
from agentmark_claude_agent_sdk import create_claude_agent_client, ClaudeAgentModelRegistry
```
Your prompts are adapter-agnostic. The same `.prompt.mdx` files work with any adapter. Only your client configuration (`agentmark.client.ts` or `agentmark_client.py`) needs to change.
## Package Versioning
Adapter packages use a `-v0` suffix (e.g., `agentmark-pydantic-ai-v0`, `@agentmark-ai/mastra-v0-adapter`). This indicates the adapter API version, not stability:

- `v0` = current stable API
- Future breaking changes would be released as `v1`, `v2`, etc.
This allows you to pin to a specific adapter API version while still receiving bug fixes.
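For example, pinning in `package.json` might look like this (the version number here is illustrative, not a real release):

```json
{
  "dependencies": {
    "@agentmark-ai/mastra-v0-adapter": "^0.4.0"
  }
}
```

On the Python side, the analogous pin goes in `requirements.txt` (e.g. `agentmark-pydantic-ai-v0~=0.2`, version again illustrative). Because the API version is part of the package name, upgrading to a hypothetical `v1` adapter is an explicit dependency change rather than a silent breaking update.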
## Next Steps