The Pydantic AI adapter allows you to use AgentMark prompts with Pydantic AI in Python applications. This is the recommended adapter for Python projects.
## Installation

```shell
pip install agentmark-pydantic-ai-v0 agentmark-prompt-core agentmark-loader-api
```
**Package names vs. import names:**

- `agentmark-prompt-core` → `from agentmark.prompt_core import ...`
- `agentmark-loader-api` → `from agentmark.loader_api import ...`
- `agentmark-pydantic-ai-v0` → `from agentmark_pydantic_ai_v0 import ...`
For specific providers, install the optional extras:

```shell
pip install "pydantic-ai[openai]"     # OpenAI
pip install "pydantic-ai[anthropic]"  # Anthropic
pip install "pydantic-ai[gemini]"     # Google Gemini
```
## Setup

Create your AgentMark client in `agentmark_client.py`:

```python
from pathlib import Path

from dotenv import load_dotenv

from agentmark.prompt_core import FileLoader
from agentmark_pydantic_ai_v0 import (
    create_pydantic_ai_client,
    create_default_model_registry,
    PydanticAIToolRegistry,
)

load_dotenv()

# The default registry supports gpt-*, claude-*, gemini-*, and mistral-* models
model_registry = create_default_model_registry()
tool_registry = PydanticAIToolRegistry()
loader = FileLoader(base_dir=str(Path(__file__).parent.resolve()))

client = create_pydantic_ai_client(
    model_registry=model_registry,
    tool_registry=tool_registry,
    loader=loader,
)
```
## Custom Model Registry

For more control over model resolution, create a custom registry with exact names, regex patterns, or a default fallback:

```python
import re

from agentmark_pydantic_ai_v0 import PydanticAIModelRegistry

model_registry = PydanticAIModelRegistry()

# Exact matches
model_registry.register_models(
    ["gpt-4o", "gpt-4o-mini"],
    lambda name, _: f"openai:{name}",
)

# Regex pattern
model_registry.register_models(
    re.compile(r"^claude-"),
    lambda name, _: f"anthropic:{name}",
)

# Default fallback for unmatched names
model_registry.set_default(lambda name, _: name)
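The resolution order this gives you (exact names first, then regex patterns, then the default) can be sketched in plain Python. The `resolve` helper below is purely illustrative — it is not the library's implementation, just the lookup logic the registrations above describe:

```python
import re

# Hypothetical sketch of exact -> regex -> default resolution (not library code)
exact = {"gpt-4o": "openai:gpt-4o", "gpt-4o-mini": "openai:gpt-4o-mini"}
patterns = [(re.compile(r"^claude-"), lambda name: f"anthropic:{name}")]
default = lambda name: name

def resolve(name: str) -> str:
    # 1. Exact names win
    if name in exact:
        return exact[name]
    # 2. Then regex patterns, in registration order
    for pattern, factory in patterns:
        if pattern.search(name):
            return factory(name)
    # 3. Fall back to the default resolver
    return default(name)

print(resolve("gpt-4o"))             # openai:gpt-4o
print(resolve("claude-3-5-sonnet"))  # anthropic:claude-3-5-sonnet
print(resolve("llama-3"))            # llama-3 (unmatched, passed through)
```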
## Running Prompts

Load and run prompts with `run_text_prompt`:

```python
import asyncio

from agentmark_pydantic_ai_v0 import run_text_prompt
from agentmark_client import client

async def main():
    prompt = await client.load_text_prompt("greeting.prompt.mdx")
    params = await prompt.format(props={"name": "Alice"})
    result = await run_text_prompt(params)
    print(result.output)
    print(f"Tokens: {result.usage.total_tokens}")

asyncio.run(main())
```
## Object Generation

For structured output, the adapter automatically converts the prompt's JSON Schema into a Pydantic model:

```python
from agentmark_pydantic_ai_v0 import run_object_prompt
from agentmark_client import client

# Inside an async function:
prompt = await client.load_object_prompt("sentiment.prompt.mdx")
params = await prompt.format(props={"text": "This product is amazing!"})
result = await run_object_prompt(params)

print(result.output)            # Typed Pydantic model instance
print(result.output.sentiment)  # 'positive'
```
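To see what this schema-to-model conversion amounts to, here is a rough sketch using Pydantic's `create_model`. The schema, type mapping, and `Sentiment` model are illustrative assumptions, not the adapter's internals:

```python
from pydantic import create_model

# A JSON Schema like the one a sentiment prompt might declare (illustrative)
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment"],
}

# Minimal JSON Schema -> Python type mapping for this sketch
TYPES = {"string": str, "number": float, "integer": int, "boolean": bool}

# Required fields get `...` (no default); optional ones default to None
fields = {
    name: (TYPES[spec["type"]], ... if name in schema["required"] else None)
    for name, spec in schema["properties"].items()
}
Sentiment = create_model("Sentiment", **fields)

obj = Sentiment(sentiment="positive", confidence=0.98)
print(obj.sentiment)  # positive
```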
## Streaming

Stream text responses for real-time output:

```python
from agentmark_pydantic_ai_v0 import stream_text_prompt

# Inside an async function:
params = await prompt.format(props={"query": "Explain quantum computing"})
async for chunk in stream_text_prompt(params):
    print(chunk, end="", flush=True)
```
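The consumption pattern is an ordinary `async for` over text chunks, which you can also accumulate into the full response. The generator below is a stand-in for the real stream so the pattern runs locally:

```python
import asyncio

async def fake_stream():
    # Stand-in for stream_text_prompt(params): yields text chunks
    for chunk in ["Quantum ", "computing ", "explained."]:
        yield chunk

async def main() -> str:
    parts = []
    async for chunk in fake_stream():
        print(chunk, end="", flush=True)  # render each chunk as it arrives
        parts.append(chunk)
    return "".join(parts)  # full response once the stream ends

text = asyncio.run(main())  # "Quantum computing explained."
```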
## Tools

Register sync or async tool functions:

```python
from agentmark_pydantic_ai_v0 import PydanticAIToolRegistry

tool_registry = PydanticAIToolRegistry()

# Sync tool
tool_registry.register(
    "search",
    lambda args, ctx: f"Results for: {args['query']}",
)

# Async tool
async def fetch_data(args, ctx):
    return await api.get(args["url"])

tool_registry.register("fetch", fetch_data)

# Tool that receives Pydantic AI's RunContext
tool_registry.register("db_query", db_tool_fn, takes_ctx=True)
```
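Both variants share the same `(args, ctx)` shape: `args` is a dict of parsed tool arguments. The two standalone functions below are hypothetical examples of that shape (they do local work rather than call a real API, so they run as-is):

```python
import asyncio

# Hypothetical tools matching the (args, ctx) signature shown above

def search_tool(args, ctx):
    # Sync tool: receives parsed arguments as a dict
    return f"Results for: {args['query']}"

async def add_tool(args, ctx):
    # Async tool: awaitable work goes here (a real tool might call an API)
    await asyncio.sleep(0)
    return args["a"] + args["b"]

print(search_tool({"query": "agentmark"}, None))      # Results for: agentmark
print(asyncio.run(add_tool({"a": 2, "b": 3}, None)))  # 5
```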
Then reference the tools in your prompts:

```mdx
---
name: search
text_config:
  model_name: gpt-4o
  tools:
    - search
---

<System>You are a helpful search assistant.</System>
<User>Search for {props.query}</User>
```
## MCP Servers

Configure MCP servers for extended tool capabilities:

```python
from agentmark_pydantic_ai_v0 import McpServerRegistry

mcp_registry = McpServerRegistry()

# URL-based server
mcp_registry.register("search", {
    "url": "http://localhost:8000/mcp",
})

# Stdio-based server
mcp_registry.register("python-runner", {
    "command": "python",
    "args": ["-m", "mcp_server"],
    "cwd": "/app",
})

client = create_pydantic_ai_client(
    model_registry=model_registry,
    mcp_registry=mcp_registry,
    loader=loader,
)
```
Reference MCP tools in prompts with the `mcp://` prefix:

```mdx
---
name: task
text_config:
  model_name: gpt-4o
  tools:
    - mcp://search/web_search
    - mcp://python-runner/*
---
```
## Getting Started

The fastest way to scaffold a Python project:

```shell
npm create agentmark@latest my-app
# Select "Python" when prompted for a language
# Select "Pydantic AI" as the adapter
```

Then run the dev server. The CLI automatically detects Python projects via `pyproject.toml` or `agentmark_client.py`.
## Limitations

- **No image generation**: use the AI SDK adapter (TypeScript) for `experimental_generateImage`
- **No speech generation**: use the AI SDK adapter (TypeScript) for `experimental_generateSpeech`