The Pydantic AI adapter allows you to use AgentMark prompts with Pydantic AI in Python applications. This is the recommended adapter for Python projects.
Installation
pip install agentmark-pydantic-ai-v0 agentmark-prompt-core agentmark-loader-api
Package names vs import names:
agentmark-prompt-core → from agentmark.prompt_core import ...
agentmark-loader-api → from agentmark.loader_api import ...
agentmark-pydantic-ai-v0 → from agentmark_pydantic_ai_v0 import ...
For specific providers, install the optional extras:
pip install "pydantic-ai[openai]" # OpenAI
pip install "pydantic-ai[anthropic]" # Anthropic
pip install "pydantic-ai[gemini]" # Google Gemini
Setup
Create your AgentMark client in agentmark_client.py:
from pathlib import Path
from dotenv import load_dotenv
from agentmark.prompt_core import FileLoader
from agentmark_pydantic_ai_v0 import (
create_pydantic_ai_client,
create_default_model_registry,
)
load_dotenv()
# Default registry supports gpt-*, claude-*, gemini-*, mistral-*
model_registry = create_default_model_registry()
loader = FileLoader(base_dir=str(Path(__file__).parent.resolve()))

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
)
Custom Model Registry
For more control over model resolution, create a custom registry with exact names, regex patterns, or a default fallback:
from agentmark_pydantic_ai_v0 import PydanticAIModelRegistry
import re
model_registry = PydanticAIModelRegistry()

# Exact match
model_registry.register_models(
    ["gpt-4o", "gpt-4o-mini"],
    lambda name, _: f"openai:{name}",
)

# Regex pattern
model_registry.register_models(
    re.compile(r"^claude-"),
    lambda name, _: f"anthropic:{name}",
)

# Default fallback for unmatched names
model_registry.set_default(lambda name, _: name)
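The resolution order implied above (exact names first, then regex patterns, then the default fallback) can be sketched with a minimal stand-in registry. `ToyRegistry` and its `resolve` method are hypothetical illustrations, not the adapter's actual implementation:

```python
import re

class ToyRegistry:
    """Minimal stand-in showing exact-match, regex, and default resolution."""

    def __init__(self):
        self._exact = {}     # model name -> resolver
        self._patterns = []  # (compiled regex, resolver) pairs
        self._default = None

    def register_models(self, names_or_pattern, resolver):
        if isinstance(names_or_pattern, re.Pattern):
            self._patterns.append((names_or_pattern, resolver))
        else:
            for name in names_or_pattern:
                self._exact[name] = resolver

    def set_default(self, resolver):
        self._default = resolver

    def resolve(self, name):
        # Exact matches win over patterns; the default is the last resort.
        if name in self._exact:
            return self._exact[name](name, None)
        for pattern, resolver in self._patterns:
            if pattern.search(name):
                return resolver(name, None)
        if self._default is not None:
            return self._default(name, None)
        raise KeyError(f"No model registered for {name!r}")

registry = ToyRegistry()
registry.register_models(["gpt-4o"], lambda name, _: f"openai:{name}")
registry.register_models(re.compile(r"^claude-"), lambda name, _: f"anthropic:{name}")
registry.set_default(lambda name, _: name)

print(registry.resolve("gpt-4o"))          # openai:gpt-4o
print(registry.resolve("claude-3-haiku"))  # anthropic:claude-3-haiku
print(registry.resolve("mistral-large"))   # mistral-large
```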
Running Prompts
Load and run prompts using run_text_prompt:
import asyncio
from agentmark_pydantic_ai_v0 import run_text_prompt
from agentmark_client import client
async def main():
    prompt = await client.load_text_prompt("greeting.prompt.mdx")
    params = await prompt.format(props={"name": "Alice"})
    result = await run_text_prompt(params)
    print(result.output)
    print(f"Tokens: {result.usage.total_tokens}")

asyncio.run(main())
Object Generation
For structured output, the adapter automatically converts JSON Schema to Pydantic models:
from agentmark_pydantic_ai_v0 import run_object_prompt
from agentmark_client import client
prompt = await client.load_object_prompt("sentiment.prompt.mdx")
params = await prompt.format(props={"text": "This product is amazing!"})
result = await run_object_prompt(params)
print(result.output)            # Typed Pydantic model instance
print(result.output.sentiment)  # 'positive'
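The conversion itself is handled by the adapter, but the underlying idea (each JSON Schema property becomes a typed field on a generated class) can be illustrated with the standard library alone. The schema and the `schema_to_dataclass` helper below are hypothetical, not adapter API:

```python
from dataclasses import make_dataclass

# A simplified JSON Schema, as an object prompt might declare
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string"},
        "confidence": {"type": "number"},
    },
}

TYPE_MAP = {"string": str, "number": float, "integer": int, "boolean": bool}

def schema_to_dataclass(name, schema):
    """Map each JSON Schema property to a typed field on a generated class."""
    fields = [
        (prop, TYPE_MAP[spec["type"]])
        for prop, spec in schema["properties"].items()
    ]
    return make_dataclass(name, fields)

Sentiment = schema_to_dataclass("Sentiment", schema)
result = Sentiment(sentiment="positive", confidence=0.97)
print(result.sentiment)  # positive
```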
Streaming
Stream text responses for real-time output:
from agentmark_pydantic_ai_v0 import stream_text_prompt
params = await prompt.format(props={"query": "Explain quantum computing"})

async for chunk in stream_text_prompt(params):
    print(chunk, end="", flush=True)
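If you also need the complete response after streaming, accumulate the chunks as they arrive. The `mock_stream` generator below is a stand-in for `stream_text_prompt`, used only to keep the sketch self-contained:

```python
import asyncio

async def mock_stream():
    # Stands in for stream_text_prompt(params)
    for chunk in ["Quantum ", "computing ", "explained."]:
        yield chunk

async def main():
    pieces = []
    async for chunk in mock_stream():
        print(chunk, end="", flush=True)  # real-time output
        pieces.append(chunk)              # keep chunks for the full text
    return "".join(pieces)

full_text = asyncio.run(main())
```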
Tools
Pass native Python tool functions as a dictionary to create_pydantic_ai_client. Each key is the tool name referenced in your prompts, and the value is the function:
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client
# Sync tool
def search(query: str) -> str:
    return f"Results for: {query}"

# Async tool
async def fetch_data(url: str) -> str:
    return await api.get(url)

client = create_pydantic_ai_client(
    model_registry=model_registry,
    tools={"search": search, "fetch": fetch_data},
    loader=loader,
)
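Because both sync and async functions are valid tools, any caller has to distinguish them at call time. A common pattern for this, sketched here with `inspect.iscoroutinefunction` (this is an illustration, not the adapter's actual dispatch code):

```python
import asyncio
import inspect

def search(query: str) -> str:
    return f"Results for: {query}"

async def fetch_data(url: str) -> str:
    await asyncio.sleep(0)  # stands in for a real HTTP call
    return f"Data from {url}"

async def call_tool(tool, *args):
    """Await async tools; call sync tools directly."""
    if inspect.iscoroutinefunction(tool):
        return await tool(*args)
    return tool(*args)

async def main():
    tools = {"search": search, "fetch": fetch_data}
    a = await call_tool(tools["search"], "python")
    b = await call_tool(tools["fetch"], "https://example.com")
    return a, b

results = asyncio.run(main())
```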
Then reference tools in your prompts:
---
name: search
text_config:
  model_name: gpt-4o
  tools:
    - search
---
<System>You are a helpful search assistant.</System>
<User>Search for {props.query}</User>
MCP Servers
Configure MCP servers by passing a plain dictionary to create_pydantic_ai_client. Each key is the server name, and the value is its connection configuration:
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client
client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
    mcp_servers={
        # URL-based server
        "search": {
            "url": "http://localhost:8000/mcp",
        },
        # Stdio-based server
        "python-runner": {
            "command": "python",
            "args": ["-m", "mcp_server"],
            "cwd": "/app",
        },
    },
)
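The two configuration shapes are distinguishable by their keys: URL-based servers carry "url", stdio-based servers carry "command". A hedged sketch of that dispatch follows; the `classify_server` helper is hypothetical, not part of the adapter:

```python
def classify_server(name, config):
    """Pick a transport based on which keys the config dict carries."""
    if "url" in config:
        return ("http", config["url"])
    if "command" in config:
        return ("stdio", [config["command"], *config.get("args", [])])
    raise ValueError(f"Server {name!r} needs either 'url' or 'command'")

servers = {
    "search": {"url": "http://localhost:8000/mcp"},
    "python-runner": {"command": "python", "args": ["-m", "mcp_server"], "cwd": "/app"},
}

kinds = {name: classify_server(name, cfg) for name, cfg in servers.items()}
print(kinds["search"])         # ('http', 'http://localhost:8000/mcp')
print(kinds["python-runner"])  # ('stdio', ['python', '-m', 'mcp_server'])
```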
Reference MCP tools in prompts with the mcp:// prefix:
---
name: task
text_config:
  model_name: gpt-4o
  tools:
    - mcp://search/web_search
    - mcp://python-runner/*
---
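A reference like mcp://search/web_search names a (server, tool) pair, and mcp://python-runner/* selects every tool on that server. Parsing the prefix can be sketched with plain string handling; `parse_mcp_ref` is a hypothetical helper, not adapter API:

```python
def parse_mcp_ref(ref):
    """Split an mcp://<server>/<tool> reference; '*' means all tools."""
    if not ref.startswith("mcp://"):
        return None  # a plain native tool name, not an MCP reference
    server, _, tool = ref[len("mcp://"):].partition("/")
    return server, tool

print(parse_mcp_ref("mcp://search/web_search"))  # ('search', 'web_search')
print(parse_mcp_ref("mcp://python-runner/*"))    # ('python-runner', '*')
print(parse_mcp_ref("search"))                   # None
```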
Getting Started
The fastest way to scaffold a Python project:
npm create agentmark@latest my-app
# Select "Python" when prompted for language
# Select "Pydantic AI" as the adapter
Then run the dev server from your project root; the CLI automatically detects Python projects via pyproject.toml or agentmark_client.py.
Limitations
No image generation — Use the AI SDK adapter (TypeScript) for experimental_generateImage
No speech generation — Use the AI SDK adapter (TypeScript) for experimental_generateSpeech
Next Steps
AI SDK TypeScript adapter for Node.js
Claude Agent SDK Agentic tasks with Claude
Prompts Learn about prompt syntax
Observability Monitor prompts in production