

The Pydantic AI adapter lets you use AgentMark prompts with Pydantic AI in Python applications. It’s the recommended adapter for Python projects.

Installation

pip install agentmark-pydantic-ai-v0 agentmark-prompt-core
Package names vs import names:
  • agentmark-prompt-core → from agentmark.prompt_core import ...
  • agentmark-pydantic-ai-v0 → from agentmark_pydantic_ai_v0 import ...
ApiLoader and FileLoader both ship with agentmark-prompt-core — there’s no separate agentmark-loader-api package.
For specific providers, install the Pydantic AI provider extras you need:
pip install "pydantic-ai[openai]"     # OpenAI
pip install "pydantic-ai[anthropic]"  # Anthropic
pip install "pydantic-ai[google]"     # Google Gemini

Setup

The Python adapters don’t ship a “default” model registry — you register provider prefixes explicitly. The "<provider>:<model>" string format tells Pydantic AI which provider to use at runtime:
agentmark_client.py
import os
from dotenv import load_dotenv
from agentmark.prompt_core import ApiLoader
from agentmark_pydantic_ai_v0 import (
    create_pydantic_ai_client,
    PydanticAIModelRegistry,
)

load_dotenv()

model_registry = PydanticAIModelRegistry()
model_registry.register_models(
    ["gpt-4o", "gpt-4o-mini"],
    lambda name, opts=None: f"openai:{name}",
)
model_registry.register_models(
    ["claude-sonnet-4-20250514"],
    lambda name, opts=None: f"anthropic:{name}",
)

if os.getenv("NODE_ENV") == "development":
    loader = ApiLoader.local(
        base_url=os.getenv("AGENTMARK_BASE_URL", "http://localhost:9418")
    )
else:
    loader = ApiLoader.cloud(
        api_key=os.environ["AGENTMARK_API_KEY"],
        app_id=os.environ["AGENTMARK_APP_ID"],
    )

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
)
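The snippet above reads several environment variables via python-dotenv. A local .env might look like the following sketch — the variable names come from the code above, but every value here is a placeholder:

```shell
# Local development against `npx agentmark dev`
NODE_ENV=development
AGENTMARK_BASE_URL=http://localhost:9418

# Cloud credentials (placeholders — substitute your real values)
AGENTMARK_API_KEY=your-api-key
AGENTMARK_APP_ID=your-app-id
```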

Registering models with patterns

register_models accepts an exact string, a re.Pattern, or a list of strings. You can also set a fallback with set_default:
import re
from agentmark_pydantic_ai_v0 import PydanticAIModelRegistry

model_registry = PydanticAIModelRegistry()

# Exact matches
model_registry.register_models(
    ["gpt-4o", "gpt-4o-mini"],
    lambda name, opts=None: f"openai:{name}",
)

# Regex pattern
model_registry.register_models(
    re.compile(r"^claude-"),
    lambda name, opts=None: f"anthropic:{name}",
)

# Fallback for unmatched names
model_registry.set_default(lambda name, opts=None: name)
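The registry's internals aren't documented here, but the lookup behavior can be illustrated with a minimal standalone sketch — the SimpleRegistry class below is hypothetical, not the library's implementation. It assumes exact entries win over regex patterns, patterns are tried in registration order, and the default is a last resort:

```python
import re
from typing import Callable, Optional, Union

Creator = Callable[..., str]

class SimpleRegistry:
    """Illustrative stand-in for PydanticAIModelRegistry's matching logic."""

    def __init__(self) -> None:
        self._exact: dict[str, Creator] = {}
        self._patterns: list[tuple[re.Pattern, Creator]] = []
        self._default: Optional[Creator] = None

    def register_models(self, key: Union[str, re.Pattern, list], creator: Creator) -> None:
        if isinstance(key, re.Pattern):
            self._patterns.append((key, creator))
        elif isinstance(key, str):
            self._exact[key] = creator
        else:
            for name in key:
                self._exact[name] = creator

    def set_default(self, creator: Creator) -> None:
        self._default = creator

    def resolve(self, name: str) -> str:
        # Exact matches first, then patterns in registration order, then the default.
        if name in self._exact:
            return self._exact[name](name)
        for pattern, creator in self._patterns:
            if pattern.search(name):
                return creator(name)
        if self._default is None:
            raise KeyError(f"No model registered for {name!r}")
        return self._default(name)

registry = SimpleRegistry()
registry.register_models(["gpt-4o"], lambda name, opts=None: f"openai:{name}")
registry.register_models(re.compile(r"^claude-"), lambda name, opts=None: f"anthropic:{name}")
registry.set_default(lambda name, opts=None: name)

print(registry.resolve("gpt-4o"))                    # openai:gpt-4o
print(registry.resolve("claude-sonnet-4-20250514"))  # anthropic:claude-sonnet-4-20250514
print(registry.resolve("mistral-large"))             # mistral-large
```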

Running prompts

Load and run prompts with run_text_prompt:
import asyncio
from agentmark_pydantic_ai_v0 import run_text_prompt
from agentmark_client import client

async def main():
    prompt = await client.load_text_prompt("greeting.prompt.mdx")
    params = await prompt.format(props={"name": "Alice"})

    result = await run_text_prompt(params)
    print(result.output)
    print(f"Tokens: {result.usage.total_tokens}")

asyncio.run(main())

Object generation

For structured output, the adapter automatically converts JSON Schema to Pydantic models:
from agentmark_pydantic_ai_v0 import run_object_prompt
from agentmark_client import client

prompt = await client.load_object_prompt("sentiment.prompt.mdx")
params = await prompt.format(props={"text": "This product is amazing!"})

result = await run_object_prompt(params)
print(result.output)              # Typed Pydantic model instance
print(result.output.sentiment)    # 'positive'
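The adapter's conversion step is internal, but the core idea — turning an object-style JSON Schema into a typed Python class — can be sketched with the standard library alone. Both the schema below and the schema_to_class helper are hypothetical, used purely to illustrate the mapping:

```python
from dataclasses import make_dataclass

# Hypothetical JSON Schema, similar to what sentiment.prompt.mdx might declare.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

TYPE_MAP = {"string": str, "number": float, "integer": int, "boolean": bool}

def schema_to_class(name: str, schema: dict) -> type:
    """Build a typed class from an object-style JSON Schema (sketch only)."""
    fields = [
        (prop, TYPE_MAP.get(spec.get("type"), object))
        for prop, spec in schema["properties"].items()
    ]
    return make_dataclass(name, fields)

Sentiment = schema_to_class("Sentiment", schema)
result = Sentiment(sentiment="positive", confidence=0.97)
print(result.sentiment)  # positive
```

In the real adapter you never call a helper like this yourself — the point is only that field names and types flow from the schema into attribute access on the result.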

Streaming

Stream text responses for real-time output:
from agentmark_pydantic_ai_v0 import stream_text_prompt

params = await prompt.format(props={"query": "Explain quantum computing"})

async for chunk in stream_text_prompt(params):
    print(chunk, end="", flush=True)
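A common pattern is to display chunks as they arrive while also keeping the full text for later (logging, caching). A self-contained sketch, with fake_stream standing in for stream_text_prompt(params):

```python
import asyncio

async def fake_stream():
    # Stand-in for stream_text_prompt(params); yields text chunks.
    for chunk in ["Quantum ", "computing ", "explained."]:
        yield chunk

async def collect(stream) -> str:
    """Print chunks as they arrive while accumulating the full response."""
    parts = []
    async for chunk in stream:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

full_text = asyncio.run(collect(fake_stream()))
```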

Tools

Pass native Python tool functions as a list to create_pydantic_ai_client. The adapter derives the tool name from each function’s __name__, then matches that name against the prompt’s tools: frontmatter:
agentmark_client.py
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client

# Sync tool — name comes from `search.__name__` = "search"
def search(query: str) -> str:
    return f"Results for: {query}"

# Async tool — name comes from `fetch_data.__name__` = "fetch_data"
async def fetch_data(url: str) -> str:
    return await api.get(url)

client = create_pydantic_ai_client(
    model_registry=model_registry,
    tools=[search, fetch_data],
    loader=loader,
)
The function name in your Python code must match the name used in the prompt’s tools: frontmatter. Use pydantic_ai.Tool(function=..., name="custom-name") if you need to rename.
Then reference tools in your prompts by name:
search.prompt.mdx
---
name: search
text_config:
  model_name: gpt-4o
  tools:
    - search
---

<System>You are a helpful search assistant.</System>
<User>Search for {props.query}</User>
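The name matching described above amounts to a dictionary lookup keyed by each function's __name__. A minimal sketch of that idea — select_tools is a hypothetical helper, not part of the adapter's API:

```python
def search(query: str) -> str:
    return f"Results for: {query}"

def fetch_weather(city: str) -> str:
    return f"Sunny in {city}"

def select_tools(registered, frontmatter_tools):
    """Pick registered functions whose __name__ appears in the prompt's tools list."""
    by_name = {fn.__name__: fn for fn in registered}
    missing = [t for t in frontmatter_tools if t not in by_name]
    if missing:
        raise KeyError(f"Tools not registered: {missing}")
    return [by_name[t] for t in frontmatter_tools]

tools = select_tools([search, fetch_weather], ["search"])
print([fn.__name__ for fn in tools])  # ['search']
```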

MCP servers

MCP servers are passed via an McpServerRegistry, not a raw mcp_servers dict. Construct the registry and pass it as mcp_registry:
agentmark_client.py
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client, McpServerRegistry

mcp_registry = McpServerRegistry()
mcp_registry.register_servers({
    # URL-based server
    "search": {
        "url": "http://localhost:8000/mcp",
    },
    # Stdio-based server
    "python-runner": {
        "command": "python",
        "args": ["-m", "mcp_server"],
        "cwd": "/app",
    },
})

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
    mcp_registry=mcp_registry,
)
Reference MCP tools in prompts with the mcp:// prefix:
---
name: task
text_config:
  model_name: gpt-4o
  tools:
    - mcp://search/web_search
    - mcp://python-runner/*
---
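The reference format above is mcp://<server>/<tool>, with * selecting every tool a server exposes. How the adapter parses these isn't documented, but the shape can be sketched with a hypothetical parser:

```python
def parse_mcp_ref(ref: str) -> tuple[str, str]:
    """Split an mcp://<server>/<tool> reference; '*' selects every tool (sketch)."""
    prefix = "mcp://"
    if not ref.startswith(prefix):
        raise ValueError(f"Not an MCP reference: {ref}")
    server, _, tool = ref[len(prefix):].partition("/")
    if not server or not tool:
        raise ValueError(f"Malformed MCP reference: {ref}")
    return server, tool

print(parse_mcp_ref("mcp://search/web_search"))  # ('search', 'web_search')
print(parse_mcp_ref("mcp://python-runner/*"))    # ('python-runner', '*')
```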

Evals

Register evaluation functions to score prompt outputs during experiments. Pass an evals dictionary of plain functions to create_pydantic_ai_client. Score schemas are defined separately in agentmark.json — eval functions are connected to scores by name.
agentmark_client.py
from agentmark.prompt_core import EvalParams, EvalResult
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client

def exact_match(params: EvalParams) -> EvalResult:
    match = str(params["output"]).strip() == str(params.get("expectedOutput", "")).strip()
    return {"passed": match, "score": 1.0 if match else 0.0}

evals = {
    "exact_match": exact_match,
}

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
    evals=evals,
)
Each entry maps a score name to a sync or async function that receives EvalParams and returns EvalResult. Reference evals in your prompt frontmatter:
---
test_settings:
  dataset: ./datasets/test.jsonl
  evals:
    - exact_match
---
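Since eval functions may be sync or async, a fuzzier async variant of the exact_match example might look like this — fuzzy_match and its 0.8 threshold are illustrative choices, not part of the library:

```python
import asyncio
from difflib import SequenceMatcher

async def fuzzy_match(params: dict) -> dict:
    """Hypothetical async eval: score by string similarity instead of exact equality."""
    output = str(params["output"]).strip().lower()
    expected = str(params.get("expectedOutput", "")).strip().lower()
    score = SequenceMatcher(None, output, expected).ratio()
    return {"passed": score >= 0.8, "score": round(score, 3)}

result = asyncio.run(fuzzy_match({"output": "Hello World", "expectedOutput": "hello world"}))
print(result)  # {'passed': True, 'score': 1.0}
```

Registered under a score name (e.g. "fuzzy_match") in the evals dict, it would be referenced from test_settings.evals exactly like exact_match above.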
Learn more about evaluations

Getting started

The fastest way to scaffold a Python project:
npm create agentmark@latest my-app
# Select "Python" when prompted for language
# Select "Pydantic AI" as the adapter
Run the local dev server:
npx agentmark dev
The CLI automatically detects Python projects via pyproject.toml or agentmark_client.py.

Limitations

  • No image generation — use the AI SDK adapter (TypeScript) for experimental_generateImage.
  • No speech generation — use the AI SDK adapter (TypeScript) for experimental_generateSpeech.

Next steps

AI SDK

TypeScript adapter for Node.js

Claude Agent SDK

Agentic tasks with Claude

Prompts

Learn about prompt syntax

Observability

Monitor prompts in production
