The Python adapters don’t ship a “default” model registry — you register provider prefixes explicitly. The "<provider>:<model>" string format tells Pydantic AI which provider to use at runtime:
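For orientation, registration might look roughly like the sketch below. The `ModelRegistry` class name and `register_models` method are assumptions here, not confirmed API, so check your adapter version's actual exports:

agentmark_client.py
```python
from agentmark_pydantic_ai_v0 import ModelRegistry  # assumed export; verify against your version

model_registry = ModelRegistry()

# Map model names used in prompt frontmatter to provider-prefixed
# Pydantic AI model strings, e.g. "gpt-4o" -> "openai:gpt-4o".
model_registry.register_models(["gpt-4o", "gpt-4o-mini"], lambda name: f"openai:{name}")
```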
Pass native Python tool functions as a list to `create_pydantic_ai_client`. The adapter derives the tool name from each function's `__name__`, then matches that name against the prompt's `tools:` frontmatter:
agentmark_client.py
```python
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client

# Sync tool — name comes from `search.__name__` = "search"
def search(query: str) -> str:
    return f"Results for: {query}"

# Async tool — name comes from `fetch_data.__name__` = "fetch_data"
# (`api` stands in for your own async HTTP client)
async def fetch_data(url: str) -> str:
    return await api.get(url)

client = create_pydantic_ai_client(
    model_registry=model_registry,
    tools=[search, fetch_data],
    loader=loader,
)
```
The function name in your Python code must match the name used in the prompt's `tools:` frontmatter. Use `pydantic_ai.Tool(function=..., name="custom-name")` if you need to rename.
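For example, to expose `search` under a different name (a minimal sketch; "web-search" is an arbitrary illustrative name, and this assumes the adapter accepts `Tool` instances alongside plain functions in the same `tools` list):

```python
from pydantic_ai import Tool

def search(query: str) -> str:
    return f"Results for: {query}"

# Prompts now reference this tool as "web-search",
# regardless of the function's __name__.
web_search = Tool(function=search, name="web-search")

client = create_pydantic_ai_client(
    model_registry=model_registry,
    tools=[web_search],
    loader=loader,
)
```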
Then reference tools in your prompts by name:
search.prompt.mdx
```mdx
---
name: search
text_config:
  model_name: gpt-4o
  tools:
    - search
---
<System>You are a helpful search assistant.</System>

<User>Search for {props.query}</User>
```
Register evaluation functions to score prompt outputs during experiments. Pass an `evals` dictionary of plain functions to `create_pydantic_ai_client`. Score schemas are defined separately in `agentmark.json`; eval functions are connected to scores by name.
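For reference, the score declaration in `agentmark.json` might look like the minimal sketch below. The exact schema keys are assumptions, so check the file your AgentMark setup generates; the only grounded detail is that the score's name must match the key you register on the client:

agentmark.json
```json
{
  "scores": [
    {
      "name": "exact_match"
    }
  ]
}
```

With the score declared, register the matching function on the client: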
agentmark_client.py
```python
from agentmark.prompt_core import EvalParams, EvalResult
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client

def exact_match(params: EvalParams) -> EvalResult:
    match = str(params["output"]).strip() == str(params.get("expectedOutput", "")).strip()
    return {"passed": match, "score": 1.0 if match else 0.0}

evals = {
    "exact_match": exact_match,
}

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
    evals=evals,
)
```
Each entry maps a score name to a sync or async function that receives `EvalParams` and returns an `EvalResult`. Reference evals in your prompt frontmatter:
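A sketch of what that reference might look like, assuming evals are listed by name under a top-level `evals` key (the key's exact name and placement may differ across AgentMark versions):

search.prompt.mdx
```mdx
---
name: search
text_config:
  model_name: gpt-4o
evals:
  - exact_match
---
<System>You are a helpful search assistant.</System>

<User>Search for {props.query}</User>
```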