The Pydantic AI adapter allows you to use AgentMark prompts with Pydantic AI in Python applications. This is the recommended adapter for Python projects.
Pass native Python tool functions as a dictionary to create_pydantic_ai_client. Each key is the tool name referenced in your prompts, and the value is the function:
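A minimal sketch of that dictionary. The `search` function body here is a hypothetical stand-in, and the `tools=` keyword is assumed by analogy with the `evals=` parameter shown below; a real tool would call an actual search backend.

```python
# Hypothetical tool implementation -- replace with a real search backend.
def search(query: str) -> str:
    """Return search results for the given query."""
    return f"Results for: {query}"

# Each key is the tool name referenced in prompt frontmatter; the value
# is the plain Python function to invoke.
tools = {
    "search": search,
}
```

Pass this dictionary to `create_pydantic_ai_client` alongside your `model_registry` and `loader`.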
```mdx
---
name: search
text_config:
  model_name: gpt-4o
  tools:
    - search
---
<System>You are a helpful search assistant.</System>
<User>Search for {props.query}</User>
```
Configure MCP servers by passing a plain dictionary to create_pydantic_ai_client. Each key is the server name, and the value is its connection configuration:
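A sketch of such a dictionary. The server name and connection keys below are hypothetical (a stdio-style `command`/`args` shape is assumed); check the AgentMark documentation for the exact configuration schema your version expects.

```python
# Hypothetical MCP server entry: the key is the server name, the value
# is its connection configuration.
mcp_servers = {
    "docs": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"],
    },
}
```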
Register evaluation functions to score prompt outputs during experiments. Pass an evals dictionary of plain functions to create_pydantic_ai_client. Score schemas are defined separately in agentmark.json — eval functions are connected to scores by name.
agentmark_client.py
```python
from agentmark.prompt_core import EvalParams, EvalResult
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client

def exact_match(params: EvalParams) -> EvalResult:
    match = str(params["output"]).strip() == str(params.get("expectedOutput", "")).strip()
    return {"passed": match, "score": 1.0 if match else 0.0}

evals = {
    "exact_match": exact_match,
}

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
    evals=evals,
)
```
Each entry maps a score name to a sync or async function that receives EvalParams and returns EvalResult. Reference evals in your prompt frontmatter:
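A sketch, assuming evals are listed by name under an `evals` key in the frontmatter (the exact key may differ in your AgentMark version; the names must match the keys of the evals dictionary registered above):

```mdx
---
name: search
text_config:
  model_name: gpt-4o
evals:
  - exact_match
---
<System>You are a helpful search assistant.</System>
<User>Search for {props.query}</User>
```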