# Python Client Setup

The AgentMark client is configured in `agentmark_client.py`. It wires your prompts to AI models, tools, and a prompt loader. This file is auto-generated by `npm create agentmark@latest` when you select Python.
## Installation

```bash
pip install agentmark-pydantic-ai-v0 agentmark-loader-api
```
## Configuration

```python
import os

from agentmark_pydantic_ai_v0 import (
    create_pydantic_ai_client,
    create_default_model_registry,
)
from agentmark.loader_api import ApiLoader

if os.getenv("AGENTMARK_ENV") == "development":
    loader = ApiLoader.local(
        base_url=os.getenv("AGENTMARK_BASE_URL", "http://localhost:9418")
    )
else:
    loader = ApiLoader.cloud(
        api_key=os.environ["AGENTMARK_API_KEY"],
        app_id=os.environ["AGENTMARK_APP_ID"],
    )

model_registry = create_default_model_registry()

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
)
```
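The environment branch above can be sketched as a plain function, which makes the selection logic easy to test in isolation. The function name and dict-based interface here are illustrative, not part of the AgentMark API:

```python
def choose_loader_mode(env: dict) -> str:
    """Mirror the config branch: 'local' in development, 'cloud' otherwise.

    Cloud mode requires both AGENTMARK_API_KEY and AGENTMARK_APP_ID,
    matching the os.environ[...] lookups (which raise KeyError) above.
    """
    if env.get("AGENTMARK_ENV") == "development":
        return "local"
    missing = [k for k in ("AGENTMARK_API_KEY", "AGENTMARK_APP_ID") if k not in env]
    if missing:
        raise KeyError(f"missing required settings: {missing}")
    return "cloud"
```

Failing fast on missing cloud credentials keeps misconfiguration errors at startup rather than at the first prompt load.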
## Model Registry

`create_default_model_registry()` auto-resolves model names to providers:

| Prefix | Provider |
|---|---|
| `gpt-*` | OpenAI |
| `claude-*` | Anthropic |
| `gemini-*` | Google |

Model names in the registry must match the `model_name` in your prompt frontmatter.
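Prefix-based resolution works roughly like this (a standalone sketch of the lookup shown in the table; the real registry's internals may differ):

```python
# Illustrative prefix table mirroring the one documented above.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
}

def resolve_provider(model_name: str) -> str:
    """Map a model name to its provider by matching a known prefix."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model_name.startswith(prefix):
            return provider
    raise ValueError(f"no provider registered for model {model_name!r}")
```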
## Prompt Loading

The loader determines how prompts are fetched at runtime:

```python
# Local — loads from dev server (development)
loader = ApiLoader.local(base_url="http://localhost:9418")

# Cloud — loads from AgentMark CDN (production)
loader = ApiLoader.cloud(
    api_key=os.environ["AGENTMARK_API_KEY"],
    app_id=os.environ["AGENTMARK_APP_ID"],
)
```

Prompts are cached client-side with a 60-second TTL.
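Conceptually, that client-side cache behaves like a small TTL map: a hit within 60 seconds returns the stored prompt, and anything older is refetched. A minimal sketch (the loader's actual cache is internal; the injectable clock here is purely for illustration and testing):

```python
import time

class TTLCache:
    """Tiny time-to-live cache: entries expire `ttl` seconds after insertion."""

    def __init__(self, ttl: float = 60.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock  # injectable so tests can advance time manually
        self._store = {}

    def set(self, key, value):
        # Record the value with its absolute expiry time.
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self.clock() >= expires_at:
            # Expired: evict and report a miss, forcing a refetch upstream.
            del self._store[key]
            return default
        return value
```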
## Running Prompts

```python
from agentmark_client import client
from agentmark_pydantic_ai_v0 import run_text_prompt

# Inside an async function:
prompt = await client.load_text_prompt("agentmark/greeting.prompt.mdx")
params = await prompt.format(props={"name": "Alice"})
result = await run_text_prompt(params)
print(result.text)
```
## Dev Server

Start the Python dev server for local development. This starts the local API server on port 9418 and the dashboard on port 3000. See Dev Server for configuration options.
## Evals

You can register evaluation functions to score prompt outputs during experiments. Pass an `evals` dictionary of plain functions:
```python
from agentmark.prompt_core import EvalParams, EvalResult

def exact_match(params: EvalParams) -> EvalResult:
    return {"passed": params["output"] == params.get("expectedOutput")}

evals = {"exact_match": exact_match}

client = create_pydantic_ai_client(
    model_registry=model_registry,
    loader=loader,
    evals=evals,
)
```
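Because eval functions are ordinary callables taking and returning plain dicts, they can be unit-tested without the client. As a hedged illustration, a hypothetical fuzzy-match eval built on the standard library's `difflib` (the function name and 0.8 threshold are assumptions, not AgentMark conventions):

```python
from difflib import SequenceMatcher

def fuzzy_match(params: dict) -> dict:
    """Hypothetical eval: score output by string similarity to the expected output.

    Uses the same params/result dict shape as the exact_match example.
    """
    output = params.get("output") or ""
    expected = params.get("expectedOutput") or ""
    score = SequenceMatcher(None, output, expected).ratio()
    return {"score": round(score, 3), "passed": score >= 0.8}
```

Registering it works the same way: `evals = {"fuzzy_match": fuzzy_match}`.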
Score schemas are defined separately in `agentmark.json` and deployed to the platform. Eval functions are connected to scores by name.
See Evaluations for the full guide on writing eval functions and configuring score schemas.
## Full Reference

For all configuration options, see Client Config.
## Have Questions?

We’re here to help! Choose the best way to reach us: