The Claude Agent SDK adapter lets you run AgentMark prompts as agentic tasks using Anthropic’s Claude Agent SDK. It is suited for autonomous execution with tool use, budget controls, and built-in tracing, and is available for both TypeScript and Python.
Installation
npm install @agentmark-ai/claude-agent-sdk-adapter @anthropic-ai/claude-agent-sdk
pip install agentmark-claude-agent-sdk agentmark-prompt-core claude-agent-sdk
Setup
Create your AgentMark client with a ClaudeAgentModelRegistry. The registry supports maxThinkingTokens for extended thinking:
import { createAgentMarkClient, ClaudeAgentModelRegistry } from "@agentmark-ai/claude-agent-sdk-adapter";
const modelRegistry = new ClaudeAgentModelRegistry()
.registerModels(["claude-sonnet-4-20250514"])
.registerModels(["claude-opus-4-20250514"], {
maxThinkingTokens: 10000,
});
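// fileLoader is assumed to be a prompt loader (for example, a file-based loader pointed at your prompts directory) created elsewhere in your setup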
export const client = createAgentMarkClient({
loader: fileLoader,
modelRegistry,
});
Create your AgentMark client in agentmark_client.py:
from pathlib import Path
from dotenv import load_dotenv
from agentmark.prompt_core import FileLoader
from agentmark_claude_agent_sdk import (
create_claude_agent_client,
ClaudeAgentModelRegistry,
ClaudeAgentToolRegistry,
ClaudeAgentAdapterOptions,
ModelConfig,
)
load_dotenv()
# Default registry passes model names through directly
model_registry = ClaudeAgentModelRegistry.create_default()
# Or configure with extended thinking
model_registry = ClaudeAgentModelRegistry()
model_registry.register_models(
["claude-sonnet-4-20250514"],
lambda name, _: ModelConfig(model=name)
)
model_registry.register_models(
["claude-opus-4-20250514"],
lambda name, _: ModelConfig(model=name, max_thinking_tokens=10000)
)
loader = FileLoader(base_dir=str(Path(__file__).parent.resolve()))
client = create_claude_agent_client(
model_registry=model_registry,
loader=loader,
)
Running Prompts
The Claude Agent SDK adapter runs prompts as agentic tasks. The agent executes autonomously, using tools and multi-turn reasoning:
import { client } from "./agentmark.client";
const prompt = await client.loadTextPrompt("task.prompt.mdx");
const input = await prompt.format({
props: { task: "Analyze the auth module and suggest improvements" },
});
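// runAgent runs the agent loop on the formatted input and resolves once the agent has finished all of its turns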
const result = await runAgent(input);
console.log(result.text);
import asyncio
from agentmark_claude_agent_sdk import run_text_prompt
from agentmark_client import client
async def main():
    prompt = await client.load_text_prompt("task.prompt.mdx")
    params = await prompt.format(props={
        "task": "Analyze the auth module and suggest improvements"
    })
    result = await run_text_prompt(params)
    print(result.output)

asyncio.run(main())
Adapter Options
Configure agent behavior through adapter options. In TypeScript, pass adapterOptions per call to prompt.format; in Python, set adapter_options when creating the client:
const input = await prompt.format({
props: { task: "Refactor the database layer" },
adapterOptions: {
permissionMode: "auto",
maxTurns: 10,
cwd: "/path/to/project",
maxBudgetUsd: 5.00,
allowedTools: ["read", "write", "bash"],
disallowedTools: ["browser"],
systemPromptPreset: "default",
onWarning: (warning) => {
console.warn("Agent warning:", warning);
},
},
});
from agentmark_claude_agent_sdk import ClaudeAgentAdapterOptions
client = create_claude_agent_client(
model_registry=model_registry,
loader=loader,
adapter_options=ClaudeAgentAdapterOptions(
permission_mode="bypassPermissions",
max_turns=10,
cwd="/path/to/project",
max_budget_usd=5.00,
allowed_tools=["read", "write", "bash"],
disallowed_tools=["browser"],
system_prompt_preset=False,
on_warning=lambda w: print(f"Warning: {w}"),
),
)
| Option | TypeScript | Python | Description |
|---|---|---|---|
| Permission mode | permissionMode | permission_mode | How the agent requests permission for actions |
| Max turns | maxTurns | max_turns | Maximum number of agentic reasoning turns |
| Working directory | cwd | cwd | Working directory for file system operations |
| Budget limit | maxBudgetUsd | max_budget_usd | Maximum USD budget for the task |
| Allowed tools | allowedTools | allowed_tools | Whitelist of tools the agent can use |
| Disallowed tools | disallowedTools | disallowed_tools | Blacklist of tools the agent cannot use |
| System prompt | systemPromptPreset | system_prompt_preset | System prompt preset to use |
| Warning handler | onWarning | on_warning | Callback for agent warnings |
Object Generation
For structured output, use object prompts:
import { client } from "./agentmark.client";
import { z } from "zod";
const prompt = await client.loadObjectPrompt("extract.prompt.mdx", {
schema: z.object({
sentiment: z.enum(["positive", "negative", "neutral"]),
confidence: z.number(),
}),
});
const input = await prompt.format({
props: { text: "This product is amazing!" },
});
const result = await runAgent(input);
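// result.object contains the structured output matching the schema registered above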
console.log(result.object);
from agentmark_claude_agent_sdk import run_object_prompt
prompt = await client.load_object_prompt("extract.prompt.mdx")
params = await prompt.format(props={"text": "This product is amazing!"})
result = await run_object_prompt(params)
print(result.output)
Tools
Configure custom tools; the adapter converts them to MCP servers internally:
import { createAgentMarkClient, ClaudeAgentModelRegistry, ClaudeAgentToolRegistry } from "@agentmark-ai/claude-agent-sdk-adapter";
const tools = new ClaudeAgentToolRegistry({
weather: {
description: "Get current weather for a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" },
},
required: ["location"],
},
execute: async ({ location }) => {
return `The weather in ${location} is sunny and 72°F`;
},
},
});
export const client = createAgentMarkClient({
loader: fileLoader,
modelRegistry,
tools,
});
from agentmark_claude_agent_sdk import ClaudeAgentToolRegistry
tool_registry = ClaudeAgentToolRegistry()
# Sync tool
tool_registry.register(
"weather",
lambda args, ctx: f"The weather in {args['location']} is sunny and 72°F"
)
# Async tool
async def search_docs(args, ctx):
return await search(args["query"])
tool_registry.register("search", search_docs)
client = create_claude_agent_client(
model_registry=model_registry,
tool_registry=tool_registry,
loader=loader,
)
Then reference tools in your prompts:
---
name: task
text_config:
model_name: claude-sonnet-4-20250514
tools:
- weather
---
<System>You are a helpful assistant with access to weather data.</System>
<User>{props.task}</User>
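Running a prompt that references tools works the same as any other text prompt; the agent decides when to call the registered tools during its turns. A minimal sketch, assuming the prompt above is saved as task.prompt.mdx and reusing the client and runAgent pattern shown earlier:
import { client } from "./agentmark.client";

const prompt = await client.loadTextPrompt("task.prompt.mdx");
const input = await prompt.format({
  props: { task: "What should I pack for a trip to Paris today?" },
});

// The agent may invoke the registered weather tool while completing the task
const result = await runAgent(input);
console.log(result.text);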
Tracing
Enable tracing using the /traced export:
import { createAgentMarkClient, ClaudeAgentModelRegistry } from "@agentmark-ai/claude-agent-sdk-adapter/traced";
const client = createAgentMarkClient({
loader: fileLoader,
modelRegistry,
});
// All prompt runs are now automatically traced
const prompt = await client.loadTextPrompt("task.prompt.mdx");
const input = await prompt.format({ props: { task: "..." } });
const result = await runAgent(input);
The Python adapter integrates with OpenTelemetry for tracing via hooks:
from agentmark_claude_agent_sdk import (
create_telemetry_hooks,
TelemetryConfig,
)
telemetry_config = TelemetryConfig(
is_enabled=True,
prompt_name="my-task",
props={"task": "..."},
function_id="task-handler",
)
hooks = create_telemetry_hooks(telemetry_config)
Learn more in the Observability documentation.
Getting Started (Python)
Scaffold a Python project with the Claude Agent SDK adapter:
npm create agentmark@latest my-app
# Select "Python" when prompted for language
# Select "Claude Agent SDK" as the adapter
Run the dev server:
Limitations
- No image generation: use the AI SDK adapter for experimental_generateImage
- No speech generation: use the AI SDK adapter for experimental_generateSpeech
- No streaming: results are returned after the agent completes all turns