
The Mastra adapter lets you use AgentMark prompts with Mastra’s agentic workflow framework.

Installation

npm install @agentmark-ai/mastra-v0-adapter @mastra/core @ai-sdk/openai

Setup

Create your AgentMark client with a MastraModelRegistry. Use .registerProviders() to register AI SDK provider packages; Mastra uses the same @ai-sdk/* providers under the hood, so provider-prefixed model IDs such as "openai/gpt-4o" resolve automatically:
agentmark.client.ts
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

const modelRegistry = new MastraModelRegistry();
modelRegistry.registerProviders({ openai, anthropic });

// Or register models explicitly:
// modelRegistry.registerModels(["gpt-4o"], (name) => openai(name));

export const client = createAgentMarkClient({
  loader, // your prompt loader, configured elsewhere in your project
  modelRegistry,
});
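The lookup that registerProviders() enables can be sketched like this (a simplified illustration of the "provider/model" convention, not the registry's actual implementation):

```typescript
// Hypothetical sketch: the prefix before "/" selects a registered provider
// function, which is then called with the remainder as the model name.
type ProviderFn = (modelName: string) => { provider: string; model: string };

const providers: Record<string, ProviderFn> = {};

function registerProviders(map: Record<string, ProviderFn>): void {
  Object.assign(providers, map);
}

function resolveModel(modelId: string) {
  const slash = modelId.indexOf("/");
  if (slash === -1) throw new Error(`Expected "provider/model", got "${modelId}"`);
  const providerName = modelId.slice(0, slash);
  const modelName = modelId.slice(slash + 1);
  const provider = providers[providerName];
  if (!provider) throw new Error(`No provider registered for "${providerName}"`);
  return provider(modelName);
}

// Stand-ins for the @ai-sdk/* provider factories:
registerProviders({
  openai: (name) => ({ provider: "openai", model: name }),
  anthropic: (name) => ({ provider: "anthropic", model: name }),
});

console.log(resolveModel("openai/gpt-4o")); // { provider: 'openai', model: 'gpt-4o' }
```

In the real adapter, the provider function is the @ai-sdk/* factory itself (e.g. openai("gpt-4o")), so registering a provider once covers all of its models.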

Running prompts

Mastra prompts go through four steps:

1. formatAgent() returns an AgentConfig.
2. Construct a new Agent(agentConfig).
3. formatMessages() (async) returns a [messages, options] tuple.
4. agent.generate(messages, options) runs the prompt.

For example:
import { client } from "./agentmark.client";
import { Agent } from "@mastra/core/agent";

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const agentConfig = await prompt.formatAgent({
  props: { name: "Alice" },
});

const agent = new Agent(agentConfig);
const [messages, options] = await agentConfig.formatMessages();
const result = await agent.generate(messages, options);

console.log(result.text);
formatMessages() is async, so always await it. The returned tuple is [messages, options]; agent.generate() and agent.stream() require both.

Object generation

For structured output, use object prompts. The schema lives in your .prompt.mdx frontmatter (object_config.schema), not as a second argument to loadObjectPrompt():
import { client } from "./agentmark.client";
import { Agent } from "@mastra/core/agent";

const prompt = await client.loadObjectPrompt("sentiment.prompt.mdx");
const agentConfig = await prompt.formatAgent({
  props: { text: "This product is amazing!" },
});

const agent = new Agent(agentConfig);
const [messages, options] = await agentConfig.formatMessages();
const result = await agent.generate(messages, options);

console.log(result.object);
// { sentiment: 'positive', confidence: 0.95 }
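A matching sentiment.prompt.mdx might look like the following. This is a hedged sketch: the doc only specifies that the schema lives at object_config.schema in the frontmatter, so the exact YAML shape and field names here (model_name, the JSON-Schema-style properties) are assumptions modeled on the weather prompt below.

```mdx
---
name: sentiment
object_config:
  model_name: openai/gpt-4o
  schema:
    type: object
    properties:
      sentiment:
        type: string
      confidence:
        type: number
    required:
      - sentiment
      - confidence
---

<System>Classify the sentiment of the user's text.</System>
<User>{props.text}</User>
```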

Streaming

Stream responses using agent.stream():
import { Agent } from "@mastra/core/agent";

const prompt = await client.loadTextPrompt("story.prompt.mdx");
const agentConfig = await prompt.formatAgent({
  props: { topic: "space exploration" },
});

const agent = new Agent(agentConfig);
const [messages, options] = await agentConfig.formatMessages();
const stream = await agent.stream(messages, options);

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

Tools

Mastra tools use the tool() helper from the ai v4 package with a parameters: key (not inputSchema:, which is the AI SDK v5 name). This matches Mastra's internals:
agentmark.client.ts
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";
import { tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get current weather for a location",
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }) => {
    return `The weather in ${location} is sunny and 72°F`;
  },
});

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  tools: {
    weather: weatherTool,
  },
});
Then reference tools in your prompts:
weather.prompt.mdx
---
name: weather
text_config:
  model_name: openai/gpt-4o
  tools:
    - weather
---

<System>You are a helpful weather assistant.</System>
<User>What's the weather in {props.location}?</User>
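At runtime, you load and run this prompt exactly as in "Running prompts" above; when the model decides to call the tool, the adapter looks it up by name in the tools map passed to createAgentMarkClient() and runs its execute() with the model's arguments. A self-contained sketch of that dispatch (hypothetical names, not the adapter's internals):

```typescript
// Hypothetical sketch of tool dispatch: the model emits a tool call by
// name plus arguments; the runtime finds the tool and executes it.
type ToolDef = {
  description: string;
  execute: (args: { location: string }) => Promise<string>;
};

const tools: Record<string, ToolDef> = {
  weather: {
    description: "Get current weather for a location",
    execute: async ({ location }) => `The weather in ${location} is sunny and 72°F`,
  },
};

async function dispatchToolCall(name: string, args: { location: string }): Promise<string> {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(args);
}

dispatchToolCall("weather", { location: "Tokyo" }).then((reply) => {
  console.log(reply); // "The weather in Tokyo is sunny and 72°F"
});
```

The tool's return value is fed back to the model, which then writes its final answer; your calling code still just reads result.text.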

MCP servers

Configure MCP servers for extended capabilities:
agentmark.client.ts
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";

export const client = createAgentMarkClient({
  loader,
  modelRegistry,
  mcpServers: {
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/files"],
    },
  },
});

Limitations

  • No image generation — use the AI SDK adapter for experimental_generateImage.
  • No speech generation — use the AI SDK adapter for experimental_generateSpeech.

Next steps

  • Prompts: learn about prompt syntax
  • Testing: test your prompts with datasets
  • Observability: monitor your agents in production
  • Other integrations: explore other AI frameworks
