There are three main approaches to migrating your existing LLM application to AgentMark:
## 1. Using Serialization

The fastest way to migrate is to use AgentMark's serialization utilities to convert your existing completion parameters into AgentMark files:
```typescript
import fs from "fs";
import { serialize } from "@puzzlet/agentmark";

// Your existing completion params
const params = {
  model: "gpt-4",
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello!" }
  ],
  temperature: 0.7
};

// Convert to AgentMark
const mdx = serialize(params, "my-prompt");

// Save to file
fs.writeFileSync("my-prompt.prompt.mdx", mdx);
```
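To see what this conversion amounts to, here is a minimal self-contained sketch of the params-to-MDX mapping (`toAgentMark` is a hypothetical stand-in for illustration, not the library's `serialize`; it only mirrors the output shape shown later in this guide):

```typescript
// Hypothetical illustration of the completion-params -> AgentMark mapping.
// NOT the library implementation; it only sketches the output shape.
type Message = { role: "system" | "user" | "assistant"; content: string };
type CompletionParams = { model: string; messages: Message[]; temperature?: number };

function toAgentMark(params: CompletionParams, name: string): string {
  // Optional model settings go under metadata.model.settings in the frontmatter.
  const settings = params.temperature !== undefined
    ? `\n    settings:\n      temperature: ${params.temperature}`
    : "";
  const frontmatter = `---\nname: ${name}\nmetadata:\n  model:\n    name: ${params.model}${settings}\n---`;

  // Map each chat role to its AgentMark component tag.
  const tags: Record<Message["role"], string> = {
    system: "System",
    user: "User",
    assistant: "Assistant",
  };
  const body = params.messages
    .map((m) => `<${tags[m.role]}>${m.content}</${tags[m.role]}>`)
    .join("\n");

  return `${frontmatter}\n\n${body}\n`;
}

const mdx = toAgentMark(
  {
    model: "gpt-4",
    messages: [{ role: "system", content: "You are a helpful assistant" }],
    temperature: 0.7,
  },
  "my-prompt"
);
console.log(mdx);
```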
## 2. Importing Markdown Files
If you have existing markdown files containing prompts (without frontmatter), you can convert them to AgentMark format by importing them into your prompt file:
```mdx
---
name: example
metadata:
  model:
    name: gpt-4
    settings:
      temperature: 0.7
---

import ExistingSystemPrompt from "./existing-system-prompt.md";
import ExistingUserPrompt from "./existing-user-prompt.md";

<System>
  <ExistingSystemPrompt />
</System>

<User>
  <ExistingUserPrompt />
</User>
```
## 3. Manual Migration
For more control, you can manually migrate your prompts following these steps:
- Create a new `.prompt.mdx` file
- Add the frontmatter configuration:
```mdx
---
name: my-prompt
metadata:
  model:
    name: gpt-4
    settings:
      temperature: 0.7
---
```
- Convert messages to AgentMark components:

```mdx
<System>Your system message here</System>
<User>Your user message here</User>
<Assistant>Your assistant message here</Assistant>
```
- Replace hardcoded values with props:

```mdx
<User>Hello {props.name}!</User>
```
## Example Migration

Before (OpenAI format):
```typescript
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: `Hello ${username}!` }
  ],
  temperature: 0.7
});
```
After (AgentMark):
```mdx
---
name: greeting
metadata:
  model:
    name: gpt-4
    settings:
      temperature: 0.7
---

<System>You are a helpful assistant</System>
<User>Hello {props.username}!</User>
```
```typescript
import { runInference, load } from "@puzzlet/agentmark";

const prompt = await load("./greeting.prompt.mdx");
const result = await runInference(prompt, { username: "Alice" });
```
## Best Practices
- Start with a small subset of prompts to test the migration
- Use serialization when possible to quickly convert existing prompts
- Manually migrate complex prompts with custom logic
- Make use of reusable components to keep your prompts clean and organized
- Test prompts after migration to ensure behavior matches
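For example, shared instructions can live in their own file and be imported wherever they are needed, following the same import pattern shown in the "Importing Markdown Files" section above (the file names here are hypothetical):

```mdx
---
name: support-reply
metadata:
  model:
    name: gpt-4
---

import SharedGuidelines from "./shared-guidelines.mdx";

<System>
  <SharedGuidelines />
</System>

<User>{props.question}</User>
```

Keeping common guidelines in one file means a single edit propagates to every prompt that imports it.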
## Have Questions?

We're here to help! Reach out if you run into issues during your migration.