# Loaders
AgentMark provides two loader implementations for fetching prompts: `ApiLoader` for API-based loading and `FileLoader` for static file loading.
## Overview
| Loader | Package | Use Case |
|---|---|---|
| `ApiLoader` | `@agentmark-ai/loader-api` | Cloud deployment or local development with the dev server |
| `FileLoader` | `@agentmark-ai/loader-file` | Self-hosted/static deployment with pre-built prompts |
## ApiLoader
The `ApiLoader` fetches prompts from the AgentMark API (cloud) or a local development server.
### Installation

```bash
npm install @agentmark-ai/loader-api
```
### Cloud Mode (Production)
Use cloud mode when deploying to production with the AgentMark platform:
```ts
import { ApiLoader } from "@agentmark-ai/loader-api";

const loader = ApiLoader.cloud({
  apiKey: process.env.AGENTMARK_API_KEY!,
  appId: process.env.AGENTMARK_APP_ID!,
  baseUrl: "https://api.agentmark.co", // optional, this is the default
});
```
Configuration:
| Option | Type | Required | Description |
|---|---|---|---|
| `apiKey` | `string` | Yes | Your AgentMark API key |
| `appId` | `string` | Yes | Your AgentMark application ID |
| `baseUrl` | `string` | No | API base URL (default: `https://api.agentmark.co`) |
### Local Mode (Development)

Use local mode during development with the `agentmark dev` server:
```ts
import { ApiLoader } from "@agentmark-ai/loader-api";

const loader = ApiLoader.local({
  baseUrl: "http://localhost:9418",
});
```
Configuration:
| Option | Type | Required | Description |
|---|---|---|---|
| `baseUrl` | `string` | Yes | Local dev server URL (default port is 9418) |
### Usage with Client
```ts
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v4-adapter";
import { ApiLoader } from "@agentmark-ai/loader-api";
import { openai } from "@ai-sdk/openai";

// Choose loader based on environment
const loader = process.env.NODE_ENV === "production"
  ? ApiLoader.cloud({
      apiKey: process.env.AGENTMARK_API_KEY!,
      appId: process.env.AGENTMARK_APP_ID!,
    })
  : ApiLoader.local({
      baseUrl: "http://localhost:9418",
    });

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o-mini"], (name) => openai(name));

const client = createAgentMarkClient({
  loader,
  modelRegistry,
});

// Load and use prompts
const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
```
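Once loaded, the prompt can be formatted with input props before being passed to your model call; this mirrors the `format` call shown in the FileLoader usage example later on this page:

```ts
// Format the loaded prompt with input props (same call as in the
// FileLoader usage example below)
const input = await prompt.format({ props: { name: "Alice" } });
```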
### Caching
The `ApiLoader` includes built-in caching with a default TTL of 60 seconds. You can customize caching behavior when loading prompts:
```ts
// With custom cache TTL
const ast = await loader.load("prompt.prompt.mdx", "text", {
  cache: { ttl: 1000 * 60 * 5 }, // 5 minutes
});

// Disable caching (always fetch fresh; renamed to avoid redeclaring `ast`)
const freshAst = await loader.load("prompt.prompt.mdx", "text", {
  cache: false,
});
```
### Loading Datasets
The `ApiLoader` can also stream datasets for experiments:
```ts
const stream = await loader.loadDataset("my-dataset.jsonl");
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(value.input, value.expected_output);
}
```
## FileLoader

The `FileLoader` loads pre-built prompts from JSON files generated by `agentmark build`. Use it for self-hosted deployments where you don't want runtime API calls.
### Installation

```bash
npm install @agentmark-ai/loader-file
```
### Building Prompts
First, compile your prompts using the CLI:
```bash
agentmark build --out ./dist/agentmark
```
This creates JSON files containing pre-parsed ASTs:
```
dist/agentmark/
  manifest.json
  greeting.prompt.json
  nested/
    helper.prompt.json
```
### Usage
```ts
import { FileLoader } from "@agentmark-ai/loader-file";

// Point to the build output directory
const loader = new FileLoader("./dist/agentmark");
```
Configuration:
| Parameter | Type | Description |
|---|---|---|
| `builtDir` | `string` | Path to the directory containing built prompt JSON files |
### Path Resolution

The `FileLoader` accepts prompt paths with or without extensions:
```ts
// All of these work:
await client.loadTextPrompt("greeting");
await client.loadTextPrompt("greeting.prompt");
await client.loadTextPrompt("greeting.prompt.mdx");
```
### Usage with Client
```ts
import { createAgentMarkClient, VercelAIModelRegistry } from "@agentmark-ai/ai-sdk-v4-adapter";
import { FileLoader } from "@agentmark-ai/loader-file";
import { openai } from "@ai-sdk/openai";

const loader = new FileLoader("./dist/agentmark");

const modelRegistry = new VercelAIModelRegistry()
  .registerModels(["gpt-4o-mini"], (name) => openai(name));

const client = createAgentMarkClient({
  loader,
  modelRegistry,
});

// Load pre-built prompts
const prompt = await client.loadTextPrompt("greeting");
const input = await prompt.format({ props: { name: "Alice" } });
```
### Loading Datasets

The `FileLoader` can also load dataset files (`.jsonl`):
```ts
const stream = await loader.loadDataset("test-data.jsonl");
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(value.input, value.expected_output);
}
```
### Security

The `FileLoader` includes path traversal protection (a brief sketch follows the list below):
- Rejects absolute paths
- Validates that resolved paths stay within the base directory
- Prevents access to files outside the build directory
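As a minimal sketch of what this means in practice, reusing the `client` from the usage example above. The paths are hypothetical, and it is an assumption here that rejected paths surface as thrown errors:

```ts
// Hypothetical unsafe paths; both resolve outside the ./dist/agentmark
// build directory, so the loader rejects them.
// Assumption: rejections surface as thrown errors.
for (const path of ["/etc/passwd", "../../outside-build-dir"]) {
  try {
    await client.loadTextPrompt(path);
  } catch (err) {
    console.error(`Rejected unsafe path: ${path}`, err);
  }
}
```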
## Choosing a Loader
| Scenario | Recommended Loader |
|---|---|
| Production with AgentMark platform | `ApiLoader.cloud()` |
| Local development | `ApiLoader.local()` |
| Self-hosted/edge deployment | `FileLoader` |
| Serverless functions (cold start optimization) | `FileLoader` |
| Air-gapped environments | `FileLoader` |
### Trade-offs
#### ApiLoader (Cloud)
- Prompts managed via AgentMark platform
- Real-time updates without redeployment
- Requires network connectivity
- Built-in caching
#### ApiLoader (Local)
- Fast development iteration
- Hot reloading with `agentmark dev`
- No authentication required
#### FileLoader
- Zero network latency
- Works offline/air-gapped
- Requires rebuild for prompt changes
- Smaller bundle (no API client code)