What You Get
Prompt Management
- Write prompts in readable Markdown + JSX
- Declarative syntax that shows exactly what your LLM sees
- Reusable components and templating
- Version control-friendly format
- Hot-reload during development
Testing & Evaluation
- Datasets for systematic testing
- Custom evaluation functions
- CLI and SDK for running experiments
- CI/CD integration
Observability
- Distributed tracing with OpenTelemetry
- Session tracking for multi-step workflows
- Token usage and cost monitoring
- Integration with AgentMark platform
Ownership & Control
- No vendor lock-in
- Full control over your data
- Works offline
- Optional cloud sync with AgentMark platform
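To make the "readable Markdown + JSX" format above concrete, here is a rough sketch of what a prompt file might look like. The frontmatter keys (`name`, `model`), the `<System>`/`<User>` tags, and the `props` templating are illustrative assumptions, not details taken from this document:

```mdx
---
name: support-reply
model: gpt-4o
---

<System>
  You are a concise customer-support assistant.
</System>

<User>
  {props.customerMessage}
</User>
```

Because the whole prompt lives in one plain-text file, it diffs cleanly in version control and shows exactly what the model will receive.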
Core Features
Readable Prompts: write prompts in Markdown with JSX components, so the file shows exactly what your LLM sees.
Why AgentMark?
For Development:
- Decouple prompts from application code
- Test prompts systematically with datasets
- Iterate quickly with hot-reload
- Works with any AI SDK (Vercel AI, Mastra, LlamaIndex)
For Production:
- Monitor performance and costs
- Track token usage
- Debug user issues
- Analyze usage patterns
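As an illustration of how decoupling prompts from application code can work, here is a minimal, self-contained TypeScript sketch that parses a prompt file's frontmatter and message tags into a plain data structure. This is not the AgentMark SDK; every name here (`parsePrompt`, the tag names, the frontmatter keys) is a hypothetical stand-in for illustration only:

```typescript
// Hypothetical sketch: parse a declarative prompt file into
// frontmatter config plus chat messages. Not the AgentMark SDK.

interface ParsedPrompt {
  config: Record<string, string>;
  messages: { role: string; content: string }[];
}

function parsePrompt(source: string): ParsedPrompt {
  // Split off a frontmatter block delimited by '---' lines.
  const match = source.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  const [, front = "", body = source] = match ?? [];

  // Parse simple `key: value` pairs from the frontmatter.
  const config: Record<string, string> = {};
  for (const line of front.split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) config[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }

  // Collect <System>…</System>-style message tags from the body.
  const messages: { role: string; content: string }[] = [];
  for (const m of body.matchAll(/<(System|User|Assistant)>([\s\S]*?)<\/\1>/g)) {
    messages.push({ role: m[1].toLowerCase(), content: m[2].trim() });
  }
  return { config, messages };
}

const example = `---
name: greet
model: gpt-4o
---
<System>You are terse.</System>
<User>Say hello.</User>`;

console.log(JSON.stringify(parsePrompt(example)));
```

Keeping prompts in their own files like this means application code only ever sees structured config and messages, which is what makes systematic testing and hot-reload practical.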
Supported Languages
| Language | Support Status |
|---|---|
| TypeScript | ✅ Supported |
| Others | Need something else? Open an Issue |
Have Questions?
We’re here to help! Choose the best way to reach us:
- Join our Discord community for quick answers and discussions
- Email us at [email protected] for support
- Schedule an Enterprise Demo to learn about our business solutions