

Run from the Dashboard

Open any prompt in the Dashboard editor, fill in your input variables, and click Run. Results stream back in real time.

[Animation: Running a prompt in the Dashboard. The user fills input variables in the right-hand panel, clicks Run, and the response streams into the output pane while tokens, cost, and model information appear in the footer.]

Every run is automatically traced. Navigate to Traces to see the execution timeline, token usage, cost, and model information for each run.

Run from the Playground

The Playground lets you run the same prompt across multiple models and parameter configurations side by side. Compare outputs, tweak the prompt text per variant, and apply the winning configuration back to your editor.

Next steps

Running Experiments: Test prompts against datasets

Generation Types: Text, objects, images, and audio

Version Control: Track changes and roll back to previous versions

Integrations: Vercel AI SDK, Pydantic AI, Mastra, and more

Have Questions?

We’re here to help! Choose the best way to reach us: