Agentmark supports generating text with text prompts. A text prompt is defined by specifying `text_config` in the prompt's frontmatter.

## Example Configuration

```mdx example.prompt.mdx
---
name: example
text_config:
  model_name: gpt-4
---

<System>You are a math tutor that can perform calculations.</System>
<User>What's 235 * 18?</User>
```

## Tags

| Tag | Description |
| --- | --- |
| `<System>` | System-level instructions |
| `<User>` | User message |
| `<Assistant>` | Assistant message |
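
The `<Assistant>` tag can be used to seed the prompt with prior turns, for example as a few-shot demonstration. The sketch below is illustrative only; the file name and the demonstration content are assumptions, not part of the original example:

```mdx few-shot.prompt.mdx
---
name: few-shot
text_config:
  model_name: gpt-4
---

<System>You are a math tutor that can perform calculations.</System>
<User>What's 12 * 9?</User>
<Assistant>12 * 9 = 108.</Assistant>
<User>What's 235 * 18?</User>
```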

## Available Configuration

| Property | Type | Description | Optional/Required |
| --- | --- | --- | --- |
| `model_name` | string | The name of the model to use for text generation. | Required |
| `max_tokens` | number | Maximum number of tokens to generate. | Optional |
| `temperature` | number | Controls the randomness of the output; higher values produce more random outputs. | Optional |
| `max_calls` | number | Maximum number of LLM calls allowed. | Optional |
| `top_p` | number | Controls the cumulative probability for nucleus sampling. | Optional |
| `top_k` | number | Limits next-token selection to the top k tokens. | Optional |
| `presence_penalty` | number | Penalizes new tokens based on their presence in the text so far, encouraging the model to discuss new topics. | Optional |
| `frequency_penalty` | number | Penalizes new tokens based on their frequency in the text so far, reducing the likelihood of repeating the same line verbatim. | Optional |
| `stop_sequences` | string[] | Array of strings; generation stops if any of them is encountered. | Optional |
| `seed` | number | Seed value for random number generation, ensuring reproducibility. | Optional |
| `max_retries` | number | Maximum number of retries for the request in case of failures. | Optional |
| `tool_choice` | `"auto" \| "none" \| "required" \| { type: "tool", tool_name: string }` | Controls how tools are used in the generation process. | Optional |
| `tools` | `Record<string, { description: string, parameters: JSONSchema }>` | Defines the available tools and their parameters for the model to use. | Optional |
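
A single frontmatter can combine several of these options. The sketch below is illustrative: the file name and tool definition are made up, and the exact YAML layout of `tools` and `tool_choice` (shown here following the `Record<string, { description, parameters }>` shape from the table, with `parameters` as a JSON Schema) should be checked against your Agentmark version:

```mdx calculator.prompt.mdx
---
name: calculator
text_config:
  model_name: gpt-4
  temperature: 0.2
  max_tokens: 256
  stop_sequences:
    - "###"
  tool_choice: auto
  tools:
    multiply:
      description: Multiply two numbers and return the product.
      parameters:
        type: object
        properties:
          a: { type: number }
          b: { type: number }
        required: [a, b]
---

<System>You are a math tutor that can perform calculations.</System>
<User>What's 235 * 18?</User>
```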
