object_config

The `object_config` is defined in the object prompt's frontmatter and specifies the schema for the object the model should generate.
| Tag | Description |
| --- | --- |
| `<System>` | System-level instructions |
| `<User>` | User message |
| `<Assistant>` | Assistant message |
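As a sketch, a prompt body using these tags might look like the following (the exact file layout and wording are assumptions, not taken from this reference):

```
<System>You are a helpful assistant that extracts structured data.</System>
<User>Extract the user's name from: "Hi, I'm Ada."</User>
```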
| Property | Type | Description | Optional/Required |
| --- | --- | --- | --- |
| `model_name` | string | The name of the model to use for object generation. | Required |
| `max_tokens` | number | Maximum number of tokens to generate. | Optional |
| `temperature` | number | Controls the randomness of the output; higher values result in more random outputs. | Optional |
| `max_calls` | number | Maximum number of LLM calls allowed. | Optional |
| `top_p` | number | Controls the cumulative probability for nucleus sampling. | Optional |
| `top_k` | number | Limits the next token selection to the top k tokens. | Optional |
| `presence_penalty` | number | Penalizes new tokens based on their presence in the text so far, encouraging the model to discuss new topics. | Optional |
| `frequency_penalty` | number | Penalizes new tokens based on their frequency in the text so far, reducing the likelihood of repeating the same line verbatim. | Optional |
| `stop_sequences` | string[] | Array of strings; generation stops if any of them is encountered. | Optional |
| `seed` | number | Seed value for random number generation, ensuring reproducibility. | Optional |
| `max_retries` | number | Maximum number of retries for the request in case of failures. | Optional |
| `schema` | JSONSchema | A schema defining the expected structure of the model's output. | Required |
| `schema_name` | string | A name for the schema. | Optional |
| `schema_description` | string | A description of the schema. | Optional |
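Putting the properties together, a minimal frontmatter block might look like the following (the model name and schema contents are illustrative assumptions; only `model_name` and `schema` are required):

```yaml
object_config:
  model_name: gpt-4o        # assumed model name; use any supported model
  temperature: 0.5          # optional sampling control
  max_tokens: 256           # optional output cap
  schema:                   # required: JSON Schema for the expected output
    type: object
    properties:
      name:
        type: string
        description: The user's name
    required:
      - name
  schema_name: user
  schema_description: A user extracted from the message
```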