POST /conduit/v1/chat
curl --location --request POST 'https://<base-url>/conduit/v1/chat' \
--header 'Content-Type: application/json' \
--header 'Authorization: Token <token>' \
--data-raw '{
    "prompt": "Say hello",
    "max_tokens": 256,
    "model": "<model-id>"
}'
{
  "id": "<completion-id>",
  "object": "chat.completion",
  "created": 1698980165,
  "choices": [
    {"role": "assistant", "content": "hello world"}
  ]
}

Body

prompt
string
The user prompt to give the model.
system_prompt
string
The system prompt to give the model.
history
string[]
The conversation history to give the model. Optional.
max_tokens
int
The maximum number of tokens to generate. Defaults to 256 tokens.
top_k
int
The top-k sampling parameter: sample only from the k most likely tokens. Defaults to 50.
top_p
float
The top p sampling parameter. Defaults to 0.9.
temperature
float
The temperature sampling parameter, with 0 being greedy sampling. Defaults to 1.0.
presence_penalty
float
The presence penalty parameter. Defaults to 1.0.
frequency_penalty
float
The frequency penalty parameter. Defaults to 1.0.
stream
boolean
Whether to stream the response or not. Defaults to false.
model
string
The id of the model to use for inference.
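The body parameters above can be assembled programmatically. Below is a minimal Python sketch; the base URL and token are assumptions to be replaced with your deployment's values, and the documented defaults are filled in for any parameter you omit:

```python
import json
import urllib.request

API_BASE = "https://api.example.com"  # assumption: replace with your base URL
API_TOKEN = "<token>"                 # assumption: replace with your API token

def build_chat_body(prompt, model, system_prompt=None, history=None,
                    max_tokens=256, top_k=50, top_p=0.9, temperature=1.0,
                    presence_penalty=1.0, frequency_penalty=1.0, stream=False):
    """Assemble a /conduit/v1/chat request body using the documented defaults."""
    body = {
        "prompt": prompt,
        "model": model,
        "max_tokens": max_tokens,
        "top_k": top_k,
        "top_p": top_p,
        "temperature": temperature,
        "presence_penalty": presence_penalty,
        "frequency_penalty": frequency_penalty,
        "stream": stream,
    }
    # Optional fields are only sent when provided.
    if system_prompt is not None:
        body["system_prompt"] = system_prompt
    if history is not None:
        body["history"] = history
    return body

def chat(prompt, model, **kwargs):
    """POST the body to the chat endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}/conduit/v1/chat",
        data=json.dumps(build_chat_body(prompt, model, **kwargs)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Token {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```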

Response

id
string
The id of the completion.
object
string
The type of the completion. Always “chat.completion”.
created
int
The unix timestamp of the completion.
choices
choice[]
The completion choices. Each choice is a JSON object with the fields “role” (either “assistant” or “user”) and “content” (the text of the choice).
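A response shaped as described above can be unpacked like this (a sketch; the field names follow the table, and the sample payload is illustrative):

```python
def assistant_messages(completion):
    """Return the content of every assistant choice in a chat.completion object."""
    if completion.get("object") != "chat.completion":
        raise ValueError(f"unexpected object type: {completion.get('object')!r}")
    return [c["content"] for c in completion["choices"]
            if c["role"] == "assistant"]

# Illustrative payload matching the documented response fields.
sample = {
    "id": "<completion-id>",
    "object": "chat.completion",
    "created": 1698980165,
    "choices": [{"role": "assistant", "content": "hello world"}],
}
```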