POST /openai/v1/chat/completions
Create chat completion
curl --request POST \
  --url https://api.example.com/openai/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "messages": [
    {
      "content": "<unknown>",
      "role": "user",
      "tool_call_id": "<string>",
      "tool_calls": [
        {
          "function": {
            "arguments": "<string>",
            "name": "<string>"
          },
          "type": "function",
          "id": "<string>"
        }
      ]
    }
  ],
  "model": "<string>",
  "include_stop_str_in_output": true,
  "logprobs": true,
  "max_completion_tokens": 123,
  "max_tokens": 123,
  "min_p": 0.5,
  "return_tokens_as_token_ids": true,
  "seed": 123,
  "skip_special_tokens": true,
  "stop": [
    "<string>"
  ],
  "stream": true,
  "stream_options": {
    "include_usage": true
  },
  "temperature": 1,
  "tools": [
    {
      "function": {
        "name": "<string>",
        "description": "<string>",
        "parameters": {}
      },
      "type": "function",
      "cache_control": {}
    }
  ],
  "top_k": 123,
  "top_p": 0
}
'
{
  "choices": [
    {
      "finish_reason": "<string>",
      "index": 123,
      "delta": {
        "content": "<unknown>",
        "role": "user",
        "tool_call_id": "<string>",
        "tool_calls": [
          {
            "function": {
              "arguments": "<string>",
              "name": "<string>"
            },
            "type": "function",
            "id": "<string>"
          }
        ]
      },
      "message": {
        "content": "<unknown>",
        "role": "user",
        "tool_call_id": "<string>",
        "tool_calls": [
          {
            "function": {
              "arguments": "<string>",
              "name": "<string>"
            },
            "type": "function",
            "id": "<string>"
          }
        ]
      }
    }
  ],
  "created": 123,
  "model": "<string>",
  "object": "<string>",
  "$schema": "<string>",
  "usage": {
    "completion_tokens": 123,
    "prompt_tokens": 123,
    "total_tokens": 123
  }
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
messages
object[]
required

An array of messages representing the conversation.

Minimum array length: 1
model
string
required

The model to use for the completion.
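The two required fields above are enough for a minimal request. As a sketch using only the standard library (the base URL and token below are placeholders, and the model name must be one your deployment actually exposes):

```python
import json
import urllib.request

# Placeholder values -- substitute your real endpoint and auth token.
BASE_URL = "https://api.example.com/openai/v1"
TOKEN = "<token>"

# A minimal body: only `messages` (at least one entry) and `model` are required.
body = {
    "model": "<string>",
    "messages": [
        {"role": "user", "content": "Hello!"}
    ],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment against a live endpoint
```

All optional parameters described below are added as extra top-level keys in the same body.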

include_stop_str_in_output
boolean

Whether to include the stop strings in output text. Defaults to false.

logprobs
boolean

Whether to return log probabilities of the output tokens.

max_completion_tokens
integer<int64>

The maximum number of tokens to generate in the completion.

max_tokens
integer<int64>

The maximum number of tokens to generate in the completion.

min_p
number<double>

Sets a minimum probability threshold relative to the most likely token: tokens whose probability falls below min_p times the top token's probability are excluded from sampling.

Required range: 0 <= x <= 1
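To illustrate the threshold, here is a sketch of how min-p filtering typically works in a sampler (the distribution is made up for illustration, and the exact server-side behavior may differ):

```python
# Hypothetical next-token distribution (illustrative values, not API output).
probs = {"the": 0.50, "a": 0.20, "an": 0.04, "zebra": 0.01}

def min_p_filter(probs, min_p):
    """Keep tokens with probability >= min_p * (highest token probability)."""
    cutoff = min_p * max(probs.values())
    return {tok: p for tok, p in probs.items() if p >= cutoff}

kept = min_p_filter(probs, min_p=0.1)  # cutoff = 0.1 * 0.50 = 0.05
# "an" (0.04) and "zebra" (0.01) fall below the 0.05 cutoff and are dropped.
```

Higher min_p values prune the tail of the distribution more aggressively.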
return_tokens_as_token_ids
boolean

Whether to return the generated tokens as token IDs instead of text. Defaults to false.

seed
integer<int64>

If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.

skip_special_tokens
boolean

Whether to skip special tokens in the output.

stop
string[]

An array of sequences where the API will stop generating further tokens.

stream
boolean

If true, the response will be streamed as a series of events instead of a single JSON object.

stream_options
object

Options for streaming response. Only set this when you set stream: true.
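With stream: true, the response arrives as server-sent events: each event is a data: line carrying one chunk whose choices[].delta holds the incremental content, terminated by a data: [DONE] sentinel. A minimal parser for that framing might look like this (run here against a canned transcript; a real client would iterate over the HTTP response lines instead):

```python
import json

def parse_sse(lines):
    """Yield the JSON payload of each `data:` event, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload)

# Canned example stream (illustrative chunks, not real API output).
transcript = [
    'data: {"choices": [{"index": 0, "delta": {"role": "assistant", "content": "Hel"}}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "lo!"}}]}',
    'data: [DONE]',
]

# Concatenate the content deltas to reconstruct the full message text.
text = "".join(
    chunk["choices"][0]["delta"].get("content", "")
    for chunk in parse_sse(transcript)
)
```

With include_usage set in stream_options, a final chunk carrying the usage object is sent before [DONE].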

temperature
number<double>

What sampling temperature to use, between 0 and 2.

Required range: 0 <= x <= 2
tools
object[]

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.
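Each tools entry pairs "type": "function" with a function object holding the name, description, and a JSON Schema in parameters. An illustrative sketch (get_weather is a hypothetical function, not part of this API):

```python
import json

# Hypothetical tool definition -- `get_weather` is an example name only.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

body = {
    "model": "<string>",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [weather_tool],
}

# If the model decides to call the tool, the response message carries
# `tool_calls`; each entry's function.arguments is a JSON-encoded string
# that the client must decode before invoking the function.
example_call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
}
args = json.loads(example_call["function"]["arguments"])
```

The function's result is then sent back as a new message with role "tool" and the matching tool_call_id, as shown in the message schema above.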

top_k
integer<int64>

Limits the model to consider only the top K most likely tokens at each step.

top_p
number<double>

An alternative to sampling with temperature, called nucleus sampling: the model considers only the smallest set of tokens whose cumulative probability reaches top_p.

Required range: x <= 1
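As a sketch of how nucleus sampling typically truncates the distribution (illustrative values; server-side details may differ), tokens are ranked by probability and only the smallest prefix whose cumulative probability reaches top_p is kept:

```python
# Hypothetical next-token distribution (illustrative values, not API output).
probs = {"the": 0.50, "a": 0.30, "an": 0.15, "zebra": 0.05}

def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability reaches top_p."""
    kept, total = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        total += p
        if total >= top_p:
            break
    return kept

kept = top_p_filter(probs, top_p=0.9)
# 0.50 + 0.30 = 0.80 < 0.9, so "an" is also kept (cumulative 0.95 >= 0.9);
# "zebra" is dropped.
```

Lower top_p values restrict sampling to fewer, more likely tokens.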

Response

Successful response - JSON when stream=false, SSE when stream=true

choices
object[]
required

A list of chat completions from the model.

Minimum array length: 1
created
integer<int64>
required

The Unix timestamp (in seconds) when the completion was created.

model
string
required

The model used for the chat completion.

object
string
required

The object type: 'chat.completion' for a complete response, or 'chat.completion.chunk' for a streamed chunk.

$schema
string<uri>

A URL to the JSON Schema for this object.

Example:

"https://example.com/openai/schemas/ChatCompletion.json"

usage
object

Usage statistics for the chat completion.
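The counts are additive: total_tokens should equal prompt_tokens plus completion_tokens. A small sketch reading them from a parsed response (illustrative values):

```python
# Illustrative usage object as it would appear in a parsed response body.
usage = {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42}

prompt = usage["prompt_tokens"]
completion = usage["completion_tokens"]
# Sanity check: the total is the sum of prompt and completion tokens.
assert usage["total_tokens"] == prompt + completion
```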