POST /openai/v1/completions
Create completion
curl --request POST \
  --url https://api.example.com/openai/v1/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "prompt": "<string>",
  "include_stop_str_in_output": true,
  "max_completion_tokens": 123,
  "max_tokens": 123,
  "min_p": 0.5,
  "return_tokens_as_token_ids": true,
  "seed": 123,
  "skip_special_tokens": true,
  "stop": [
    "<string>"
  ],
  "stream": true,
  "stream_options": {
    "include_usage": true
  },
  "temperature": 1,
  "top_k": 123,
  "top_p": 0
}
'
{
  "choices": [
    {
      "finish_reason": "<string>",
      "index": 123,
      "text": "<string>"
    }
  ],
  "created": 123,
  "model": "<string>",
  "object": "<string>",
  "$schema": "<string>",
  "usage": {
    "completion_tokens": 123,
    "prompt_tokens": 123,
    "total_tokens": 123
  }
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
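As a sketch, the required headers can be assembled in Python. `build_auth_headers` is an illustrative helper, not part of any SDK; the token value is a placeholder.

```python
def build_auth_headers(token: str) -> dict:
    """Build the headers required by the completions endpoint.

    The Bearer scheme wraps the raw auth token, as described above.
    """
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

headers = build_auth_headers("my-auth-token")
```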

Body

application/json
model
string
required

The model to use for the completion.

prompt
string
required

The prompt to generate completions for.

include_stop_str_in_output
boolean

Whether to include the stop strings in output text. Defaults to false.

max_completion_tokens
integer<int64>

The maximum number of tokens to generate in the completion.

max_tokens
integer<int64>

The maximum number of tokens to generate in the completion.

min_p
number<double>

Sets a minimum probability threshold relative to the most likely token: tokens whose probability is below min_p times that of the most likely token are excluded from sampling.

Required range: 0 <= x <= 1
return_tokens_as_token_ids
boolean

Whether to return the generated tokens as token IDs instead of text. Defaults to false.

seed
integer<int64>

If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.

skip_special_tokens
boolean

Whether to skip special tokens in the output.

stop
string[]

An array of sequences where the API will stop generating further tokens.

stream
boolean

If true, the response will be streamed as a series of server-sent events (SSE) instead of a single JSON object.

stream_options
object

Options for the streaming response. Only set this when stream is true.
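When stream is true, the events need to be decoded client-side. Assuming the stream uses the usual SSE framing of `data: <json>` lines ending with a `data: [DONE]` sentinel (an assumption; this page does not spell out the framing), a minimal parser might look like:

```python
import json

def parse_sse_events(raw_stream: str):
    """Yield decoded JSON payloads from an SSE-framed completion stream.

    Assumes each event is a single 'data: <json>' line and that the
    stream ends with a 'data: [DONE]' sentinel (an assumption; the
    exact framing is not documented on this page).
    """
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload)

# Example: two hypothetical chunks followed by the sentinel.
raw = (
    'data: {"choices": [{"index": 0, "text": "Hel"}]}\n\n'
    'data: {"choices": [{"index": 0, "text": "lo"}]}\n\n'
    "data: [DONE]\n\n"
)
text = "".join(e["choices"][0]["text"] for e in parse_sse_events(raw))
# text == "Hello"
```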

temperature
number<double>

What sampling temperature to use, between 0 and 2. Higher values produce more random output; lower values are more deterministic.

Required range: 0 <= x <= 2
top_k
integer<int64>

Limits the model to consider only the top K most likely tokens at each step.

top_p
number<double>

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass.

Required range: x <= 1
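The sampling parameters above carry documented ranges. A hedged helper that assembles a request body and enforces those ranges might look like this; `build_completion_body` and its keyword defaults are illustrative, not part of the API:

```python
def build_completion_body(model, prompt, *, temperature=None, top_p=None,
                          min_p=None, top_k=None, max_tokens=None,
                          stop=None, seed=None, stream=None):
    """Assemble a request body for POST /openai/v1/completions.

    Enforces the documented ranges: 0 <= temperature <= 2,
    top_p <= 1, and 0 <= min_p <= 1.
    """
    if temperature is not None and not (0 <= temperature <= 2):
        raise ValueError("temperature must be in [0, 2]")
    if top_p is not None and top_p > 1:
        raise ValueError("top_p must be <= 1")
    if min_p is not None and not (0 <= min_p <= 1):
        raise ValueError("min_p must be in [0, 1]")

    body = {"model": model, "prompt": prompt}  # the two required fields
    optional = {
        "temperature": temperature, "top_p": top_p, "min_p": min_p,
        "top_k": top_k, "max_tokens": max_tokens, "stop": stop,
        "seed": seed, "stream": stream,
    }
    # Omit unset optional parameters so the server applies its defaults.
    body.update({k: v for k, v in optional.items() if v is not None})
    return body
```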

Response

Successful response - JSON when stream=false, SSE when stream=true

choices
object[]
required

The list of completion choices.

created
integer<int64>
required

The Unix timestamp (in seconds) of when the completion was created.

model
string
required

The model used for the completion.

object
string
required

The object type, which is always 'text_completion'.

$schema
string<uri>

A URL to the JSON Schema for this object.

Example:

"https://example.com/openai/schemas/Completion.json"

usage
object

Usage statistics for the completion request.
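Putting the response fields together, a non-streaming response can be decoded as in this sketch. The field layout mirrors the example at the top of the page; the concrete values are illustrative.

```python
import json

# Illustrative response body following the schema documented above.
response_json = """
{
  "choices": [{"finish_reason": "stop", "index": 0, "text": "Hello!"}],
  "created": 1700000000,
  "model": "example-model",
  "object": "text_completion",
  "usage": {"completion_tokens": 2, "prompt_tokens": 5, "total_tokens": 7}
}
"""

resp = json.loads(response_json)
first = resp["choices"][0]             # each choice carries an index
text = first["text"]                   # the generated completion text
total = resp["usage"]["total_tokens"]  # prompt + completion tokens
```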