
Create chat completion

POST /v1/chat/completions
Original reference: https://platform.openai.com/docs/api-reference/chat/create

Request Parameters

Authorization
Add an Authorization header whose value is your token appended after "Bearer ".
Example:
Authorization: Bearer ********************
Body parameters (application/json)
messages
array [object {3}], required
A list of messages comprising the conversation so far.
  role
  string, required
  The role of the message's author: system, assistant, or user.
  content
  string, required
  The contents of the message.
  name
  string, optional
  An optional name for the participant. Provides the model information to differentiate between participants of the same role.
model
string, required
ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.
frequency_penalty
number, optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Default: 0
logit_bias
object, optional
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
logprobs
boolean, optional
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. This option is currently not available on the gpt-4-vision-preview model.
Default: false
top_logprobs
integer, optional
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
max_tokens
integer, optional
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
n
integer, optional
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
Default: 1
presence_penalty
number, optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Default: 0
response_format
object, optional
An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.
seed
integer, optional
This feature is in Beta. If specified, the system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed; refer to the system_fingerprint response field to monitor changes in the backend.
stop
string or array, optional
Up to 4 sequences where the API will stop generating further tokens.
stream
boolean, optional
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
Default: false
temperature
number, optional
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
Default: 1
top_p
number, optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
Default: 1
tools
array [object {2}], optional
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for (see the sketch after this parameter list).
  type
  string, required
  The type of the tool. Currently, only function is supported.
  function
  object, required
  The function definition (its name, description, and a JSON Schema of its parameters).
tool_choice
string or object, optional
Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present.
user
string, optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
function_call
object, optional
Deprecated in favor of tool_choice.
functions
array[string], optional
Deprecated in favor of tools.
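As referenced in the tools description above, the Python sketch below shows one way to combine tools and tool_choice in a request. It assumes the endpoint and Bearer-token auth documented above; the function name get_weather and its parameter schema are hypothetical and used only for illustration.

# Minimal sketch of a tool-calling request (Python). The endpoint and
# Bearer-token auth follow the documentation above; "get_weather" and its
# schema are hypothetical, for illustration only.
import requests

body = {
    "model": "claude-3-7-sonnet-20250219",
    "messages": [
        {"role": "user", "content": "What is the weather in Shanghai?"}
    ],
    "tools": [
        {
            "type": "function",  # only "function" is currently supported
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Force the model to call get_weather instead of replying with plain text.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}

resp = requests.post(
    "https://api.aigateway.work/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_GATEWAY_TOKEN"},
    json=body,
    timeout=60,
)
print(resp.json()["choices"][0]["message"])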
Example
      {
        "model": "claude-3-7-sonnet-20250219",
        "stream": true,
        "stream_options": {
          "include_usage": true
        },
        "messages": [
          {
            "role": "system",
            "content": "You are a helpful assistant."
          },
          {
            "role": "user",
            "content": "你好!"
          }
        ]
      }

Example Code

Request example (Shell)
curl --location --request POST 'https://api.aigateway.work/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer ********************' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "stream": true,
  "stream_options": {
    "include_usage": true
  },
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "你好!"
    }
  ]
}'
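The same streaming request can also be issued from Python. Because the gateway mirrors the OpenAI chat completions API (see the reference link above), a minimal sketch is to point the official openai SDK at the gateway's base URL; the client setup and token placeholder below are assumptions for illustration, not part of this document.

# Minimal sketch: the same streaming request via the openai Python SDK,
# assuming the gateway is OpenAI-compatible as described above.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.work/v1",
    api_key="YOUR_GATEWAY_TOKEN",  # sent as "Authorization: Bearer <token>"
)

stream = client.chat.completions.create(
    model="claude-3-7-sonnet-20250219",
    stream=True,
    stream_options={"include_usage": True},
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你好!"},
    ],
)

for chunk in stream:
    # Chunks arrive as data-only SSE deltas; when include_usage is set, the
    # final chunk carries usage and an empty choices list, hence the guard.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)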

Response

200 OK
application/json
Body
id
string, required
A unique identifier for the chat completion.
object
string, required
The object type, which is always chat.completion.
created
integer, required
The Unix timestamp (in seconds) of when the chat completion was created.
model
string, required
The model used for the chat completion.
choices
array [object {4}], required
A list of chat completion choices. Can be more than one if n is greater than 1.
  index
  integer, optional
  The index of the choice in the list of choices.
  message
  object, optional
  A chat completion message generated by the model.
  logprobs
  object or null, optional
  Log probability information for the choice.
  finish_reason
  string, optional
  The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.
usage
object, optional
Usage statistics for the completion request.
  prompt_tokens
  integer, required
  Number of tokens in the prompt.
  completion_tokens
  integer, required
  Number of tokens in the generated completion.
  total_tokens
  integer, required
  Total number of tokens used in the request (prompt + completion).
system_fingerprint
string, optional
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
Example
      {
        "id": "chatcmpl-8uxQ3O1u0Z79h7u9JjZ4FOEZapsko",
        "object": "chat.completion",
        "created": 1708585467,
        "model": "gpt-3.5-turbo-0125",
        "choices": [
          {
            "index": 0,
            "message": {
              "role": "assistant",
              "content": "你好!有什么我可以帮助你的吗?"
            },
            "logprobs": null,
            "finish_reason": "stop"
          }
        ],
        "usage": {
          "prompt_tokens": 20,
          "completion_tokens": 18,
          "total_tokens": 38
        },
        "system_fingerprint": "fp_cbdb91ce3f"
      }
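As a usage note, a client typically branches on finish_reason and records the token counters from a response like the one above; the helper below is a small sketch, assuming the response JSON has already been parsed into a Python dict.

# Sketch: inspecting a parsed (non-streaming) chat.completion response.
def summarize(resp: dict) -> None:
    choice = resp["choices"][0]
    reason = choice["finish_reason"]
    if reason == "stop":
        # Natural stop point or a provided stop sequence was reached.
        print(choice["message"]["content"])
    elif reason == "length":
        print("Truncated: max_tokens or the model's context limit was reached.")
    elif reason == "tool_calls":
        print("The model requested a tool call.")
    elif reason == "content_filter":
        print("Content was omitted by a content filter.")

    usage = resp.get("usage", {})
    print("tokens:", usage.get("prompt_tokens"), "+",
          usage.get("completion_tokens"), "=", usage.get("total_tokens"))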
      