1. Bypass Interface

Bypass Responses Protocol

Developing
POST /bypass/openai/v1/responses
This interface allows direct invocation of GPT-series models using the OpenAI Responses API protocol.
Currently supported models:
openai/gpt-5
openai/gpt-5.2
openai/gpt-5.4
openai/gpt-5.2-codex
openai/gpt-5.3-codex
openai/gpt-5-nano
openai/gpt-5.2-chat
openai/gpt-5-pro
openai/gpt-5-chat
openai/gpt-5.4-pro
openai/gpt-5-mini
openai/gpt-5.4-mini
openai/gpt-5.4-nano
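A minimal call to this endpoint can be sketched with only the standard library (the helper name is ours; the payload uses the plain-string form of `input` and a model from the list above):

```python
import json
import urllib.request

API_URL = "https://api-xmodel.nexconn.ai/bypass/openai/v1/responses"

def build_responses_request(api_key: str, model: str, text: str) -> urllib.request.Request:
    """Build a POST to the bypass endpoint carrying a plain-text `input`,
    the simplest form the request schema allows."""
    body = json.dumps({"model": model, "input": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": api_key,  # raw key in the Authorization header
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_responses_request("<api-key>", "openai/gpt-5", "Hello!")
    # Uncomment to actually send the request:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp).get("output_text"))
```

The actual send is left commented out so the snippet can be adapted without a live key.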

Request

Authorization
API Key: add an Authorization parameter to the request header.
Example:
Authorization: ********************
Body Params (application/json, required)

Parameter Schema (JSON Schema)
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "title": "Create Response Request",
  "description": "Request body parameters for creating a model response",
  "properties": {
    "background": {
      "type": "boolean",
      "description": "Whether to run the model response in the background."
    },
    "context_management": {
      "type": "array",
      "description": "Context management configuration for this request.",
      "items": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "description": "Context management entry type. Currently only 'compaction' is supported."
          },
          "compact_threshold": {
            "type": "number",
            "description": "Token threshold that triggers compaction for this entry."
          }
        },
        "required": [
          "type"
        ]
      }
    },
    "conversation": {
      "anyOf": [
        {
          "type": "string",
          "description": "Unique ID of the conversation."
        },
        {
          "type": "object",
          "properties": {
            "id": {
              "type": "string",
              "description": "Unique ID of the conversation."
            }
          },
          "required": [
            "id"
          ]
        }
      ],
      "description": "The conversation this response belongs to. Items from this conversation are prepended to `input_items`, and input/output items are automatically added to this conversation after response completion."
    },
    "include": {
      "type": "array",
      "description": "Specifies additional output data to include in the model response.",
      "items": {
        "type": "string",
        "enum": [
          "file_search_call.results",
          "web_search_call.results",
          "web_search_call.action.sources",
          "message.input_image.image_url",
          "computer_call_output.output.image_url",
          "code_interpreter_call.outputs",
          "reasoning.encrypted_content",
          "message.output_text.logprobs"
        ]
      }
    },
    "input": {
      "anyOf": [
        {
          "type": "string",
          "description": "Plain text input as user role."
        },
        {
          "type": "array",
          "description": "List of one or more input items containing different content types.",
          "items": {
            "type": "object",
            "properties": {
              "role": {
                "type": "string",
                "enum": [
                  "user",
                  "assistant",
                  "system",
                  "developer"
                ],
                "description": "Role of the message input."
              },
              "content": {
                "anyOf": [
                  {
                    "type": "string"
                  },
                  {
                    "type": "array",
                    "items": {
                      "type": "object",
                      "properties": {
                        "type": {
                          "type": "string",
                          "description": "Content type, e.g. 'input_text', 'input_image', 'input_file', etc."
                        },
                        "text": {
                          "type": "string"
                        },
                        "image_url": {
                          "type": "string"
                        },
                        "file_id": {
                          "type": "string"
                        }
                      }
                    }
                  }
                ],
                "description": "Text, image, or audio input for generating the response."
              },
              "type": {
                "type": "string",
                "default": "message",
                "description": "Input type, defaults to 'message'."
              },
              "phase": {
                "type": "string",
                "enum": [
                  "commentary",
                  "final_answer"
                ],
                "description": "Marks assistant message as intermediate commentary or final answer (for subsequent retries)."
              },
              "status": {
                "type": "string",
                "enum": [
                  "in_progress",
                  "completed",
                  "incomplete"
                ],
                "description": "Item status."
              }
            }
          }
        }
      ],
      "description": "Text, image, or file input to the model for generating the response."
    },
    "instructions": {
      "type": "string",
      "description": "System (or developer) message inserted into the model context. When used with `previous_response_id`, can easily replace the system message for a new response."
    },
    "max_output_tokens": {
      "type": "number",
      "description": "Maximum number of tokens the response can generate (including visible output tokens and reasoning tokens)."
    },
    "max_tool_calls": {
      "type": "number",
      "description": "Maximum number of tool calls in the response."
    }
  }
}
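Per the schema above, `input` may also be an array of role-tagged messages whose `content` is either a plain string or an array of typed parts. A sketch of such a payload (the prompt text and image URL are placeholders):

```python
# Illustrative payload only: the prompt text and image URL are placeholders.
payload = {
    "model": "openai/gpt-5",
    "input": [
        # Plain-string content is accepted for simple messages...
        {"role": "system", "content": "You are a terse assistant."},
        # ...while an array of typed parts mixes text and image input.
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What is in this picture?"},
                {"type": "input_image", "image_url": "https://example.com/cat.png"},
            ],
        },
    ],
    "max_output_tokens": 512,
}
```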

Request Example (Shell)
curl --location 'https://api-xmodel.nexconn.ai/bypass/openai/v1/responses' \
--header 'Authorization: <api-key>' \
--header 'Content-Type: application/json' \
--data '{
  "model": "openai/gpt-5",
  "input": "Hello, world!",
  "max_output_tokens": 1024
}'
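The schema's description of `instructions` notes that, combined with `previous_response_id`, it swaps in a new system message for a follow-up turn. Building such a follow-up payload might look like this (the helper name and IDs are ours):

```python
def follow_up_payload(prev_id: str, new_instructions: str, user_text: str) -> dict:
    """Payload for a follow-up turn: chain onto an earlier response via
    `previous_response_id` while replacing the system message via
    `instructions`, as the schema's description of `instructions` notes."""
    return {
        "model": "openai/gpt-5",
        "previous_response_id": prev_id,
        "instructions": new_instructions,
        "input": user_text,
    }
```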

Responses

🟢 200 Success
Body (application/json)

Example
{
    "id": "string",
    "object": "response",
    "created_at": 0,
    "completed_at": 0,
    "status": "completed",
    "error": {
        "code": "string",
        "message": "string"
    },
    "incomplete_details": {
        "reason": "max_output_tokens"
    },
    "output_text": "string",
    "output": [
        {
            "id": "string",
            "type": "string",
            "status": "string",
            "role": "assistant",
            "content": [
                {
                    "type": "output_text",
                    "text": "string",
                    "refusal": "string"
                }
            ],
            "name": "string",
            "arguments": "string",
            "call_id": "string"
        }
    ],
    "usage": {
        "input_tokens": 0,
        "input_tokens_details": {
            "cached_tokens": 0
        },
        "output_tokens": 0,
        "output_tokens_details": {
            "reasoning_tokens": 0
        },
        "total_tokens": 0
    },
    "conversation": {
        "id": "string"
    },
    "previous_response_id": "string",
    "model": "string",
    "instructions": "string",
    "metadata": {
        "property1": "string",
        "property2": "string"
    },
    "prompt_cache_key": "string",
    "prompt_cache_retention": "string",
    "safety_identifier": "string",
    "service_tier": "string",
    "parallel_tool_calls": true,
    "temperature": 0,
    "top_p": 0,
    "top_logprobs": 0,
    "max_output_tokens": 0,
    "max_tool_calls": 0,
    "truncation": "string",
    "user": "string",
    "background": true
}
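Given the 200 body above, the final text can be read either from the convenience field `output_text` or, when that is empty, by walking `output[].content[]` for `output_text` parts. A sketch using only the field names shown in the example:

```python
def extract_text(resp: dict) -> str:
    """Collect the response text: prefer the convenience `output_text`
    field, falling back to walking `output[].content[]` for parts of
    type `output_text`."""
    if resp.get("output_text"):
        return resp["output_text"]
    parts = []
    for item in resp.get("output", []):
        for part in item.get("content") or []:
            if part.get("type") == "output_text":
                parts.append(part.get("text", ""))
    return "".join(parts)

def total_tokens(resp: dict) -> int:
    """Total token usage, which per `usage` above includes reasoning tokens."""
    return resp.get("usage", {}).get("total_tokens", 0)
```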
Modified at 2026-04-24 10:39:56