
    FAQ#

    Answers to the most common questions from users

    🚀 Getting Started#

    How do I get started with Nexconn AI services?#

    Getting started is simple: just four steps.

    Step 1: Register an account
    Visit the Nexconn Developer Console.
    Click the Register button and fill in basic information.
    Verify your email or phone number.

    Step 2: Real-name verification (required)
    Individual users: provide ID card information.
    Enterprise users: provide a business license and other documents.
    Verification is required before purchasing services.

    Step 3: Purchase services and obtain an API Key
    Log in to the Nexconn console and select Large Model Services > Service Purchase.
    After purchasing and topping up, enter the large model console.
    Create a new API Key: https://console-xmodel.nexconn.ai/apikey

    Step 4: Start calling
    Refer to the sample code in the documentation, or visit the model marketplace to browse available models.

    I don't know how to code, can I still use Nexconn AI?#

    Absolutely! Nexconn AI can be used in several ways:

    ✅ No coding required#

    Third-party clients: Use tools like ChatBox or Cherry Studio; just configure the Nexconn API and start chatting
    API testing tools: Use Postman and similar tools to test the API

    💻 Better if you know how to code#

    API integration: Integrate AI into your own applications
    Custom development: Build your own AI assistant
    Batch processing: Automate processing of large volumes of tasks

    Which AI models are supported?#

    Nexconn provides a rich selection of models, supporting 70+ top global large models:
    | Model Series | Representative Models | Features |
    | --- | --- | --- |
    | GPT Series | GPT-5.2 | OpenAI's latest model |
    | Kimi Series | Kimi | Ultra-long context, Chinese optimized |
    | DeepSeek Series | DeepSeek-V3.1 | Cost-effective, strong coding ability |
    | Tongyi Qianwen | Qwen | Multimodal, good Chinese understanding |
    | Doubao Series | Doubao | Fast response, low cost |
    | Zhipu AI | GLM 4.7 | Multimodal, tool calling |
    | Minimax | Minimax M2 | Strong creative generation and conversation |
    | Open Source Models | GPT-OSS-120b/20b | Open source, controllable, high flexibility |
    CHECK
    Model library is continuously updated, more models coming soon! Visit Model Marketplace to see the complete list.

    💰 Billing & Pricing#

    How is billing calculated? What are the pricing standards?#

    Nexconn provides pay-as-you-go billing:

    Pay-as-you-go#

    📊 Billing principle
    Charged based on actual token usage
    💡 Billing formula
    Cost = Input tokens × Input unit price + Output tokens × Output unit price
    ✓ Suitable for
    All users
    Users with unstable usage
    First-time trial users
    Users seeking flexibility
    TIP
    Top-up recommendations:
    Enterprise users can contact sales for bulk discounts
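The billing formula above can be turned into a small pre-flight calculator. This is only a sketch: the unit prices in the example are made-up placeholders, not Nexconn's actual rates (check the Model Marketplace for real pricing), and prices are assumed here to be quoted per 1,000 tokens.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Apply the pay-as-you-go formula; prices are assumed per 1,000 tokens."""
    return (input_tokens / 1000) * input_price + (output_tokens / 1000) * output_price

# Hypothetical prices for illustration only.
cost = estimate_cost(input_tokens=2000, output_tokens=500,
                     input_price=0.01, output_price=0.03)
print(f"{cost:.4f}")  # 2 * 0.01 + 0.5 * 0.03 = 0.0350
```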

    How are tokens calculated? How many characters is 10,000 tokens approximately?#

    Token calculation rules#

    | Language Type | Calculation Method | Example |
    | --- | --- | --- |
    | Chinese | Usually 1-2 Chinese characters = 1 token | "The weather is nice today" ≈ 6 tokens |
    | English | Usually 1 word = 1-2 tokens | "Hello World" ≈ 2 tokens |

    10,000 tokens is approximately equivalent to#

    | Content Type | Approximate character count/content volume |
    | --- | --- |
    | Chinese text | 7,000 - 10,000 characters |
    | English text | 5,000 - 7,500 words |
    | Novel | About 15-20 pages (A4 paper) |
    | Code | About 400-600 lines of code |
    💡
    Example: A conversation containing a 100-character question and a 500-character answer consumes approximately 600-800 tokens.
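The rules of thumb above can be sketched as a quick estimator. This is only a heuristic built from the ratios in the table (about 1.5 Chinese characters per token, about 1.3 tokens per English word); the authoritative count always comes from the model's own tokenizer.

```python
import re

def rough_token_estimate(text: str) -> int:
    """Very rough token estimate from the rules of thumb above.
    Real counts come from each model's own tokenizer."""
    cjk_chars = len(re.findall(r"[\u4e00-\u9fff]", text))        # Chinese characters
    words = len(re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", text))   # English words
    return round(cjk_chars / 1.5 + words * 1.3)

print(rough_token_estimate("Hello World"))  # 3 by this heuristic
```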

    Are prices the same for different models?#

    No! Pricing varies for different models:
    Reasons for price differences:
    💰 Model cost: Models with larger parameter counts have higher computational costs
    🎯 Capability differences: More capable models typically have higher prices
    🌍 Different sources: International and domestic models have different pricing strategies
    📊 Market positioning: Premium models vs. economy models
    General patterns:
    Most expensive: Current top domestic and international models
    Medium: Mainstream models like Kimi, Tongyi Qianwen, etc.
    Economy: Cost-effective models like DeepSeek, Doubao, etc.
    Cheapest: Open source small models
    TIP
    Recommendation: Choose appropriate models based on task complexity
    Simple tasks (like classification, summarization) → Use economy models
    Complex tasks (like deep reasoning, code generation) → Use premium models
    For specific model pricing, please check the Model Marketplace.

    🔒 Security & Privacy#

    Is my data secure? Will it be used to train models?#

    ✅ Data security guarantees#

    🔒 End-to-end encryption
    All data transmission uses HTTPS encryption to prevent man-in-the-middle attacks
    🚫 No storage of sensitive data
    Nexconn does not store your conversation content or sensitive information
    📝 Complete audit logs
    All API calls have detailed logs for traceability
    ⚖️ Compliance certification
    Complies with data security and privacy protection laws and regulations
    DANGER
    Clear commitment: Data you submit through Nexconn AI services will not be used to train models!

    What should I do if my API Key is leaked?#

    āš ļø Take the following measures immediately:
    Step 1: Immediately disable the leaked key
    Log in to the Nexconn Large Model API Console, on the API Key management page, disable or delete the key.
    Step 2: Generate a new key
    Create a new API Key and update it in your application.
    Step 3: Check usage records
    Check for abnormal calls and assess losses.
    Log in to the console to view usage statistics.
    Step 4: Contact customer service
    If you find abnormal charges, submit a ticket to contact us promptly.
    Preventive measures:
    ✓ Do not hardcode API Keys in your code.
    ✓ Do not upload API Keys to public Git repositories.
    ✓ Use environment variables or configuration files to store keys.
    ✓ Rotate keys regularly.
    ✓ Use different keys for different projects.
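The environment-variable advice can look like this in Python. A minimal sketch; the variable name `NEXCONN_API_KEY` is an illustrative choice, not an official convention, and in a real deployment the variable is set outside the program (shell export, CI secrets, or a secrets manager).

```python
import os

def load_api_key() -> str:
    """Read the key from the environment instead of hardcoding it in source."""
    key = os.environ.get("NEXCONN_API_KEY", "")
    if not key:
        raise RuntimeError("NEXCONN_API_KEY is not set; export it before running")
    return key

# For illustration only: simulate an environment where the key was exported,
# e.g. `export NEXCONN_API_KEY=sk-...` in the shell.
os.environ.setdefault("NEXCONN_API_KEY", "sk-placeholder")
print(load_api_key())
```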

    ⚙️ Technical Issues#


    What should I do if API calls are slow?#

    Possible causes and solutions#

    Cause 1: Network latency
    High latency due to distant servers
    Solution:
    Choose nearby service regions
    Check local network quality
    Cause 2: Input content too long
    Slow processing due to overly long context or input text
    Solution:
    Streamline input content, remove irrelevant information
    Use summarization to compress long text
    Process large amounts of data in batches
    Cause 3: Complex model computation
    Long computation time for models with large parameter counts
    Solution:
    Use lightweight models for simple tasks
    Use streaming output for faster first-token response
    Adjust max_tokens parameter to limit output length
    Cause 4: Peak hour congestion
    Request queuing during peak usage periods
    Solution:
    Use during off-peak hours
    Use asynchronous calling methods
    TIP
    If none of the above methods work, please contact technical support for diagnosis!
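The streaming and max_tokens suggestions above correspond to two fields of an OpenAI-compatible chat request. A sketch only: the model name is illustrative, and the field names follow the OpenAI-style schema this service is compatible with.

```python
# Two latency-related knobs in an OpenAI-compatible chat request:
payload = {
    "model": "deepseek-v3",  # a lighter model for simple tasks
    "messages": [{"role": "user", "content": "Summarize this paragraph for me."}],
    "stream": True,      # stream the answer for a fast first token
    "max_tokens": 256,   # cap output length to bound total latency
}
print(payload["stream"], payload["max_tokens"])
```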

    What does OpenAI API compatibility mean?#

    Simple explanation:
    "OpenAI API compatibility" means that if you've previously used OpenAI services (such as the ChatGPT API), you can switch to Nexconn seamlessly, with almost no code changes!

    Practical advantages#

    ✅ Low migration cost: Only need to modify the API address and key
    ✅ Low learning curve: Use the same documentation and examples
    ✅ Rich ecosystem: Can use OpenAI's third-party tools

    Switching example#

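A minimal sketch of the switch, using only the Python standard library (the request is built but not sent). The base URL below is a placeholder; take the real endpoint and key from your Nexconn console. Everything except the URL and the key is the same request you would send to OpenAI.

```python
import json
import urllib.request

BASE_URL = "https://api.nexconn.example/v1"  # placeholder; was https://api.openai.com/v1
API_KEY = "sk-your-nexconn-key"              # was your OpenAI key

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("deepseek-v3", "Hello!")
print(req.full_url)  # only the host differs from an OpenAI call
```

With the official `openai` Python SDK, the same switch is just passing `base_url` and `api_key` when constructing the client.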

    🎯 Application Scenarios#

    Can it be used for commercial projects?#

    ✅ Absolutely!#

    Nexconn AI services support commercial use, and you can integrate them into various commercial products and services.

    Common commercial application scenarios#

    💼 Enterprise internal systems
    Intelligent customer service systems
    Knowledge base Q&A
    Automatic document generation
    Data analysis assistants
    🌐 User-facing products
    AI writing assistants
    Intelligent education platforms
    Content creation tools
    Chatbots
    šŸ›ļø E-commerce & Marketing
    Product description generation
    Personalized recommendations
    Marketing copywriting
    User review analysis
    šŸ„ Professional services
    Legal document assistants
    Medical consultation support
    Financial analysis tools
    Translation services
    WARNING
    Notes:
    Comply with service agreements and terms of use
    Must not be used for illegal purposes
    Certain industries (such as healthcare, finance) need to pay attention to compliance requirements
    AI-generated content should be reviewed by humans before publication

    Who owns the copyright of AI-generated content?#

    Copyright ownership#

    ✓ You own the usage rights to generated content
    You can freely use, modify, and commercialize AI-generated content
    ℹ️ The copyright status of AI-generated content is complex
    Laws vary across countries/regions, and there is no unified conclusion on who owns the copyright of AI-generated content
    ⚠️ Human review and modification are recommended
    Make appropriate modifications to AI-generated content to enhance originality
    TIP
    Best practices:
    1. Use AI-generated content as creative assistance, not for direct use
    2. Conduct human review and polishing of important content
    3. Consult professional legal advice for critical commercial scenarios
    4. Label content as AI-assisted generation (when necessary)

    Can it process real-time data?#

    ✅ Supported real-time scenarios#

    Real-time conversation: Supports streaming output, returns as it generates
    Web search: Obtain real-time web information through MCP tools
    API data: Call external APIs to obtain real-time data (through web search)
    Real-time analysis: Analyze and process real-time incoming data

    āš ļø Limited scenarios#

    Model knowledge cutoff date: Model training data has time limitations (e.g., 2024)
    Unconfigured tools: Cannot actively obtain external real-time data without configured MCP tools
    Proprietary databases: Cannot directly access your private databases (need to configure through tools)
    Solutions:
    Use web search functionality to obtain latest information
    Connect to real-time data sources through MCP protocol
    Provide real-time data in prompts as context
    💡
    Example: To have AI analyze today's stock prices, first obtain real-time stock price data through an API, then provide it to the AI in the prompt for analysis.
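The stock-price example above can be sketched as follows. The quotes dict stands in for data you would first fetch from a market-data API, and the prompt layout is just one reasonable choice, not a required format.

```python
def build_prompt_with_live_data(question: str, live_data: dict) -> str:
    """Inject externally fetched real-time data into the prompt as context."""
    lines = [f"{name}: {price}" for name, price in live_data.items()]
    return "Latest stock prices:\n" + "\n".join(lines) + f"\n\nQuestion: {question}"

# In practice this dict would come from a market-data API call you make first.
quotes = {"ACME": 123.45, "GLOBEX": 67.89}
prompt = build_prompt_with_live_data("Which of these rose above 100?", quotes)
print(prompt)
```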

    🆘 Troubleshooting#

    What should I do if API calls return errors?#

    Common errors and solutions#

    Error 401: Unauthorized
    Cause: API Key is incorrect or expired
    Solution:
    ✓ Check if the API Key is correct
    ✓ Confirm the key has not been disabled or deleted
    ✓ Regenerate the key and update it
    Error 429: Too Many Requests
    Cause: Exceeded request rate limit
    Solution:
    ✓ Reduce request frequency
    ✓ Implement a request retry mechanism (exponential backoff)
    ✓ Contact customer service to increase quota
    Error 400: Bad Request
    Cause: Request parameter format error
    Solution:
    ✓ Check if the JSON format is correct
    ✓ Confirm all required parameters are present
    ✓ Refer to the API documentation to verify parameter types
    Error 500: Internal Server Error
    Cause: Server internal error
    Solution:
    ✓ Retry later
    ✓ Check the service status page
    ✓ If it persists, contact technical support
    Error 402: Payment Required
    Cause: Insufficient account balance or resource package
    Solution:
    ✓ Top up your account or purchase a resource package
    ✓ Check billing details
    TIP
    If you encounter errors you cannot resolve, please save complete error information and request logs, and contact technical support for help!
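For the 429 case, the exponential-backoff retry suggested above might look like this. `RuntimeError` stands in for whatever rate-limit exception your HTTP client actually raises, and the injectable `sleep` parameter exists only so the demo does not really wait.

```python
import random
import time

def retry_with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` on rate-limit errors with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # substitute your client's rate-limit error type
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            sleep(delay)  # 1s, 2s, 4s, ... plus a little jitter

# Demo with a fake call that fails twice, then succeeds (no real waiting).
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda s: None)
print(result)  # ok
```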

    Why are AI responses inaccurate?#

    Possible causes and improvement methods#

    Cause 1: Prompt not clear enough
    | Type | Example |
    | --- | --- |
    | ❌ Vague prompt | "Write something" |
    | ✅ Clear prompt | "Please write a 500-word product introduction for a smartwatch targeting young people" |
    Cause 2: Lack of context information
    AI lacks necessary background knowledge
    Improvement methods:
    ✓ Provide sufficient background information in the prompt
    ✓ Upload relevant documents as reference
    ✓ Use multi-turn conversations to gradually supplement information
    Cause 3: Model capability limitations
    The model used is not suitable for the current task
    Improvement methods:
    ✓ Use more powerful models for complex tasks
    ✓ Try different models and compare results
    Cause 4: Training data cutoff date
    The model doesn't know the latest information
    Improvement methods:
    ✓ Use web search functionality
    ✓ Provide the latest data in the prompt
    ✓ Explicitly tell the AI that the latest information is needed
    💡
    Tip: If an answer is unsatisfactory, you can ask the AI to "think again" or "answer again"; sometimes you'll get better results!

    How do I contact technical support?#

    Multiple contact methods, always at your service#

    šŸ“ Ticket system
    Recommended method, fast response time
    Log in to console → Submit ticket
    šŸ’¬ Online customer service
    Real-time response during business hours
    Chat window in the lower right corner of the official website
    šŸ¢ Enterprise exclusive
    Exclusive channel for enterprise customers
    Contact your account manager
    INFO
    When submitting issues, please provide:
    Detailed problem description
    Error message screenshots or logs
    Reproduction steps
    Model and parameters used
    Account information (do not include keys)
    This will help us resolve your issue faster!

    📌 Other Questions#

    Are there usage limits?#

    The following limits exist:
    | Limit Type | Description | How to increase |
    | --- | --- | --- |
    | Request rate (QPS) | Requests per second limit | Contact customer service to increase |
    | Context length | Maximum tokens for single input | Use models that support longer context |
    | Concurrent connections | Number of simultaneous requests | Contact customer service to increase quota |
    | Output length | Maximum tokens for single generation | Configure through parameters (max_tokens) |
    | Content restrictions | Prohibited illegal content | Cannot be increased |
    TIP
    Individual users vs. Enterprise users:
    Individual users have basic quotas
    Enterprise users can apply for higher quotas
    For large-scale use, recommend contacting sales for dedicated solutions

    Is batch processing supported?#

    ✅ Yes!#

    Nexconn AI provides batch processing capabilities, suitable for large-scale data processing scenarios.
    Batch processing methods:
    Method 1: Loop API calls
    Loop through multiple requests in code
    ✓ Simple to implement ⚠️ Watch for rate limits
    Method 2: Batch API
    Process multiple data items in one request
    ✓ More efficient ✓ Lower cost
    Method 3: Enterprise custom solutions
    Contact sales for custom batch processing solutions
    ✓ Dedicated quota ✓ Priority processing
    Applicable scenarios:
    Batch document translation
    Large-scale data classification and labeling
    Batch content moderation
    Massive text summarization generation
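Method 1 above (looping while watching rate limits) can be sketched like this. The pacing value and the stand-in `process` function are illustrative; in real use, `process` would be your model call.

```python
import time

def process_batch(items, process, requests_per_second=2, sleep=time.sleep):
    """Loop over items, pacing calls to stay under the rate limit."""
    results = []
    for item in items:
        results.append(process(item))
        sleep(1.0 / requests_per_second)  # simple pacing between requests
    return results

# Demo with a stand-in for the real model call (no waiting, no network).
docs = ["doc-a", "doc-b", "doc-c"]
summaries = process_batch(docs, process=lambda d: f"summary of {d}",
                          sleep=lambda s: None)
print(summaries)
```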

    Can it be used offline?#

    āŒ Cloud services do not support offline use#

    Nexconn AI is a cloud-based inference service that requires internet access.

    Summary#

    💡

    Still have questions?#

    This FAQ covers common questions about Nexconn Large Model API services. If you have other questions:
    💬 Create a ticket - https://console.nexconn.ai/agile/formwork/ticket/create
    🎯
    The Nexconn AI team is committed to providing you with quality AI services and technical support!
    Start using Nexconn AI →
    Modified at 2026-04-30 07:38:22