FAQ#
Answers to the most common questions from users
Getting Started#
How do I get started with Nexconn AI services?#
Getting started is simple, in four steps:

Step 1: Register an account
Click the Register button, fill in basic information, and verify your email or phone number.

Step 2: Real-name verification (required)
- Individual users: provide ID card information
- Enterprise users: provide a business license and other documents
- Verification must be completed before purchasing services

Step 3: Purchase services and obtain an API Key
Log in to the Nexconn console and select Large Model Services > Service Purchase. After purchasing and topping up, enter the large model console.

Step 4: Make your first call
Refer to the sample code in the documentation, or visit the model marketplace to browse available models.
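Once you have an API Key, a first call can be sketched like this in Python, assuming Nexconn exposes an OpenAI-style chat completions endpoint. The base URL and model name below are placeholders, not real Nexconn values; take the actual ones from the console and model marketplace.

```python
# Build a chat completion request in the OpenAI-compatible style.
# BASE_URL and the model name are placeholders for illustration only.
BASE_URL = "https://api.example-nexconn.com/v1"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Return the URL, headers, and JSON body for a chat completion call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body

url, headers, body = build_chat_request("your-api-key", "deepseek-v3.1", "Hello!")
print(url)  # https://api.example-nexconn.com/v1/chat/completions
```

Any HTTP client (requests, urllib, curl) can then POST the body to that URL with those headers.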
I don't know how to code, can I still use Nexconn AI?#
Absolutely! Nexconn AI can be used in several ways:

No coding required#
Third-party clients: use tools such as ChatBox or Cherry Studio; configure the Nexconn API and start chatting.
API testing tools: use Postman or similar tools to call the API directly.

More options if you know how to code#
API integration: Integrate AI into your own applications
Custom development: Build your own AI assistant
Batch processing: Automate processing of large volumes of tasks
Which AI models are supported?#
Nexconn provides a rich selection of models, supporting 70+ top global large models:

| Model Series | Representative Models | Features |
|---|---|---|
| GPT Series | GPT-5.2 | OpenAI's latest model |
| Kimi Series | Kimi | Ultra-long context, Chinese optimized |
| DeepSeek Series | DeepSeek-V3.1 | Cost-effective, strong coding ability |
| Tongyi Qianwen | Qwen | Multimodal, good Chinese understanding |
| Doubao Series | Doubao | Fast response, low cost |
| Zhipu AI | GLM 4.7 | Multimodal, tool calling |
| MiniMax | MiniMax M2 | Strong creative generation and conversation |
| Open Source Models | GPT-OSS-120b/20b | Open source, controllable, highly flexible |
The model library is continuously updated, with more models coming soon! Visit the Model Marketplace to see the complete list.
Billing & Pricing#
How is billing calculated? What are the pricing standards?#
Nexconn provides pay-as-you-go billing:

Pay-as-you-go#
- Charged based on actual token usage
- Cost = input tokens × input unit price + output tokens × output unit price

Best suited for:
- Users with fluctuating usage
- Users seeking flexibility

Enterprise users can contact sales for bulk discounts.
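The pay-as-you-go formula above can be turned into a one-line calculator. The per-1K-token prices in the example are made-up illustration values, not Nexconn's actual pricing:

```python
# Pay-as-you-go: cost = input tokens × input price + output tokens × output price.
# Prices below are hypothetical, quoted per 1,000 tokens.
def call_cost(input_tokens: int, output_tokens: int,
              input_price_per_1k: float, output_price_per_1k: float) -> float:
    return (input_tokens / 1000 * input_price_per_1k
            + output_tokens / 1000 * output_price_per_1k)

# 1,200 input tokens and 800 output tokens at $0.002 / $0.006 per 1K tokens:
print(round(call_cost(1200, 800, 0.002, 0.006), 6))  # 0.0072
```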
How are tokens calculated? How many characters is 10,000 tokens approximately?#
Token calculation rules#
| Language Type | Calculation Method | Example |
|---|---|---|
| Chinese | Usually 1-2 Chinese characters = 1 token | "The weather is nice today" (in Chinese) ≈ 6 tokens |
| English | Usually 1 word = 1-2 tokens | "Hello World" ≈ 2 tokens |
10,000 tokens is approximately equivalent to#
| Content Type | Approximate Volume |
|---|---|
| Chinese text | 7,000-10,000 characters |
| English text | 5,000-7,500 words |
| Novel | About 15-20 pages (A4) |
| Code | About 400-600 lines |
Example: A conversation containing a 100-character question and a 500-character answer consumes approximately 600-800 tokens.
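The rules of thumb above can be approximated in code for a quick sanity check before sending a request. This is a rough heuristic, not a real tokenizer; for billing, trust the usage field returned by the API:

```python
# Back-of-the-envelope token estimate using the rules of thumb from the
# tables above. Real tokenizers vary by model, so treat this as a rough guess.
def estimate_tokens(text: str) -> int:
    # Count CJK characters: ~1.5 Chinese characters per token.
    cjk_chars = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    # Count ASCII words: ~1.3 tokens per English word.
    ascii_words = sum(
        1 for w in text.split() if any(c.isascii() and c.isalpha() for c in w)
    )
    return round(cjk_chars / 1.5 + ascii_words * 1.3)

print(estimate_tokens("Hello World"))  # 3 (rule of thumb: 1-2 tokens per word)
```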
Are prices the same for different models?#
No! Pricing varies across models.

Reasons for price differences:
- Model cost: models with larger parameter counts have higher computational costs
- Capability differences: more capable models typically command higher prices
- Different sources: international and domestic models follow different pricing strategies
- Market positioning: premium models vs. economy models

Rough price tiers:
- Most expensive: current top domestic and international models
- Medium: mainstream models such as Kimi and Tongyi Qianwen
- Economy: cost-effective models such as DeepSeek and Doubao
- Cheapest: small open source models
Recommendation: choose a model that matches task complexity.
- Simple tasks (e.g. classification, summarization) → economy models
- Complex tasks (e.g. deep reasoning, code generation) → premium models
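The recommendation above amounts to a tiny routing rule. Both model names here are placeholders for whatever you pick in the marketplace, not a recommendation of specific SKUs:

```python
# Route simple tasks to an economy model and everything else to a premium one.
# Model names are illustrative placeholders.
ECONOMY_MODEL = "deepseek-v3.1"
PREMIUM_MODEL = "gpt-5.2"

def pick_model(task_type: str) -> str:
    simple_tasks = {"classification", "summarization", "translation"}
    return ECONOMY_MODEL if task_type in simple_tasks else PREMIUM_MODEL

print(pick_model("summarization"))    # deepseek-v3.1
print(pick_model("code generation"))  # gpt-5.2
```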
Security & Privacy#
Is my data secure? Will it be used to train models?#
Data security guarantees#
- End-to-end encryption: all data transmission uses HTTPS encryption to prevent man-in-the-middle attacks
- No storage of sensitive data: Nexconn does not store your conversation content or sensitive information
- Auditable calls: all API calls have detailed logs for traceability
- Compliance: complies with data security and privacy protection laws and regulations
Clear commitment: Data you submit through Nexconn AI services will not be used to train models!
What should I do if my API Key is leaked?#
Take the following measures immediately:

Step 1: Immediately disable the leaked key.
Step 2: Generate a new key. Create a new API Key and update it in your application.
Step 3: Check usage records. Look for abnormal calls and assess losses; log in to the console to view usage statistics.
Step 4: Contact customer service.

To prevent leaks in the first place:
- Do not hardcode API Keys in your code
- Do not upload API Keys to public Git repositories
- Use environment variables or configuration files to store keys
- Rotate keys regularly
- Use different keys for different projects
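The environment-variable advice can be sketched as follows; the variable name NEXCONN_API_KEY is an arbitrary choice, not an official one:

```python
import os

def load_api_key(var_name: str = "NEXCONN_API_KEY") -> str:
    """Read the API key from the environment and fail fast if it is missing,
    so the key never has to appear in source control."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"Set the {var_name} environment variable instead of "
            "hardcoding the key in your code."
        )
    return key
```

Typical usage: `export NEXCONN_API_KEY=...` in the shell (or a `.env` file kept out of Git), then `key = load_api_key()` in the application.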
Technical Issues#
What should I do if API calls are slow?#
Possible causes and solutions#
Cause 1: Network latency
High latency when servers are geographically distant.
- Choose a nearby service region
- Check local network quality

Cause 2: Input content too long
Overly long context or input text slows processing.
- Streamline input content and remove irrelevant information
- Use summarization to compress long text
- Process large amounts of data in batches

Cause 3: Complex model computation
Models with large parameter counts take longer to compute.
- Use lightweight models for simple tasks
- Use streaming output for a faster first-token response
- Adjust the max_tokens parameter to limit output length

Cause 4: Peak-hour congestion
Requests queue during peak usage periods.
- Call during off-peak hours
- Use asynchronous calling methods
If none of the above methods work, please contact technical support for diagnosis!
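Several of the mitigations above map directly onto request fields in an OpenAI-style API. The field names assume that style, and the model name is a placeholder:

```python
# Combine three latency mitigations in one request body:
# a lightweight model, a max_tokens cap, and streaming output.
def low_latency_request(prompt: str) -> dict:
    return {
        "model": "doubao-lite",   # placeholder lightweight model
        "max_tokens": 256,        # cap output length
        "stream": True,           # stream tokens for a fast first response
        "messages": [{"role": "user", "content": prompt}],
    }

print(low_latency_request("Summarize this text")["stream"])  # True
```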
What does OpenAI API compatibility mean?#
"OpenAI API compatibility" means if you've previously used OpenAI services (like ChatGPT API), you can seamlessly switch to Nexconn with almost no code changes!Practical advantages#
ā
Low migration cost: Only need to modify API address and key
ā
Low learning curve: Use the same documentation and examples
ā
Rich ecosystem: Can use OpenAI's third-party tools
Switching example#
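A sketch of the switch, under the assumption that Nexconn mirrors the OpenAI request format; the Nexconn URL below is a placeholder:

```python
# Only two values change when migrating from OpenAI to an OpenAI-compatible
# provider: the base URL and the API key. The endpoint path and request
# body stay identical. Keys and the Nexconn URL are placeholders.
CONFIGS = {
    "openai":  {"base_url": "https://api.openai.com/v1",
                "api_key": "sk-your-openai-key"},
    "nexconn": {"base_url": "https://api.example-nexconn.com/v1",
                "api_key": "your-nexconn-key"},
}

def chat_endpoint(provider: str) -> str:
    """Same endpoint path on both services; only the host differs."""
    return CONFIGS[provider]["base_url"] + "/chat/completions"

print(chat_endpoint("openai"))
print(chat_endpoint("nexconn"))
```

With the official openai Python SDK, the same switch usually reduces to passing `base_url` and `api_key` when constructing the client.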
Application Scenarios#
Can it be used for commercial projects?#
Absolutely!#
Nexconn AI services support commercial use, and you can integrate them into various commercial products and services.

Common commercial application scenarios#
Enterprise internal systems:
- Intelligent customer service systems
- Automatic document generation

User-facing products:
- Intelligent education platforms