Chat Completions
Generate responses from any AI model. OpenAI-compatible with streaming support.
POST
/v1/chat/completions
Create a chat completion
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | One required | Model alias (e.g., "gpt-4o-mini", "claude-sonnet"). Specify this OR task_type. |
| task_type | string | One required | Let us pick the best model. Values: simple_chat, summarize, extract, translate, code_gen, tool_orchestrate, image_gen, file_ops, web_browse, creative, deep_reasoning, agentic_complex |
| messages | array | Yes | Array of message objects with role and content |
| max_tokens | integer | No | Maximum tokens to generate (default: 4096) |
| temperature | number | No | Sampling temperature 0–2 (default: 0.7) |
| stream | boolean | No | Enable SSE streaming (default: false) |
Messages Array
| Field | Type | Description |
|---|---|---|
| role | string | "system", "user", or "assistant" |
| content | string | The message content |
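Putting the two tables together, the request body is a plain JSON object; the sketch below builds one in Python (field names and defaults are taken from the tables above — the variable names are just illustrative):

```python
import json

# Build a chat completion request body; defaults follow the parameter table.
payload = {
    "model": "gpt-4o-mini",  # or pass "task_type" instead for smart routing
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 4096,   # default
    "temperature": 0.7,   # default; valid range is 0-2
    "stream": False,      # default
}
body = json.dumps(payload)
```

Send `body` as the POST payload with a `Content-Type: application/json` header, as in the curl examples below.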
Request Example — Specify Model
```bash
curl https://theconflux.com/v1/chat/completions \
  -H "Authorization: Bearer cf_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "max_tokens": 100,
    "temperature": 0.7
  }'
```
Request Example — Smart Routing
Use task_type instead of model and we'll pick the best model for the job — optimized for cost, speed, and reliability.
```bash
curl https://theconflux.com/v1/chat/completions \
  -H "Authorization: Bearer cf_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "task_type": "code_gen",
    "messages": [
      {"role": "user", "content": "Write a Python function that sorts a list"}
    ],
    "max_tokens": 500
  }'
```
Available Task Types
| Task Type | Tier | Best For |
|---|---|---|
| simple_chat | Core | Greeting, basic Q&A, text formatting |
| summarize | Core | Text extraction, classification, summarization |
| extract | Core | Data extraction, parsing, structured output |
| translate | Core | Language translation, localization |
| code_gen | Pro | Write code, fix bugs, refactor |
| tool_orchestrate | Pro | Multi-step tool chaining, API calls |
| image_gen | Pro | Image creation, editing, vision tasks |
| file_ops | Pro | File read/write/edit, directory operations |
| web_browse | Pro | Web scraping, search, URL fetching |
| creative | Pro | Creative writing, content generation |
| deep_reasoning | Ultra | Research, analysis, complex reasoning |
| agentic_complex | Ultra | Full workflows, 20+ tool calls |
Higher tiers use more capable models. Credit cost scales with tier (Core = 1, Pro = 3, Ultra = 8 credits per 1K input tokens).
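The tier pricing above implies a simple cost estimate; a minimal sketch in Python (this assumes credits are billed per full 1K-token block, rounded up — the rounding rule is an assumption, not stated above):

```python
import math

# Credits per 1K input tokens, by routing tier (from the pricing note above).
TIER_RATES = {"core": 1, "pro": 3, "ultra": 8}

def credit_cost(input_tokens: int, tier: str) -> int:
    """Estimated credits for a request; rounds the token count up to the next 1K block."""
    return math.ceil(input_tokens / 1000) * TIER_RATES[tier]

print(credit_cost(2500, "ultra"))  # 3 blocks of 1K at 8 credits each -> 24
```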
Response
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1711660800,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 9,
    "total_tokens": 37
  }
}
```
Streaming
Set stream: true to receive Server-Sent Events. Each chunk contains a delta with partial content.
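Reassembling the full message means concatenating each chunk's delta content until the `[DONE]` sentinel; a minimal client-side sketch (the parsing here is illustrative, not an official SDK):

```python
import json

def assemble_stream(sse_lines):
    """Join delta content from SSE 'data:' lines, stopping at [DONE]."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

chunks = [
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"1"},"index":0}]}',
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":", 2"},"index":0}]}',
    'data: [DONE]',
]
print(assemble_stream(chunks))  # 1, 2
```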
Streaming Request
```bash
curl https://theconflux.com/v1/chat/completions \
  -H "Authorization: Bearer cf_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Count to 5"}],
    "stream": true
  }'
```
SSE Stream Chunks
```text
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"1"},"index":0}]}
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":", 2"},"index":0}]}
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":", 3"},"index":0}]}
data: [DONE]
```
Errors
404 — Model Not Found
```json
{
  "error": "Model not found",
  "message": "Unknown model: invalid-model-name"
}
```
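Client code can detect this case from the status code and error body; a hedged sketch (it assumes the 404 body shape shown above — the exception type is a local choice, not part of the API):

```python
import json

def raise_for_api_error(status: int, body: str) -> dict:
    """Return the parsed response body, or raise if the API reported an error."""
    data = json.loads(body)
    if status == 404 and "error" in data:
        raise LookupError(f'{data["error"]}: {data["message"]}')
    return data

body = '{"error": "Model not found", "message": "Unknown model: invalid-model-name"}'
try:
    raise_for_api_error(404, body)
except LookupError as exc:
    print(exc)  # Model not found: Unknown model: invalid-model-name
```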