Chat Completions

Generate responses from any AI model. OpenAI-compatible with streaming support.

POST
/v1/chat/completions
Create a chat completion

Request Body

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | One required | Model alias (e.g., "gpt-4o-mini", "claude-sonnet"). Specify this OR task_type. |
| task_type | string | One required | Let us pick the best model. Values: simple_chat, summarize, extract, translate, code_gen, tool_orchestrate, image_gen, file_ops, web_browse, creative, deep_reasoning, agentic_complex |
| messages | array | Yes | Array of message objects with role and content |
| max_tokens | integer | No | Maximum tokens to generate (default: 4096) |
| temperature | number | No | Sampling temperature 0–2 (default: 0.7) |
| stream | boolean | No | Enable SSE streaming (default: false) |

Messages Array

| Field | Type | Description |
|---|---|---|
| role | string | "system", "user", or "assistant" |
| content | string | The message content |
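As a sketch, a request body with the fields above can be assembled in Python before POSTing it to the endpoint shown in the examples below (`cf_live_xxx` is a placeholder key, not a real credential):

```python
import json

# Build a non-streaming chat completion request body.
# Field names follow the Request Body table above.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "max_tokens": 100,   # optional, default 4096
    "temperature": 0.7,  # optional, 0-2, default 0.7
}

# Serialize and send as the POST body with Content-Type: application/json.
body = json.dumps(payload)
```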

Request Example — Specify Model

```bash
curl https://theconflux.com/v1/chat/completions \
  -H "Authorization: Bearer cf_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "max_tokens": 100,
    "temperature": 0.7
  }'
```

Request Example — Smart Routing

Use task_type instead of model and we'll pick the best model for the job, optimized for cost, speed, and reliability.

```bash
curl https://theconflux.com/v1/chat/completions \
  -H "Authorization: Bearer cf_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "task_type": "code_gen",
    "messages": [
      {"role": "user", "content": "Write a Python function that sorts a list"}
    ],
    "max_tokens": 500
  }'
```

Available Task Types

| Task Type | Tier | Best For |
|---|---|---|
| simple_chat | Core | Greeting, basic Q&A, text formatting |
| summarize | Core | Text extraction, classification, summarization |
| extract | Core | Data extraction, parsing, structured output |
| translate | Core | Language translation, localization |
| code_gen | Pro | Write code, fix bugs, refactor |
| tool_orchestrate | Pro | Multi-step tool chaining, API calls |
| image_gen | Pro | Image creation, editing, vision tasks |
| file_ops | Pro | File read/write/edit, directory operations |
| web_browse | Pro | Web scraping, search, URL fetching |
| creative | Pro | Creative writing, content generation |
| deep_reasoning | Ultra | Research, analysis, complex reasoning |
| agentic_complex | Ultra | Full workflows, 20+ tool calls |

Higher tiers use more capable models. Credit cost scales with tier (Core = 1, Pro = 3, Ultra = 8 credits per 1K input tokens).
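The per-tier pricing above works out as in this hedged sketch (the multipliers come from the note; fractional-1K billing is an assumption, since the document does not specify rounding):

```python
# Credits per 1K input tokens by tier (from the note above).
TIER_RATES = {"core": 1, "pro": 3, "ultra": 8}

def estimate_credits(input_tokens: int, tier: str) -> float:
    """Estimate credit cost for a request's input tokens.

    Assumes proportional billing below 1K tokens; actual
    rounding behavior is not specified in this document.
    """
    return TIER_RATES[tier] * input_tokens / 1000

# e.g. a 2,000-token prompt routed to a Pro-tier task type:
cost = estimate_credits(2000, "pro")  # 6.0 credits
```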

Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1711660800,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 9,
    "total_tokens": 37
  }
}
```
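Given the response shape above, pulling out the reply text and token usage is a couple of dictionary lookups; a minimal sketch, where `resp` stands in for a parsed JSON response:

```python
# resp mirrors the example response body above.
resp = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "model": "gpt-4o-mini",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 28, "completion_tokens": 9, "total_tokens": 37},
}

# The assistant's reply lives under the first choice's message.
answer = resp["choices"][0]["message"]["content"]
# Token usage is reported per request for billing/monitoring.
used = resp["usage"]["total_tokens"]
```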

Streaming

Set stream: true to receive Server-Sent Events. Each chunk contains a delta with partial content.

Streaming Request

```bash
curl https://theconflux.com/v1/chat/completions \
  -H "Authorization: Bearer cf_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Count to 5"}],
    "stream": true
  }'
```

SSE Stream Chunks

```text
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"1"},"index":0}]}

data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":", 2"},"index":0}]}

data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":", 3"},"index":0}]}

data: [DONE]
```
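Chunks in the format above can be reassembled client-side by concatenating each delta's content until the `[DONE]` sentinel. A minimal parsing sketch (it consumes raw `data:` lines; real code would read them off the HTTP response stream):

```python
import json

def collect_stream(lines):
    """Accumulate delta content from SSE data lines until [DONE]."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # server signals end of stream
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# The example chunks above reassemble into the full reply:
stream = [
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"1"},"index":0}]}',
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":", 2"},"index":0}]}',
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":", 3"},"index":0}]}',
    "data: [DONE]",
]
text = collect_stream(stream)  # "1, 2, 3"
```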

Errors

404 — Model Not Found

```json
{
  "error": "Model not found",
  "message": "Unknown model: invalid-model-name"
}
```