Getting Started

Welcome to the LumyxAI API documentation. Our API is compatible with both OpenAI and Anthropic SDKs, so you can use any existing SDK or HTTP client by simply changing the base URL and API key.

Quick Start

  1. Create an account at lumyx-ai.site/register
  2. Generate an API key from your dashboard
  3. Make your first API call using the examples below

Base URL: https://lumyx-ai.site/api/v1

Authentication

All API requests require a valid API key passed via the Authorization header using the Bearer scheme.

Header
Authorization: Bearer lx-xxxxxxxxxxxxxxxxxxxx

Security note: Never expose your API key in client-side code. Always make API calls from your backend server.

Models

LumyxAI offers a range of models optimized for different use cases. Pass the model name in the model field of your request.

Pricing is listed per 1M tokens for input and output, alongside each model's context window and capabilities.

Explore the full catalog with filters on the models page.

Chat Completions

Create a chat completion by sending a POST request to:

Endpoint
POST https://lumyx-ai.site/api/v1/chat/completions

Request Body

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (e.g. "lumyx-plus_v1") |
| messages | array | Yes | Array of message objects with role and content |
| stream | boolean | No | Enable streaming responses (default: false) |
| temperature | number | No | Sampling temperature 0–2 (default: 0.7) |
| max_tokens | integer | No | Maximum number of tokens to generate |
| tools | array | No | List of tool/function definitions for the model to call |
| tool_choice | string or object | No | Controls tool use: "auto", "none", "required", or a specific function |
| response_format | object | No | Force JSON output: {"type":"json_object"} |
| stop | string or string[] | No | Stop sequences to halt generation |
| seed | integer | No | Seed for deterministic sampling (best effort) |
| top_p | number | No | Nucleus sampling (default: 1) |
| frequency_penalty | number | No | Penalize repeated tokens (-2.0 to 2.0) |
| reasoning | object | No | Enable reasoning/thinking tokens (see Reasoning section) |
| user | string | No | Unique user identifier for abuse monitoring |

Example Request

bash
curl https://lumyx-ai.site/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "lumyx-plus_v1",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is machine learning?"}
    ],
    "temperature": 0.7,
    "max_tokens": 512
  }'

Example Response

json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1710000000,
  "model": "lumyx-plus_v1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Machine learning is a subset of artificial intelligence..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}

Anthropic Messages API

Drop-in compatible with the Anthropic SDK

LumyxAI fully supports the Anthropic Messages API format, including streaming with Anthropic SSE events. Switch from Anthropic to LumyxAI by changing just two lines — the base URL and API key.

Base URL

https://lumyx-ai.site/api/v1

Authentication

x-api-key: YOUR_API_KEY

Endpoint
POST https://lumyx-ai.site/api/v1/messages

Request Body

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (e.g. "lumyx-plus_v1") |
| messages | array | Yes | Array of message objects with role and content (supports content blocks) |
| max_tokens | integer | Yes | Maximum number of tokens to generate |
| system | string | No | System prompt (top-level field, Anthropic style) |
| temperature | number | No | Sampling temperature 0–1 (default: 0.7) |
| top_p | number | No | Nucleus sampling (default: 1) |
| stream | boolean | No | Enable streaming with Anthropic SSE events |
| stop_sequences | string[] | No | Custom stop sequences |
| tools | array | No | Tool definitions with input_schema (Anthropic format) |
| tool_choice | object | No | Tool selection: {"type":"auto"}, {"type":"any"}, or {"type":"tool","name":"..."} |
| thinking | object | No | Enable extended thinking: {"type":"enabled","budget_tokens":1024} |
| top_k | integer | No | Top-K sampling parameter |

Example — Migrate from Anthropic

2-line migration: Replace api_key and base_url — everything else stays the same.

python
import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",             # ← your LumyxAI key
    base_url="https://lumyx-ai.site/api/v1",  # ← LumyxAI endpoint
)

message = client.messages.create(
    model="lumyx-plus_v1",
    max_tokens=1024,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": "What is machine learning?"}
    ],
)

print(message.content[0].text)

Example Response

json
{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "model": "lumyx-plus_v1",
  "content": [
    {
      "type": "text",
      "text": "Machine learning is a subset of artificial intelligence..."
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 25,
    "output_tokens": 150
  }
}

Supported Features

Messages API (create)
Streaming (native SSE events)
System prompt (string & array)
Content blocks (text, image, tool_use, tool_result)
Tool use / function calling
Vision / image inputs (base64 & URL)
Extended thinking (thinking blocks)
Stop sequences & top_k
Token usage tracking

Tip: The same API key works for both OpenAI and Anthropic endpoints. Use whichever SDK you prefer — no separate configuration needed.

Tool Use / Function Calling

Let models call your functions

Define tools that the model can call during a conversation. Works with both OpenAI and Anthropic formats, including streaming.

bash
curl https://lumyx-ai.site/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lumyx-plus_v1",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string", "description": "City name"}
          },
          "required": ["city"]
        }
      }
    }],
    "max_tokens": 512
  }'

Tool Call Response

json
{
  "choices": [{
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\"city\": \"Paris\"}"
        }
      }]
    },
    "finish_reason": "tool_calls"
  }]
}

Multi-turn: After receiving a tool call, send the result back with role: "tool" and tool_call_id to continue the conversation.
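The round trip can be sketched in Python. The helper below is illustrative (not part of any SDK): it takes the assistant message containing tool_calls plus the results your code computed, and builds the messages to append before the next request.

```python
import json

def build_tool_result_messages(assistant_message, results):
    """Append the assistant's tool-call turn plus one role:"tool" message
    per call, so the next request can continue the conversation.
    (Illustrative helper, not part of any SDK.)"""
    followup = [assistant_message]
    for call in assistant_message["tool_calls"]:
        name = call["function"]["name"]
        followup.append({
            "role": "tool",
            "tool_call_id": call["id"],            # must echo the call id
            "content": json.dumps(results[name]),  # tool output as a string
        })
    return followup

# Using the sample tool call from the response above:
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"},
    }],
}
extra = build_tool_result_messages(assistant_msg, {"get_weather": {"temp_c": 18}})
```

Append `extra` to your existing messages array and POST again; the model then answers using the tool result.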

Vision / Images

Send images to vision-capable models

Send images as base64 data or URLs in your messages. Supported on vision-capable models like lumyx-vision.

bash
curl https://lumyx-ai.site/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lumyx-vision",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {
          "url": "data:image/png;base64,iVBORw0KGgo..."
        }}
      ]
    }],
    "max_tokens": 256
  }'

Supported formats: PNG, JPEG, GIF, WebP. Both base64 data URIs and external URLs are supported.
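As a sketch, a raw image file can be turned into the base64 data URI shown above using only Python's standard library (the helper name is illustrative):

```python
import base64

def image_message(question, image_bytes, mime="image/png"):
    """Build a multimodal user message: a text part plus a base64
    data-URI image part. 'mime' must match the actual image format.
    (Illustrative helper, not part of any SDK.)"""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{encoded}"}},
        ],
    }

# The 8-byte PNG magic number encodes to the "iVBORw0KGgo" prefix
# seen in the example request above:
msg = image_message("What is in this image?", b"\x89PNG\r\n\x1a\n")
```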

Reasoning / Thinking

Extended thinking and chain-of-thought

Some models support reasoning tokens — the model "thinks" step-by-step before responding, improving accuracy on complex tasks like math, logic, and coding. LumyxAI supports reasoning via both OpenAI and Anthropic formats, including streaming.

Effort Levels

Control how much the model thinks before responding. Higher effort uses more tokens but produces more accurate results.

| Effort | When to Use | Token Usage |
|---|---|---|
| low | Simple tasks, quick answers, low-stakes queries | Minimal |
| medium | Balanced reasoning for general-purpose tasks | Moderate |
| high | Complex math, logic puzzles, multi-step coding tasks | High |

OpenAI Format (Chat Completions)

Use the reasoning object or the reasoning_effort shorthand on the /v1/chat/completions endpoint.

bash
# Using reasoning object with effort level
curl https://lumyx-ai.site/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lumyx-plus_v1",
    "messages": [{"role": "user", "content": "Solve: x^2 + 5x + 6 = 0"}],
    "reasoning": {"effort": "high"},
    "max_tokens": 1024
  }'

# Or with max_tokens budget for reasoning
curl https://lumyx-ai.site/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lumyx-plus_v1",
    "messages": [{"role": "user", "content": "What is 15 * 23?"}],
    "reasoning": {"effort": "low", "max_tokens": 512},
    "max_tokens": 256
  }'

Anthropic Format (Messages)

Use the thinking object on the /v1/messages endpoint. Set type to "enabled" and optionally specify a budget_tokens limit.

bash
curl https://lumyx-ai.site/v1/messages \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lumyx-plus_v1",
    "max_tokens": 4096,
    "messages": [{"role": "user", "content": "Solve: x^2 + 5x + 6 = 0"}],
    "thinking": {
      "type": "enabled",
      "budget_tokens": 2048
    }
  }'

Response Format

When reasoning is enabled, the response includes thinking content before the final answer.

json
{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "model": "lumyx-plus_v1",
  "content": [
    {
      "type": "thinking",
      "thinking": "Let me solve x^2 + 5x + 6 = 0...\nI need to factor this quadratic..."
    },
    {
      "type": "text",
      "text": "The solutions are x = -2 and x = -3."
    }
  ],
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 25,
    "output_tokens": 150
  }
}

All Reasoning Parameters

| Endpoint | Parameter | Type | Description |
|---|---|---|---|
| Chat | reasoning.effort | string | "low", "medium", or "high" |
| Chat | reasoning.max_tokens | integer | Max tokens the model can use for reasoning |
| Chat | reasoning.enabled | boolean | Explicitly enable or disable reasoning |
| Chat | reasoning.exclude | boolean | Exclude reasoning content from the response (still used internally) |
| Chat | reasoning_effort | string | Shorthand for reasoning.effort |
| Messages | thinking.type | string | "enabled" or "disabled" |
| Messages | thinking.budget_tokens | integer | Max tokens for thinking (converted to reasoning.max_tokens internally) |

Billing: Reasoning tokens are billed as output tokens. Higher effort levels consume more tokens and cost more credits. The thinking_used field in the Usage API tracks which requests used reasoning.

Compatibility: Not all models support reasoning. Models tagged with "reasoning" in the Models endpoint support this feature. Unsupported models will ignore reasoning parameters and respond normally.
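Client-side, the thinking and text blocks can be separated with a few lines of Python (an illustrative sketch based on the response shape shown in Response Format above):

```python
def split_reasoning(content_blocks):
    """Separate thinking blocks from the final answer in an
    Anthropic-style content array. (Illustrative sketch; the keys match
    the documented response shape.)"""
    thinking = [b["thinking"] for b in content_blocks if b["type"] == "thinking"]
    answer = "".join(b["text"] for b in content_blocks if b["type"] == "text")
    return thinking, answer

blocks = [
    {"type": "thinking", "thinking": "Factor the quadratic: (x+2)(x+3) = 0..."},
    {"type": "text", "text": "The solutions are x = -2 and x = -3."},
]
thoughts, answer = split_reasoning(blocks)
```

Showing `thoughts` to end users is optional; only `answer` is the model's final reply.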

Streaming

Set "stream": true to receive responses as Server-Sent Events (SSE). Each chunk contains a delta of the response.

bash
curl https://lumyx-ai.site/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "lumyx-plus_v1",
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "stream": true
  }'

Stream Chunk Format

json
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" world"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
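A minimal Python sketch of consuming these chunks (for illustration only; the official SDKs handle SSE parsing for you):

```python
import json

def collect_stream_text(sse_lines):
    """Accumulate delta.content from OpenAI-style SSE chunk lines until
    the [DONE] sentinel. (Minimal parser for illustration.)"""
    out = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue                      # ignore blanks and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":           # sentinel marking end of stream
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            out.append(delta["content"])
    return "".join(out)

# The sample chunks from above:
sample = [
    'data: {"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":" world"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]
print(collect_stream_text(sample))  # Hello world
```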

Credits & Billing

Pay-as-you-go pricing with credits

LumyxAI uses a credit-based billing system. Purchase credits upfront and they are consumed as you make API requests. Credits never expire. Each model has its own per-token pricing.

How It Works

  1. Buy Credits: Purchase via Stripe from your dashboard ($2 minimum)
  2. Make Requests: Use the API normally via the OpenAI or Anthropic SDKs
  3. Credits Deducted: Tokens used are billed at the model's per-token rate

Credit Packages

| Package | Credits | Price | Per Credit |
|---|---|---|---|
| Starter | 500 | $5 | $0.010 |
| Popular (Best Value) | 2,000 | $15 | $0.0075 |
| Pro | 5,000 | $30 | $0.006 |
| Power | 15,000 | $75 | $0.005 |

You can also enter a custom amount ($2 – $1,000) at checkout. Rate: 100 credits per $1.

Credit Deduction

Credits are deducted after each request based on the tokens consumed and the model's pricing:

formula
cost = (input_tokens / 1,000,000) × input_price
     + (output_tokens / 1,000,000) × output_price

Note: If a model has no pricing configured (both input and output prices are 0), it is free to use and no credits are deducted. Requests to priced models will return a 402 error if your balance is 0.
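The formula applied in Python, with hypothetical per-1M rates (real rates are listed on the models page):

```python
def request_cost(input_tokens, output_tokens, input_price, output_price):
    """cost = (input_tokens / 1M) * input_price
            + (output_tokens / 1M) * output_price"""
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# Hypothetical rates of $3/1M input and $15/1M output, using the token
# counts from the example chat completion response (25 in, 150 out):
cost = request_cost(25, 150, input_price=3.0, output_price=15.0)  # about 0.002325
```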

Check Balance via API

View your credit balance and transaction history from the Credits dashboard.

Tip: Taxes are calculated automatically based on your location during Stripe checkout. All prices shown are before tax.

Usage

Track your API usage and costs

Query your API usage programmatically. Get aggregated totals, daily breakdowns, per-model stats, and recent request logs — all authenticated with your API key.

Get Usage

GET /v1/usage

bash
curl https://lumyx-ai.site/v1/usage \
  -H "Authorization: Bearer YOUR_API_KEY"

Query Parameters

| Parameter | Type | Description |
|---|---|---|
| start_date | string | Filter from this date (ISO 8601, e.g. 2026-03-01) |
| end_date | string | Filter until this date (ISO 8601) |
| model | string | Filter by model ID |
| limit | integer | Number of recent logs to return (default: 50, max: 100) |
| page | integer | Page number for pagination (default: 1) |

Response Format

json
{
  "object": "usage",
  "account": {
    "credits_remaining": 142.50,
    "vip_tier": 0
  },
  "free_model_daily_limit": {
    "limit": 50,
    "used": 12,
    "remaining": 38,
    "resets_at": "2026-03-20T00:00:00.000Z",
    "timezone": "America/New_York"
  },
  "summary": {
    "total_requests": 1284,
    "total_prompt_tokens": 512000,
    "total_completion_tokens": 256000,
    "total_tokens": 768000,
    "total_cost": 3.840000,
    "avg_latency_ms": 1250
  },
  "daily": [
    {
      "date": "2026-03-19",
      "requests": 45,
      "tokens": 32000,
      "cost": 0.160000
    }
  ],
  "by_model": [
    {
      "model_id": "gpt-5.2",
      "requests": 800,
      "tokens": 500000,
      "cost": 2.500000
    }
  ],
  "recent_activity": [
    {
      "id": "clx...",
      "model_id": "gpt-5.2",
      "prompt_tokens": 150,
      "completion_tokens": 300,
      "total_tokens": 450,
      "cost": 0.002250,
      "latency_ms": 1100,
      "status_code": 200,
      "thinking_used": false,
      "created_at": "2026-03-19T14:30:00.000Z"
    }
  ],
  "pagination": {
    "page": 1,
    "limit": 50,
    "total": 1284,
    "total_pages": 26
  }
}

Tip: The free_model_daily_limit object shows your remaining free model requests and when they reset (midnight in your timezone). The daily array always returns the last 30 days regardless of filters. Use start_date and end_date to filter the summary, model breakdown, and recent activity.
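A small Python sketch for assembling a filtered usage query from the parameters above (the helper name is illustrative; the API key still goes in the Authorization header):

```python
from urllib.parse import urlencode

def usage_url(base="https://lumyx-ai.site/v1/usage", **filters):
    """Build a /v1/usage URL from the documented query parameters,
    dropping any filters left as None. (Illustrative helper.)"""
    query = urlencode({k: v for k, v in filters.items() if v is not None})
    return f"{base}?{query}" if query else base

url = usage_url(start_date="2026-03-01", end_date="2026-03-19", limit=100)
```

Fetch `url` with your usual HTTP client and the `Authorization: Bearer` header shown earlier.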

Rate Limits

Rate limits are applied per API key and vary by plan tier.

| Plan | Requests/min | Requests/day |
|---|---|---|
| Free | 10 | 1,000 |
| Pro | 60 | 50,000 |
| Enterprise | Unlimited | Unlimited |

Rate limit headers are included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset.
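A client can use these headers to back off before hitting a 429. The sketch below assumes X-RateLimit-Reset is a Unix timestamp in seconds; that is an assumption to verify against a real response before relying on it.

```python
def throttle_delay(headers, now):
    """Seconds to wait before the next request, based on the rate-limit
    headers above. Assumes X-RateLimit-Reset is a Unix timestamp in
    seconds (an assumption; some APIs send a delta instead)."""
    if int(headers.get("X-RateLimit-Remaining", "1")) > 0:
        return 0.0                      # budget left, no need to wait
    return max(0.0, float(headers.get("X-RateLimit-Reset", now)) - now)

# 0 requests left, window resets 60 seconds from "now":
delay = throttle_delay(
    {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1700000060"},
    now=1700000000.0,
)  # 60.0
```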

Errors

The API uses standard HTTP status codes. Error responses include a JSON body with details.

| Status | Code | Description |
|---|---|---|
| 400 | invalid_request | Missing or invalid request parameters |
| 401 | unauthorized | Invalid or missing API key |
| 402 | insufficient_credits | Insufficient credits; top up your account |
| 404 | model_not_found | Model not found or not available |
| 429 | rate_limit_exceeded | Too many requests |
| 500 | internal_error | Internal server error |
| 503 | model_unavailable | Model temporarily unavailable |

Error Response Format

json
{
  "error": {
    "message": "Invalid API key provided.",
    "type": "unauthorized",
    "code": 401
  }
}
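A minimal Python sketch of surfacing these errors to calling code (the exception class is illustrative, not part of any SDK):

```python
import json

class LumyxAPIError(Exception):
    """Wraps the error body shown above. (Illustrative class.)"""
    def __init__(self, body):
        err = body["error"]
        super().__init__(f'{err["type"]} ({err["code"]}): {err["message"]}')
        self.type = err["type"]
        self.code = err["code"]
        # 429/500/503 are transient per the table above, so worth retrying:
        self.retryable = err["code"] in (429, 500, 503)

def raise_for_error(response_text):
    """Raise LumyxAPIError if the response body carries an error object."""
    body = json.loads(response_text)
    if "error" in body:
        raise LumyxAPIError(body)
    return body
```

Callers can then catch LumyxAPIError and retry only when `retryable` is true.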

Integrations

LumyxAI is fully compatible with the OpenAI API format, so any tool, CLI, or SDK that supports a custom base URL will work out of the box. Below are step-by-step guides for the most popular integrations.

All integrations use the same base URL and your LumyxAI API key:

text
Base URL:  https://lumyx-ai.site/v1
API Key:   lx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

OpenAI Python SDK

The official openai Python package works with LumyxAI by simply changing the base URL.

bash
pip install openai

python
from openai import OpenAI

client = OpenAI(
    base_url="https://lumyx-ai.site/v1",
    api_key="lx-your-api-key"
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

OpenAI Node.js SDK

bash
npm install openai

javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://lumyx-ai.site/v1",
  apiKey: "lx-your-api-key",
});

const response = await client.chat.completions.create({
  model: "gpt-5.2",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

for await (const chunk of response) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

Anthropic Python SDK

LumyxAI also supports the native Anthropic Messages format via /v1/messages.

bash
pip install anthropic

python
from anthropic import Anthropic

client = Anthropic(
    base_url="https://lumyx-ai.site/api/v1",
    api_key="lx-your-api-key"
)

message = client.messages.create(
    model="gpt-5.2",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(message.content[0].text)

curl

bash
curl https://lumyx-ai.site/v1/chat/completions \
  -H "Authorization: Bearer lx-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Aider

Aider is an AI pair-programming tool that works in your terminal.

bash
export OPENAI_API_BASE=https://lumyx-ai.site/v1
export OPENAI_API_KEY=lx-your-api-key

aider --model openai/gpt-5.2

Continue.dev (VS Code / JetBrains)

Add this to your ~/.continue/config.json:

json
{
  "models": [
    {
      "title": "LumyxAI",
      "provider": "openai",
      "model": "gpt-5.2",
      "apiBase": "https://lumyx-ai.site/v1",
      "apiKey": "lx-your-api-key"
    }
  ]
}

Cursor

In Cursor, go to Settings → Models → OpenAI API Key, then set:

API Key:   lx-your-api-key
Base URL:  https://lumyx-ai.site/v1

Then select any LumyxAI model from the model picker.

LiteLLM

Use LiteLLM to route LumyxAI as a provider alongside others.

python
import litellm

response = litellm.completion(
    model="openai/gpt-5.2",
    messages=[{"role": "user", "content": "Hello!"}],
    api_base="https://lumyx-ai.site/v1",
    api_key="lx-your-api-key"
)

Claude Code (CLI)

To use LumyxAI as a provider with Claude Code, set these environment variables before launching. You can customize which models are used for each tier.

bash
export ANTHROPIC_AUTH_TOKEN=lx-your-api-key
export ANTHROPIC_API_KEY=""
export ANTHROPIC_BASE_URL=https://lumyx-ai.site/api/

# Optional: customize which models Claude Code uses
export ANTHROPIC_DEFAULT_OPUS_MODEL=lumyxai/hunter-alpha
export ANTHROPIC_DEFAULT_SONNET_MODEL=glm-5-turbo
export ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-5.4-openai
export CLAUDE_CODE_SUBAGENT_MODEL=gpt-5.4-openai

claude

Note: Use ANTHROPIC_AUTH_TOKEN for the API key and set ANTHROPIC_API_KEY to an empty string. The base URL should be https://lumyx-ai.site/api/ (without v1). Replace the model names with any models available on LumyxAI.

Environment Variables (Universal)

Many tools automatically pick up these standard environment variables. Add them to your .bashrc, .zshrc, or .env file:

bash
# Works with: aider, continue.dev, litellm, langchain, etc.
export OPENAI_API_BASE=https://lumyx-ai.site/v1
export OPENAI_API_KEY=lx-your-api-key

Available Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/models | List all available models |
| GET | /v1/models/{id} | Get a specific model |
| POST | /v1/chat/completions | Chat completions (OpenAI format) |
| POST | /v1/completions | Text completions (legacy) |
| POST | /v1/messages | Messages (Anthropic format) |
| POST | /v1/embeddings | Generate embeddings |
| GET | /v1/usage | Get usage stats and history |
| GET | /v1/health | Health check |