
ChatGPT Codex Router

An OpenAI-compatible API router that forwards requests to the ChatGPT backend with OAuth authentication.

Features

  • OpenAI-compatible API: Supports /v1/chat/completions and /v1/responses endpoints
  • OAuth Authentication: Uses OpenAI's official OAuth flow (same as Codex CLI)
  • Auto Login: Automatically checks authentication on startup and initiates OAuth login if needed
  • GPT-5.x Models: Supports all GPT-5.1 and GPT-5.2 model variants
  • Streaming Support: Full support for streaming responses
  • Automatic Token Refresh: Automatically refreshes expired OAuth tokens
  • Detailed Logging: Configurable logging with request/response tracking
  • Docker Support: Ready-to-deploy Docker images

Supported Models

  • gpt-5.1 (none/low/medium/high)
  • gpt-5.2 (none/low/medium/high/xhigh)
  • gpt-5.1-codex (low/medium/high)
  • gpt-5.1-codex-max (low/medium/high/xhigh)
  • gpt-5.1-codex-mini (medium/high)
  • gpt-5.2-codex (low/medium/high/xhigh)
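The model/effort combinations above can be encoded as a simple lookup table for client-side validation. This is an illustrative sketch derived from the list, not the router's actual validation code:

```python
# Supported model variants and their allowed reasoning-effort levels,
# transcribed from the list above.
SUPPORTED_MODELS = {
    "gpt-5.1": {"none", "low", "medium", "high"},
    "gpt-5.2": {"none", "low", "medium", "high", "xhigh"},
    "gpt-5.1-codex": {"low", "medium", "high"},
    "gpt-5.1-codex-max": {"low", "medium", "high", "xhigh"},
    "gpt-5.1-codex-mini": {"medium", "high"},
    "gpt-5.2-codex": {"low", "medium", "high", "xhigh"},
}

def is_supported(model: str, effort: str) -> bool:
    """Return True if the model exists and accepts the given effort level."""
    return effort in SUPPORTED_MODELS.get(model, set())
```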

Installation

Using NPM

npm install -g chatgpt-codex-router

Using Docker

docker build -t chatgpt-codex-router -f docker/Dockerfile .

Quick Start

Development

# Install dependencies
npm install

# Start development server
npm run dev

Production

# Build the project
npm run build

# Start production server
npm start

Using Docker

# Start with docker-compose
docker-compose -f docker/docker-compose.yml up -d

# View logs
docker-compose -f docker/docker-compose.yml logs -f

Usage

The server automatically checks authentication status on startup:

npm start

If no valid token is found, the server will:

  1. Automatically initiate OAuth flow
  2. Start local OAuth callback server (port 1455)
  3. Attempt to open browser automatically
  4. Display OAuth URL in logs for manual access

Startup Logs:

[WARN] No authentication token found. Initiating OAuth login...
[INFO] Starting OAuth flow with state: ...
[INFO] Local OAuth server started on port 1455
[INFO] OAuth login initiated.
[INFO] Please complete the OAuth flow in your browser.
[INFO] OAuth URL: https://auth.openai.com/oauth/authorize?...
[INFO] Browser should have opened automatically.
[INFO] Server started on http://0.0.0.0:3000

For detailed information about the auto-login feature, see AUTO_LOGIN.md.

1. Manual OAuth Login

If you need to manually trigger authentication after the server is running:

curl -X POST http://localhost:3000/auth/login

This will return an authorization URL. Open it in your browser to complete the OAuth flow. After successful authentication, the token will be saved to ~/.chatgpt-codex-router/tokens.json.
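A client can check whether the saved token is still usable before sending requests. The field names below ("access_token", "expires_at" as a Unix timestamp) are assumptions about the tokens.json layout, not confirmed from the router's source:

```python
import json
import time
from pathlib import Path

# Assumed location of the saved token file (see above).
TOKEN_PATH = Path.home() / ".chatgpt-codex-router" / "tokens.json"

def load_tokens(path: Path = TOKEN_PATH) -> dict:
    """Read the token file, returning an empty dict if it does not exist."""
    return json.loads(path.read_text()) if path.exists() else {}

def token_is_fresh(tokens: dict, skew_seconds: int = 60) -> bool:
    """True if an access token is present and not within skew_seconds of expiry.

    Assumes "expires_at" is a Unix timestamp in seconds (hypothetical field).
    """
    expires_at = tokens.get("expires_at", 0)
    return bool(tokens.get("access_token")) and time.time() + skew_seconds < expires_at
```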

2. Chat Completions

Use the standard OpenAI Chat Completions API:

curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "stream": false
  }'

3. Streaming Responses

Enable streaming with stream: true:

curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "messages": [
      {"role": "user", "content": "Write a Python function"}
    ],
    "stream": true
  }'
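Streamed responses arrive as Server-Sent Events. A minimal consumer only needs to read the `data:` lines, stop at `[DONE]`, and collect the `choices[].delta.content` fragments — the standard OpenAI chunk shape. A sketch, not the router's own code:

```python
import json

def extract_deltas(sse_text: str):
    """Yield content fragments from raw SSE lines of a streaming response."""
    for line in sse_text.splitlines():
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alive lines, etc.
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                yield content
```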

4. Responses API

Use the OpenAI Responses API:

curl http://localhost:3000/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "input": [
      {"type": "message", "role": "user", "content": [{"type": "input_text", "text": "Hello"}]}
    ],
    "stream": false
  }'
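If you already have Chat Completions-style messages, they can be converted into the Responses "input" format shown above: user text becomes "input_text" items, while assistant text becomes "output_text", matching the role-dependent content types the backend expects. A sketch that omits tool and multimodal content:

```python
def to_responses_input(messages: list[dict]) -> list[dict]:
    """Convert chat messages to Responses API input items (text-only sketch)."""
    items = []
    for msg in messages:
        # Assistant turns carry output_text; everything else is input_text.
        content_type = "output_text" if msg["role"] == "assistant" else "input_text"
        items.append({
            "type": "message",
            "role": msg["role"],
            "content": [{"type": content_type, "text": msg["content"]}],
        })
    return items
```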

Configuration

Create a configuration file at ~/.chatgpt-codex-router/config.json:

{
  "server": {
    "port": 3000,
    "host": "0.0.0.0"
  },
  "oauth": {
    "clientId": "app_EMoamEEZ73f0CkXaXp7hrann",
    "redirectUri": "http://localhost:1455/auth/callback",
    "localServerPort": 1455
  },
  "backend": {
    "url": "https://chatgpt.com/backend-api",
    "timeout": 120000
  },
  "logging": {
    "level": "info",
    "dir": "./logs",
    "enableRequestLogging": false
  },
  "codex": {
    "mode": true,
    "defaultReasoningEffort": "medium",
    "defaultTextVerbosity": "medium"
  }
}

Environment Variables

  • PORT: Server port (default: 3000)
  • LOG_LEVEL: Logging level (error/warn/info/debug)
  • ENABLE_REQUEST_LOGGING: Enable detailed request logging (true/false)
  • CONFIG_PATH: Custom configuration file path
  • DATA_DIR: Custom data directory for tokens
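The usual convention, assumed here, is that these environment variables override the corresponding config.json values when set. A sketch of that precedence for the server and logging sections:

```python
import os

def effective_config(file_config: dict) -> dict:
    """Merge env overrides over file config: env vars win when set (assumed precedence)."""
    server = dict(file_config.get("server", {}))
    logging = dict(file_config.get("logging", {}))
    if "PORT" in os.environ:
        server["port"] = int(os.environ["PORT"])
    if "LOG_LEVEL" in os.environ:
        logging["level"] = os.environ["LOG_LEVEL"]
    if "ENABLE_REQUEST_LOGGING" in os.environ:
        logging["enableRequestLogging"] = (
            os.environ["ENABLE_REQUEST_LOGGING"].lower() == "true"
        )
    return {"server": server, "logging": logging}
```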

API Endpoints

GET /health

Health check endpoint.

Response:

{
  "status": "healthy",
  "timestamp": 1736153600000
}

POST /auth/login

Initiate OAuth authentication flow.

Response:

{
  "status": "pending",
  "url": "https://auth.openai.com/oauth/authorize?...",
  "instructions": "Please complete the OAuth flow in your browser"
}

POST /v1/chat/completions

OpenAI-compatible chat completions endpoint.

Request:

{
  "model": "gpt-5.2-codex",
  "messages": [
    {"role": "user", "content": "Hello"}
  ],
  "stream": false,
  "temperature": 0.7
}

Response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1736153600,
  "model": "gpt-5.2-codex",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
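Extracting the reply and token usage from a non-streaming response like the one above is a two-line affair:

```python
def summarize(response: dict) -> tuple[str, int]:
    """Return the first choice's text and the total token count from a
    chat.completion response body."""
    text = response["choices"][0]["message"]["content"]
    total = response.get("usage", {}).get("total_tokens", 0)
    return text, total
```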

POST /v1/responses

OpenAI Responses API endpoint.

Request:

{
  "model": "gpt-5.2-codex",
  "input": [
    {"type": "message", "role": "user", "content": [{"type": "input_text", "text": "Hello"}]}
  ],
  "stream": false
}

Project Structure

chatgpt-codex-router/
├── src/
│   ├── auth/              # OAuth authentication
│   ├── request/           # Request transformation
│   ├── response/          # Response handling
│   ├── prompts/           # Codex system prompts
│   ├── config.ts          # Configuration management
│   ├── logger.ts          # Logging system
│   ├── constants.ts       # Constants
│   ├── types.ts           # TypeScript types
│   ├── router.ts          # API routes
│   ├── server.ts          # Server setup
│   └── index.ts           # Entry point
├── public/
│   └── oauth-success.html # OAuth success page
├── docker/
│   ├── Dockerfile         # Docker image
│   ├── docker-compose.yml # Docker Compose
│   └── .dockerignore      # Docker ignore
├── data/                  # Token storage
├── logs/                  # Log files
├── package.json
├── tsconfig.json
└── README.md

Development

Build

npm run build

Type Check

npm run typecheck

Lint

npm run lint

License

MIT

Disclaimer

This project is for personal development use with your own ChatGPT Plus/Pro subscription. For production or multi-user applications, use the OpenAI Platform API.

Users are responsible for ensuring their usage complies with the applicable OpenAI terms of service and usage policies.
