feat: implement the core functionality of the ChatGPT Codex router

- Add the full project scaffolding, including configuration, type definitions, and constants
- Implement the OAuth authentication flow and token management
- Develop request transformation and response handling logic
- Add SSE stream handling and Chat Completions API conversion
- Implement model mapping and the prompt instruction system
- Include Docker deployment configuration and quick-start documentation
- Add auto-login functionality and test scripts
# ChatGPT Codex Router

An OpenAI-compatible API router that forwards requests to the ChatGPT backend with OAuth authentication.

## Features

- **OpenAI Compatible API**: Supports `/v1/chat/completions` and `/v1/responses` endpoints
- **OAuth Authentication**: Uses OpenAI's official OAuth flow (same as Codex CLI)
- **Auto Login**: Automatically checks authentication on startup and initiates OAuth login if needed
- **GPT-5.x Models**: Supports all GPT-5.1 and GPT-5.2 model variants
- **Streaming Support**: Full support for streaming responses
- **Automatic Token Refresh**: Automatically refreshes expired OAuth tokens
- **Detailed Logging**: Configurable logging with request/response tracking
- **Docker Support**: Ready-to-deploy Docker images

## Supported Models

Each model is listed with the reasoning effort levels it supports:

- `gpt-5.1` (none/low/medium/high)
- `gpt-5.2` (none/low/medium/high/xhigh)
- `gpt-5.1-codex` (low/medium/high)
- `gpt-5.1-codex-max` (low/medium/high/xhigh)
- `gpt-5.1-codex-mini` (medium/high)
- `gpt-5.2-codex` (low/medium/high/xhigh)

## Installation

### Using NPM

```bash
npm install -g chatgpt-codex-router
```

### Using Docker

```bash
docker build -t chatgpt-codex-router -f docker/Dockerfile .
```

## Quick Start

### Development

```bash
# Install dependencies
npm install

# Start development server
npm run dev
```

### Production

```bash
# Build the project
npm run build

# Start production server
npm start
```

### Using Docker

```bash
# Start with docker-compose
docker-compose -f docker/docker-compose.yml up -d

# View logs
docker-compose -f docker/docker-compose.yml logs -f
```

## Usage

### Auto Login (Recommended)

The server automatically checks authentication status on startup:

```bash
npm start
```

If no valid token is found, the server will:

1. Automatically initiate the OAuth flow
2. Start a local OAuth callback server (port 1455)
3. Attempt to open the browser automatically
4. Display the OAuth URL in the logs for manual access

**Startup Logs:**

```
[WARN] No authentication token found. Initiating OAuth login...
[INFO] Starting OAuth flow with state: ...
[INFO] Local OAuth server started on port 1455
[INFO] OAuth login initiated.
[INFO] Please complete the OAuth flow in your browser.
[INFO] OAuth URL: https://auth.openai.com/oauth/authorize?...
[INFO] Browser should have opened automatically.
[INFO] Server started on http://0.0.0.0:3000
```

For detailed information about the auto-login feature, see [AUTO_LOGIN.md](AUTO_LOGIN.md).

### 1. Manual OAuth Login

If you need to trigger authentication manually after the server is running:

```bash
curl -X POST http://localhost:3000/auth/login
```

This returns an authorization URL. Open it in your browser to complete the OAuth flow. After successful authentication, the token is saved to `~/.chatgpt-codex-router/tokens.json`.

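The same call can be made from Node; a minimal sketch (it assumes Node 18+ for the built-in `fetch` and an ESM context for top-level `await`). The `url` field matches the `POST /auth/login` response documented under API Endpoints below:

```typescript
// Trigger the OAuth flow programmatically (equivalent to the curl call above).
const res = await fetch("http://localhost:3000/auth/login", { method: "POST" });
const body = await res.json();

// "url" is the authorization URL returned by the router; open it in a browser.
console.log("Open this URL to complete login:", body.url);
```
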
### 2. Chat Completions

Use the standard OpenAI Chat Completions API:

```bash
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "stream": false
  }'
```

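Because the endpoint is OpenAI-compatible, the official `openai` npm SDK can be pointed at the router by overriding `baseURL`. A minimal sketch; the `apiKey` value is a placeholder, on the assumption that the router itself does not check API keys (authentication to the ChatGPT backend happens via OAuth):

```typescript
import OpenAI from "openai";

// Point the official SDK at the local router instead of api.openai.com.
const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: "not-needed", // placeholder; the router handles OAuth itself
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "gpt-5.2-codex",
    messages: [{ role: "user", content: "Hello, how are you?" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```
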
### 3. Streaming Responses

Enable streaming with `stream: true`:

```bash
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "messages": [
      {"role": "user", "content": "Write a Python function"}
    ],
    "stream": true
  }'
```

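The same SDK setup handles streaming; a sketch under the same assumptions as above:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: "not-needed", // placeholder; the router handles OAuth itself
});

async function main() {
  // With stream: true the SDK returns an async iterable of SSE chunks.
  const stream = await client.chat.completions.create({
    model: "gpt-5.2-codex",
    messages: [{ role: "user", content: "Write a Python function" }],
    stream: true,
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();
```
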
### 4. Responses API

Use the OpenAI Responses API:

```bash
curl http://localhost:3000/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2-codex",
    "input": [
      {"type": "message", "role": "user", "content": [{"type": "input_text", "text": "Hello"}]}
    ],
    "stream": false
  }'
```

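Recent versions of the `openai` npm SDK also expose the Responses API, so the same client can call this endpoint; a sketch under the same assumptions as above:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: "not-needed", // placeholder; the router handles OAuth itself
});

async function main() {
  const response = await client.responses.create({
    model: "gpt-5.2-codex",
    input: [
      {
        type: "message",
        role: "user",
        content: [{ type: "input_text", text: "Hello" }],
      },
    ],
  });

  // output_text is a convenience field that concatenates the text output items.
  console.log(response.output_text);
}

main();
```
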
## Configuration

Create a configuration file at `~/.chatgpt-codex-router/config.json`:

```json
{
  "server": {
    "port": 3000,
    "host": "0.0.0.0"
  },
  "oauth": {
    "clientId": "app_EMoamEEZ73f0CkXaXp7hrann",
    "redirectUri": "http://localhost:1455/auth/callback",
    "localServerPort": 1455
  },
  "backend": {
    "url": "https://chatgpt.com/backend-api",
    "timeout": 120000
  },
  "logging": {
    "level": "info",
    "dir": "./logs",
    "enableRequestLogging": false
  },
  "codex": {
    "mode": true,
    "defaultReasoningEffort": "medium",
    "defaultTextVerbosity": "medium"
  }
}
```

### Environment Variables

- `PORT`: Server port (default: 3000)
- `LOG_LEVEL`: Logging level (error/warn/info/debug)
- `ENABLE_REQUEST_LOGGING`: Enable detailed request logging (true/false)
- `CONFIG_PATH`: Custom configuration file path
- `DATA_DIR`: Custom data directory for tokens

## API Endpoints

### `GET /health`

Health check endpoint.

**Response:**

```json
{
  "status": "healthy",
  "timestamp": 1736153600000
}
```

### `POST /auth/login`

Initiates the OAuth authentication flow.

**Response:**

```json
{
  "status": "pending",
  "url": "https://auth.openai.com/oauth/authorize?...",
  "instructions": "Please complete the OAuth flow in your browser"
}
```

### `POST /v1/chat/completions`

OpenAI-compatible chat completions endpoint.

**Request:**

```json
{
  "model": "gpt-5.2-codex",
  "messages": [
    {"role": "user", "content": "Hello"}
  ],
  "stream": false,
  "temperature": 0.7
}
```

**Response:**

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1736153600,
  "model": "gpt-5.2-codex",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```

### `POST /v1/responses`

OpenAI Responses API endpoint.

**Request:**

```json
{
  "model": "gpt-5.2-codex",
  "input": [
    {"type": "message", "role": "user", "content": [{"type": "input_text", "text": "Hello"}]}
  ],
  "stream": false
}
```

## Project Structure

```
chatgpt-codex-router/
├── src/
│   ├── auth/                  # OAuth authentication
│   ├── request/               # Request transformation
│   ├── response/              # Response handling
│   ├── prompts/               # Codex system prompts
│   ├── config.ts              # Configuration management
│   ├── logger.ts              # Logging system
│   ├── constants.ts           # Constants
│   ├── types.ts               # TypeScript types
│   ├── router.ts              # API routes
│   ├── server.ts              # Server setup
│   └── index.ts               # Entry point
├── public/
│   └── oauth-success.html     # OAuth success page
├── docker/
│   ├── Dockerfile             # Docker image
│   ├── docker-compose.yml     # Docker Compose
│   └── .dockerignore          # Docker ignore
├── data/                      # Token storage
├── logs/                      # Log files
├── package.json
├── tsconfig.json
└── README.md
```

## Development

### Build

```bash
npm run build
```

### Type Check

```bash
npm run typecheck
```

### Lint

```bash
npm run lint
```

## License

MIT

## Disclaimer

This project is for **personal development use** with your own ChatGPT Plus/Pro subscription. For production or multi-user applications, use the OpenAI Platform API.

Users are responsible for ensuring their usage complies with:

- [OpenAI Terms of Use](https://openai.com/policies/terms-of-use/)
- [OpenAI Usage Policies](https://openai.com/policies/usage-policies/)