A high-performance, production-ready proxy server that exposes Sourcegraph's AI API in the OpenAI API format, complete with a full-featured Admin Panel.
This project allows you to use Sourcegraph's powerful AI capabilities (including over 35 models like Claude, Gemini, and GPT series) through the standard OpenAI API format. It comes with a built-in admin panel to manage Proxy API keys, Sourcegraph API Keys, users, and monitor usage statistics.
## Table of Contents

- Features
- Admin Panel
- Installation
- Configuration
- Usage
- API Endpoints
- Docker
- Supported Models
- Development
- License
## Features

- Full OpenAI Compatibility: Works seamlessly with existing OpenAI libraries and tools.
- Built-in Admin Panel: A comprehensive web interface to manage the entire proxy.
- Dynamic Sourcegraph API Keys & Proxy API Keys: Manage multiple Sourcegraph API keys and generate Proxy API keys for your users, all from the UI.
- Usage Statistics & Metrics: Detailed dashboard with charts for requests, errors, and model usage.
- Broad Model Support: Access to over 35 of the latest AI models from Anthropic, Google, OpenAI, and more.
- Streaming Support: Full `stream: true` support for real-time responses.
- Enterprise Security: Rate limiting, IP blacklisting, and a robust user/API key authentication system.
- Production-Ready: Developed with TypeScript for stability and performance.
## Admin Panel

This project includes a powerful admin panel to manage and monitor your proxy server.

How to Access:

- Start the server.
- Open your browser and go to `http://localhost:7033/login`.
- Log in with the default credentials:
  - Username: `admin`
  - Password: `admin`

Security Note: It is highly recommended to change the default admin password immediately after your first login.
Panel Features:
- Dashboard: View real-time statistics, including total requests, error rates, and usage charts for models, Sourcegraph API Keys, and Proxy API keys.
- Sourcegraph API Key Management: Add, delete, and toggle multiple Sourcegraph API Keys to create a resilient request pool.
- Proxy API Key Management: Create, delete, and manage Proxy API keys for your users.
- User Management: Add or remove admin users who can access the panel.
- Usage Metrics: Browse through a detailed, paginated log of all API requests.
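For illustration only: the Proxy API keys shown later in the usage examples follow an `s2a-` prefix plus a UUID. The sketch below generates and validates keys of that shape. The helper names and the exact key format are assumptions, not the panel's actual implementation.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical helper: produce a key in the `s2a-<uuid>` shape seen in
// the usage examples. The real admin panel may use a different scheme.
function generateProxyKey(): string {
  return `s2a-${randomUUID()}`;
}

// Hypothetical helper: cheap shape check before hitting the database.
const KEY_PATTERN =
  /^s2a-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/;

function isWellFormedKey(key: string): boolean {
  return KEY_PATTERN.test(key);
}

console.log(isWellFormedKey(generateProxyKey())); // true
console.log(isWellFormedKey("not-a-key")); // false
```

A shape check like this lets the proxy reject malformed keys early, before any database lookup.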
## Installation

Prerequisites:

- Node.js: `v18.0.0` or higher
- npm: `v8.0.0` or higher (or `yarn`)
- Clone the Repository:

  ```bash
  git clone https://github.com/hermesthecat/sourcegraph2api.git
  cd sourcegraph2api/nodejs
  ```

- Install Dependencies:

  ```bash
  npm install
  ```

- Set Up Environment Variables: Create a new file named `.env` by copying `env.example` and edit the values within it.

  ```bash
  cp env.example .env
  ```

- Run Migrations: Before starting the server for the first time, or after pulling new changes that include database schema updates, run the migrations:

  ```bash
  npm run db:migrate
  ```

- Start the Server:
  - Development Mode (with auto-reload):

    ```bash
    npm run dev
    ```

  - Production Mode:

    ```bash
    npm run build
    npm start
    ```
## Configuration

The application's configuration is managed in two ways:

- `.env` File (Startup Settings): These are core settings required to boot the server. They are only read once when the server starts.
- Admin Panel (Dynamic Settings): All other settings are managed dynamically from the Admin Panel → Settings page. These settings are stored in the database and can be changed on-the-fly without restarting the server.
| Variable | Description | Default |
|---|---|---|
| `PORT` | The port the server will run on. | `7033` |
| `HOST` | The host address the server will bind to. | `0.0.0.0` |
| `NODE_ENV` | The operating environment (`development` or `production`). | `production` |
| `DEBUG` | Enables detailed debug logging (`true` or `false`). | `false` |
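Putting the startup settings together, a minimal `.env` might look like this (the values shown are simply the documented defaults):

```env
PORT=7033
HOST=0.0.0.0
NODE_ENV=production
DEBUG=false
```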
The following settings can be configured from the UI:
- Session Secret: A secret key for securing user sessions.
- Request Rate Limit: Max requests per minute per IP.
- Route Prefix: A global prefix for all API routes.
- Proxy URL: An HTTP/HTTPS proxy for outbound requests.
- IP Blacklist: Comma-separated IPs to block.
- Log Level: The verbosity of application logs (`info`, `debug`, etc.).
- User Agent: The User-Agent header sent with requests to Sourcegraph.
- Time Zone (TZ): The application's timezone.
- Reasoning Hide: Whether to hide the model's reasoning process.
- Sourcegraph Base URL: The base URL for the Sourcegraph API.
- Chat Endpoint: The endpoint path for Sourcegraph chat API.
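To illustrate what the Request Rate Limit setting enforces, here is a minimal sketch of per-IP, fixed-window rate limiting. This is an illustrative stand-in, not the proxy's actual middleware; the class name and window logic are assumptions.

```typescript
// Illustrative fixed-window limiter: allow at most `limit` requests
// per IP within each one-minute window.
class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs = 60_000) {}

  allow(ip: string, now = Date.now()): boolean {
    const entry = this.counts.get(ip);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New window for this IP: reset the counter.
      this.counts.set(ip, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

const limiter = new FixedWindowLimiter(3); // e.g. 3 requests/minute
console.log(limiter.allow("203.0.113.1")); // true
console.log(limiter.allow("203.0.113.1")); // true
console.log(limiter.allow("203.0.113.1")); // true
console.log(limiter.allow("203.0.113.1")); // false (limit exceeded)
```

Because the setting is stored in the database, the proxy can change the limit on the fly without a restart.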
## Usage

Once the server is running, first create an API key in the Admin Panel. Then, use that key to make requests with standard OpenAI libraries.
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:7033/v1", // If you set a ROUTE_PREFIX, include it here
  apiKey: "s2a-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", // Your API key generated from the admin panel
});

async function main() {
  const stream = await client.chat.completions.create({
    model: "claude-3-opus", // Any supported model
    messages: [
      {
        role: "user",
        content: "Can you write 5 interview questions about TypeScript?",
      },
    ],
    stream: true,
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

main();
```

```bash
# Replace with your API key from the admin panel
API_KEY="s2a-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

curl http://localhost:7033/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```

## API Endpoints

- `POST /v1/chat/completions`: The main endpoint for chat completion requests.
- `GET /v1/models`: Returns a list of all supported models.
- `GET /health`: A simple health check.
- `GET /login`: The login page for the admin panel.
- `GET /admin/dashboard`: The main dashboard for the admin panel.
## Docker

- Build the Docker image:

  ```bash
  docker build -t sourcegraph2api-nodejs .
  ```

- Run the container: Make sure your `.env` file is created and configured.

  ```bash
  docker run -p 7033:7033 --env-file .env sourcegraph2api-nodejs
  ```
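If you prefer Docker Compose, an equivalent setup might look like the sketch below. The port and `.env` file come from the steps above; the service name and the in-container database path are assumptions.

```yaml
services:
  sourcegraph2api:
    build: .
    ports:
      - "7033:7033"
    env_file:
      - .env
    volumes:
      # Persist the SQLite database outside the container
      # (the /app path is an assumption about the image layout).
      - ./database.sqlite:/app/database.sqlite
    restart: unless-stopped
```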
## Supported Models

This proxy provides a wide variety of models supported by Sourcegraph in the OpenAI format.
| Brand | Popular Models |
|---|---|
| Claude (Anthropic) | claude-3-opus, claude-3.5-sonnet-latest, claude-3-haiku |
| Gemini (Google) | gemini-1.5-pro, gemini-2.0-flash |
| GPT (OpenAI) | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| Other | mixtral-8x22b-instruct, deepseek-v3 |
The full list of supported model IDs:

claude-sonnet-4-latest, claude-sonnet-4-thinking-latest, claude-3-7-sonnet-latest, claude-3-7-sonnet-extended-thinking, claude-3-5-sonnet-latest, claude-3-opus, claude-3-5-haiku-latest, claude-3-haiku, claude-3.5-sonnet, claude-3-5-sonnet-20240620, claude-3-sonnet, claude-2.1, claude-2.0, deepseek-v3, gemini-1.5-pro, gemini-1.5-pro-002, gemini-2.0-flash-exp, gemini-2.0-flash, gemini-2.5-flash-preview-04-17, gemini-2.0-flash-lite, gemini-2.0-pro-exp-02-05, gemini-2.5-pro-preview-03-25, gemini-1.5-flash, gemini-1.5-flash-002, mixtral-8x7b-instruct, mixtral-8x22b-instruct, gpt-4o, gpt-4.1, gpt-4o-mini, gpt-4.1-mini, gpt-4.1-nano, o3-mini-medium, o3, o4-mini, o1, gpt-4-turbo, gpt-3.5-turbo
## Development

Project Structure:

```
nodejs/
├── config/            # Sequelize CLI configuration (config.json)
├── migrations/        # Database migration files
├── src/
│   ├── config/        # Dynamic configuration manager and model list
│   ├── controllers/   # Logic for handling incoming HTTP requests
│   ├── middleware/    # Middleware for authentication, logging, etc.
│   ├── models/        # Sequelize database models and relationships
│   ├── routes/        # API and web routes (endpoints)
│   ├── services/      # Core business logic (DB, Sourcegraph client, etc.)
│   ├── types/         # TypeScript type definitions
│   ├── utils/         # Helper functions and logger
│   ├── app.ts         # Main setup for the Express application
│   └── index.ts       # The application's entry point
├── views/             # EJS templates for the Admin Panel
├── public/            # Static files (CSS, JS) for the Admin Panel
├── database.sqlite    # SQLite database file (managed by migrations)
├── package.json
├── .sequelizerc       # Sequelize CLI configuration
└── .env.example
```

## License

This project is licensed under the MIT License. See the LICENSE file for details.