A high-performance CDN node that serves and caches content based on content hash (CID) and client Ethereum addresses.
- High-Performance Web Framework: Built with Fiber for maximum performance and minimal memory footprint
- Memory-Efficient Streaming: Intelligent content streaming without loading large files into memory
- Dual Storage Strategy: Small files cached in-memory, large files stored as files (configurable 10MB threshold)
- Client-Based Egress Management: Individual quota tracking per Ethereum address with NATS-based updates
- Distributed Quota System: Real-time quota management via NATS events (set, top-up, reset)
- NATS-Based Content Distribution: Content info shared via NATS instead of HTTP endpoints
- Client-Based Access Tracking: URLs include the client ETH address as a subdomain (`<client_eth_address>.<domain>.com/<CID>`)
- Advanced Caching: Efficient caching using BadgerDB for metadata and hybrid storage for content
- Real-time Streaming: Simultaneous content delivery and caching using io.TeeReader (see the sketch after this list)
- Quota Enforcement: Real-time quota checking with HTTP 429 responses for exceeded limits
- Non-blocking Request Processing: Immediate response serving with background database updates
- Event Logging: Logs all requests via NATS for distributed tracking
- Cache Miss Handling: Automatically fetches content from origin servers while streaming to client
- Health Monitoring: Built-in health check endpoints
- Graceful Shutdown: Processes remaining requests during shutdown
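The streaming-while-caching feature can be illustrated with a minimal `io.TeeReader` sketch. The function name and the in-memory buffer are illustrative assumptions; the actual node chooses between memory and file storage based on the 10MB threshold.

```go
// Minimal sketch of the TeeReader pattern: stream the origin response to the
// client while simultaneously capturing the same bytes for the cache.
package cdn

import (
	"bytes"
	"io"
)

// streamAndCapture copies the origin body to the client writer while teeing
// every byte into an in-memory buffer that can later be written to the cache.
// For large files the real node would tee into a file instead of a buffer.
func streamAndCapture(dst io.Writer, origin io.Reader) ([]byte, error) {
	var cacheBuf bytes.Buffer
	tee := io.TeeReader(origin, &cacheBuf) // each read from origin is also written to cacheBuf
	if _, err := io.Copy(dst, tee); err != nil {
		return nil, err
	}
	return cacheBuf.Bytes(), nil
}
```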
```
Client Request: 0x1234567890abcdef1234567890abcdef12345678.example.com/QmHash123
        ↓
1. Quota Check (immediate)
        ↓
2. Serve Response (immediate)
        ↓
3. Background Processing:
   ├─ Update BadgerDB (egress tracking)
   ├─ Update Cache Metadata
   └─ Publish to NATS (distributed logging)
```
- Immediate Processing:
  - Extract client ID and content hash
  - Check cache for content
  - Verify egress quota
  - Serve response immediately
- Background Processing:
  - Queue request for async processing
  - Update local egress tracking
  - Update cache access statistics
  - Publish request event to NATS
- Distributed Sync:
  - Other nodes receive NATS events
  - Update their local egress tracking
  - Maintain consistent quota enforcement
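A minimal sketch of this non-blocking flow, assuming a buffered channel drained by a single background worker; all names below are hypothetical, not the node's actual types:

```go
// The handler only performs the quota check and serves the response;
// everything else is queued for a background worker.
package cdn

import "time"

type requestRecord struct {
	ClientID     string
	ContentHash  string
	ResponseSize int64
	CacheHit     bool
	Timestamp    time.Time
}

// Buffered so the request handler never blocks on background work.
var backgroundQueue = make(chan requestRecord, 1024)

// enqueue drops the record if the queue is full rather than delaying the response.
func enqueue(rec requestRecord) {
	select {
	case backgroundQueue <- rec:
	default: // queue full: prefer fast responses over perfect accounting
	}
}

// worker runs in its own goroutine and performs the deferred steps:
// local egress update in BadgerDB, cache metadata update, NATS publish.
func worker(updateEgress func(requestRecord) error, publish func(requestRecord) error) {
	for rec := range backgroundQueue {
		_ = updateEgress(rec) // update local egress tracking
		_ = publish(rec)      // publish RequestEvent to NATS for other nodes
	}
}
```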
Edit `config.yaml`:

```yaml
port: 8080
log_level: "info"

database:
  path: "./data/badger"

nats:
  url: "nats://localhost:4222"
  subject: "cdn.requests"

origin:
  base_url: "https://origin.example.com"
  timeout: 30
```
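As a rough sketch, `config/config.go` could map this file onto Go structs with `gopkg.in/yaml.v3`; the field names below mirror the YAML keys but are an assumption, not the actual implementation:

```go
// Sketch of loading config.yaml into typed configuration.
package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

type Config struct {
	Port     int    `yaml:"port"`
	LogLevel string `yaml:"log_level"`
	Database struct {
		Path string `yaml:"path"`
	} `yaml:"database"`
	NATS struct {
		URL     string `yaml:"url"`
		Subject string `yaml:"subject"`
	} `yaml:"nats"`
	Origin struct {
		BaseURL string `yaml:"base_url"`
		Timeout int    `yaml:"timeout"` // value as in config.yaml; unit assumed to be seconds
	} `yaml:"origin"`
}

// Load reads and parses the YAML configuration file.
func Load(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```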
- Install dependencies: `go mod tidy`
- Start NATS server: `docker run -p 4222:4222 -p 8222:8222 nats:2.10.7-alpine -js -m 8222`
- Run the CDN node: `go run main.go`

Alternatively, run everything with Docker Compose: `docker-compose up --build`

```
GET /<CID>
Host: <client_eth_address>.<domain>.com
```
Example:
```bash
curl -H "Host: 0x1234567890abcdef1234567890abcdef12345678.example.com" \
  http://localhost:8080/QmYourContentHashHere
```

Response Codes:

- 200: Content served successfully
- 400: Invalid subdomain format or missing content hash
- 429: Egress quota exceeded
- 502: Failed to fetch from origin server
Response Headers:

- X-Cache: `HIT` or `MISS`
- Content-Type: Content MIME type
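The same request can be made from Go by overriding `req.Host`, which is how the node reads the client address; the hostname and hash below are the same placeholders as in the curl example:

```go
// Fetch content from the local CDN node and inspect the X-Cache header.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://localhost:8080/QmYourContentHashHere", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header sent to the server.
	req.Host = "0x1234567890abcdef1234567890abcdef12345678.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, resp.Header.Get("X-Cache"), len(body))
}
```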
```
POST /content
Content-Type: application/json
```
Store content directly in the CDN node:
```bash
curl -X POST http://localhost:8080/content \
  -H "Content-Type: application/json" \
  -d '{
    "content_info": {
      "hash": "QmYourContentHash",
      "location": "https://origin.example.com",
      "content_type": "text/plain"
    },
    "content": "SGVsbG8gV29ybGQ="
  }'
```

Request Body:
- content_info: Metadata about the content
  - hash: Content identifier
  - location: Origin server URL
  - content_type: MIME type
- content: Raw content bytes
Response:
```json
{
  "success": true,
  "message": "Content stored successfully",
  "hash": "QmYourContentHash",
  "size": 1024
}
```
```
GET /health
```
The CDN expects requests in the format:
- Host: `<client_eth_address>.<domain>.com`
- Path: `/<CID>`

Where:

- client_eth_address: 42-character Ethereum address (0x + 40 hex chars)
- domain: Your domain name
- CID: Content identifier/hash
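A sketch of how a handler might split and validate this format; the function name and error messages are illustrative, only the 0x + 40 hex chars rule comes from the format above:

```go
// Extract the client address and CID from the Host header and request path.
package cdn

import (
	"errors"
	"regexp"
	"strings"
)

// ethAddrRe enforces the 42-character address form: 0x followed by 40 hex chars.
var ethAddrRe = regexp.MustCompile(`^0x[0-9a-fA-F]{40}$`)

// parseRequest splits "<client_eth_address>.<domain>.com" and "/<CID>".
func parseRequest(host, path string) (clientID, cid string, err error) {
	parts := strings.SplitN(host, ".", 2)
	if len(parts) < 2 || !ethAddrRe.MatchString(parts[0]) {
		return "", "", errors.New("invalid subdomain: expected 42-character Ethereum address")
	}
	cid = strings.TrimPrefix(path, "/")
	if cid == "" {
		return "", "", errors.New("missing content hash in path")
	}
	return parts[0], cid, nil
}
```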
Stored with key: `content/<client_id>/<hash>`
```json
{
  "hash": "QmContentHash",
  "location": "https://origin.example.com/QmContentHash",
  "size": 1024,
  "content_type": "image/jpeg",
  "cached_at": "2025-01-01T00:00:00Z",
  "last_accessed": "2025-01-01T00:00:00Z",
  "access_count": 42
}
```
Stored with key: `cache/<hash>`
- Raw content bytes stored directly in BadgerDB
Stored with key: `egress/<client_id>`
```json
{
  "client_id": "0x1234567890abcdef1234567890abcdef12345678",
  "total_bytes": 1048576,
  "request_count": 100,
  "last_request": "2025-01-01T00:00:00Z"
}
```
Published to `cdn.requests` and used for distributed egress tracking:
```json
{
  "request_id": "0x1234-1735689600000000000",
  "client_id": "0x1234567890abcdef1234567890abcdef12345678",
  "domain": "example.com",
  "content_hash": "QmContentHash",
  "cache_hit": true,
  "response_size": 1024,
  "duration_ms": 50,
  "timestamp": "2025-01-01T00:00:00Z",
  "origin_url": "https://origin.example.com/QmContentHash"
}
```

All nodes subscribe to the same `cdn.requests` subject to track egress across the distributed CDN network. When a node receives a RequestEvent, it updates its local egress tracking for the client.
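A sketch of that subscriber using `github.com/nats-io/nats.go`; the `applyToLocalEgress` callback is a hypothetical hook into the node's egress tracking:

```go
// Subscribe to cdn.requests and fold each RequestEvent into local egress tracking.
package events

import (
	"encoding/json"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

type RequestEvent struct {
	RequestID    string    `json:"request_id"`
	ClientID     string    `json:"client_id"`
	Domain       string    `json:"domain"`
	ContentHash  string    `json:"content_hash"`
	CacheHit     bool      `json:"cache_hit"`
	ResponseSize int64     `json:"response_size"`
	DurationMs   int64     `json:"duration_ms"`
	Timestamp    time.Time `json:"timestamp"`
	OriginURL    string    `json:"origin_url"`
}

// Subscribe connects to NATS and applies every received event to local state.
func Subscribe(url, subject string, applyToLocalEgress func(RequestEvent)) (*nats.Subscription, error) {
	nc, err := nats.Connect(url)
	if err != nil {
		return nil, err
	}
	return nc.Subscribe(subject, func(m *nats.Msg) {
		var ev RequestEvent
		if err := json.Unmarshal(m.Data, &ev); err != nil {
			log.Printf("bad RequestEvent: %v", err)
			return
		}
		applyToLocalEgress(ev) // update this node's egress counters for ev.ClientID
	})
}
```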
```
.
├── main.go               # Main server application (using Fiber)
├── config/
│   └── config.go         # Configuration management
├── storage/
│   └── storage.go        # BadgerDB operations
├── events/
│   ├── publisher.go      # NATS event publishing
│   └── subscriber.go     # NATS event subscription
├── config.yaml           # Configuration file
├── docker-compose.yaml   # Development environment
└── Dockerfile            # Container build
```
- Content Storage: Extend `storage/storage.go` for actual file caching
- Traefik Integration: Add middleware in `main.go` to leverage Traefik's caching
- Metrics: Add Prometheus metrics for monitoring
- Authentication: Add client authentication mechanisms
- Fiber Middleware: Leverage Fiber's extensive middleware ecosystem
- Health Check: `GET /health`
- NATS Management: http://localhost:8222 (when using docker-compose)
- Logs: JSON structured logs with configurable levels
- Storage: Configure appropriate BadgerDB settings for production
- Caching: Implement actual file storage and retrieval
- Security: Add rate limiting and authentication (see the rate-limiting sketch after this list)
- Monitoring: Add metrics collection and alerting
- Scaling: Deploy multiple nodes with shared NATS cluster
- Egress Quotas: Configure appropriate quota limits and enforcement policies
- Background Processing: Monitor request queue depth and processing latency
- Error Handling: Implement circuit breakers for origin server failures
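For the rate-limiting item above, one option is Fiber's built-in limiter middleware keyed by the client subdomain; the limits and key function here are illustrative assumptions, not the node's configuration:

```go
// Per-client rate limiting via Fiber's limiter middleware.
package cdn

import (
	"strings"
	"time"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/limiter"
)

func withRateLimit(app *fiber.App) {
	app.Use(limiter.New(limiter.Config{
		Max:        100,             // requests allowed per window (example value)
		Expiration: 1 * time.Minute, // window size (example value)
		KeyGenerator: func(c *fiber.Ctx) string {
			// Rate-limit per client ETH address, i.e. the Host subdomain.
			return strings.SplitN(c.Hostname(), ".", 2)[0]
		},
	}))
}
```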