A C++11 HTTP proxy
This project uses CMake. To build, do the following:
- mkdir build (if it doesn't already exist)
- cd build
- cmake ..
- make
Features:
- Caches pages and DNS lookups as they are requested by users.
- Checks incoming requests against a blacklist as they arrive.
- Multithreaded: each client session runs in its own thread.
- Link prefetching for cached pages.
- Handles different HTTP transfer encodings (chunked, Content-Length, etc.) efficiently for pipelining.
Start:
http_proxy [port] [page cache timeout (optional)]
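For example, http_proxy 8080 starts the proxy on port 8080 with the default cache timeout, and http_proxy 8080 30 also sets the page cache timeout (the unit of the timeout is an assumption here; seconds is typical).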
The code is roughly organized into the following classes/modules:
- TCPSocket: A wrapper around TCP socket setup/teardown.
- FileCache: Manages the data structures for caching files to disk. Also handles launching threads to pre-fetch links.
- HTTPRequest: Handles parsing and forming well-formed HTTP requests.
- HTTPResponse: Handles parsing and forming well-formed HTTP responses. Also identifies chunks in chunked transfer encoding and determines when all data has been received.
- Session: Manages an individual client session: processes the client request, forwards it to the server, and serves the response back to the client.
- UrlCache: Makes all DNS requests and caches the results. Also checks requested hosts against the blacklist.
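A rough sketch of how these pieces might be declared (all signatures below are illustrative assumptions, not the actual headers):

```cpp
// Illustrative declarations only -- the real headers may differ.
#include <netinet/in.h>   // sockaddr_in
#include <sys/types.h>    // ssize_t
#include <string>

class TCPSocket {          // wraps TCP socket setup/teardown (RAII)
public:
    static TCPSocket connect(const sockaddr_in& addr);
    ~TCPSocket();          // closes the descriptor
    ssize_t send(const std::string& data);
    ssize_t recv(std::string& buffer);
};

class HTTPRequest {        // parses and normalizes an incoming request
public:
    explicit HTTPRequest(const std::string& raw);  // throws on malformed input
    const std::string& host() const;
    const std::string& url() const;
    std::string serialize() const;  // well-formed request line and headers
};

class HTTPResponse {       // tracks chunked vs. Content-Length bodies
public:
    void append(const std::string& data);
    bool complete() const; // true once all chunks/bytes have arrived
};

class UrlCache {           // DNS caching plus blacklist checks
public:
    // Resolves (or returns a cached) address; throws if blacklisted or unavailable.
    sockaddr_in resolve(const std::string& host);
};

class FileCache {          // on-disk page cache plus link-prefetch threads
public:
    bool has(const std::string& url) const;
    std::string read(const std::string& url) const;
    void write(const std::string& url, const std::string& data);
    void prefetch_links(const std::string& url);   // spawns prefetch threads
};
```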
Sequence of events:
- An incoming GET request arrives.
- Launch a Session in a new thread, passing the socket to the Session.
- The socket receives data, and the Session creates a new HTTPRequest object, passing in the data at initialization.
- The request object pulls out the required header information and manipulates the header as necessary (ensuring the new request line and host fields are well-formed). It throws exceptions if anything is off.
- Check the target host against the UrlCache. This automatically checks the blacklist and host availability. The UrlCache throws exceptions if there are any issues.
- Check the requested URL against the cache. If it exists and is still good, we have a cache "hit", and we simply open that file and copy it to the client socket.
- On a cache "miss", open a new socket and initiate a connection using the address from the UrlCache. Send the new request object.
- Monitor the incoming data stream from the remote host and copy it to the client socket, also copying the data to the cache. Both chunked and Content-Length encodings are handled.
- Once all data has been copied, tell the FileCache to initiate link-prefetch processing for that file.
- Return to step 3 until the client is finished.
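Put together, the per-session loop looks roughly like this, a condensed, hypothetical sketch using the illustrative interfaces above (shown as a free function for brevity; error handling is elided):

```cpp
// Hypothetical sketch of the per-session loop -- not the actual implementation.
void run_session(TCPSocket& client, UrlCache& url_cache, FileCache& file_cache) {
    while (true) {
        std::string raw;
        if (client.recv(raw) <= 0) break;              // client is finished

        HTTPRequest request(raw);                       // throws if malformed
        sockaddr_in addr = url_cache.resolve(request.host()); // DNS + blacklist

        if (file_cache.has(request.url())) {            // cache hit: serve from disk
            client.send(file_cache.read(request.url()));
            continue;
        }

        TCPSocket server = TCPSocket::connect(addr);    // cache miss: contact origin
        server.send(request.serialize());

        HTTPResponse response;                          // chunked or Content-Length
        std::string data;
        while (!response.complete() && server.recv(data) > 0) {
            response.append(data);
            client.send(data);                          // stream straight to the client
            file_cache.write(request.url(), data);      // and mirror into the cache
        }
        file_cache.prefetch_links(request.url());       // kick off link prefetching
    }
}
```

The real code also has to cope with partial reads and socket errors, which this sketch omits.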
Design notes:
I organized my code design around the objects I thought were most important: sessions, responses, requests, caches, and sockets. I wrapped the blacklist functionality into the UrlCache, which made things more efficient, since no other object has to interact with the DNS lookup path. I wrapped the pre-fetch functionality into the FileCache, since the two really need to work hand-in-hand. I used regular expressions to efficiently (in terms of both coding time and processing time) identify well-formed links in pages that are already cached; see the sketch below. The most complicated part of the code was monitoring the HTTP download process so that I could quickly return to polling the client. I implemented both chunked-encoding and normal "single-shot" downloading mechanisms.
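As an illustration of the link-identification idea, here is a minimal std::regex sketch (the actual pattern and function name in the code may differ):

```cpp
// Hypothetical link extractor for prefetching; deliberately simple,
// not a full HTML parser.
#include <regex>
#include <string>
#include <vector>

std::vector<std::string> extract_links(const std::string& page) {
    // Matches href="http..." attributes in the cached page.
    static const std::regex href_re("href=\"(http[^\"]+)\"", std::regex::icase);
    std::vector<std::string> links;
    for (auto it = std::sregex_iterator(page.begin(), page.end(), href_re);
         it != std::sregex_iterator(); ++it) {
        links.push_back((*it)[1].str());               // capture group 1: the URL
    }
    return links;
}
```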