UVB - Ultimate Victory Battle
-----------------------------
Ultimate Victory Battle is a game created by Computer Science House. The basic
premise is this: there is a server, and everyone else tries to crash the
server. There are also counters and statistics, but that's just to rank how
hard each player is hitting the server.
Most people end up writing a client which floods a server with traffic. I
thought it would be more fun to write a server which can handle the flood of
traffic. That server is what I have attempted to build here.
Revision 4: LMDB Changelog
--------------------------
1. Store counters in LMDB
In the LMDB branch counters are persisted to disk using LMDB. In order
to achieve reasonable performance we use the MDB_WRITEMAP and MDB_MAPASYNC
options. This sacrifices write durability for speed, which in the case of
UVB is fine (for now).
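For reference, opening the counter environment with those flags looks roughly
like the sketch below; the path argument and the 1 GiB map size are
placeholder values, not the server's actual configuration.

    #include <lmdb.h>
    #include <stdlib.h>

    /* Open an LMDB environment with MDB_WRITEMAP | MDB_MAPASYNC: writes go
     * straight into the memory map and are flushed asynchronously, trading
     * durability for throughput. Returns NULL on failure. */
    static MDB_env *open_counter_env(const char *path)
    {
        MDB_env *env = NULL;

        if (mdb_env_create(&env) != 0)
            return NULL;

        mdb_env_set_mapsize(env, 1UL << 30);   /* 1 GiB, arbitrary example */

        if (mdb_env_open(env, path, MDB_WRITEMAP | MDB_MAPASYNC, 0664) != 0) {
            mdb_env_close(env);
            return NULL;
        }
        return env;
    }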
2. Modified buffer_append and added buffer_fast_clear.
These changes make it possible to avoid reallocing and memsetting the buffer
memory during a clear. We still use the default realloc/memset path, but for
printing the status page we avoid that and just use buffer_fast_clear, which
resets the buffer index to 0. These changes will probably be propagated out
to master soon.
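A rough sketch of the idea, using made-up field names rather than the
project's actual buffer definition:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical buffer layout for illustration only. */
    struct buffer {
        char   *data;
        size_t  len;   /* bytes currently used */
        size_t  cap;   /* bytes allocated */
    };

    /* Default clear: zero the memory before reuse. */
    static void buffer_clear(struct buffer *buf)
    {
        memset(buf->data, 0, buf->cap);
        buf->len = 0;
    }

    /* Fast clear: just rewind the write index. The old bytes are simply
     * overwritten by the next append, so no memset or realloc is needed. */
    static void buffer_fast_clear(struct buffer *buf)
    {
        buf->len = 0;
    }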
3. Filter out non [a-zA-Z0-9] characters, limit names to 15 characters
Because people are shitlords
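The filtering amounts to something like this hypothetical helper:

    #include <ctype.h>
    #include <stddef.h>

    #define MAX_NAME_LEN 15

    /* Copy only [a-zA-Z0-9] characters from the raw name, keeping at most
     * 15 of them. out must hold MAX_NAME_LEN + 1 bytes. */
    static void sanitize_name(const char *raw, char *out)
    {
        size_t n = 0;

        for (const char *p = raw; *p != '\0' && n < MAX_NAME_LEN; p++) {
            if (isalnum((unsigned char)*p))
                out[n++] = *p;
        }
        out[n] = '\0';
    }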
4. Pipelined requests.
Previously there was a 1-1 mapping between a TCP stream and a request. When
a single request finished, we cleaned everything up and closed the
connection. Now we no longer assume this, and instead keep handling HTTP
requests from a connection until it stops sending them.
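Conceptually the per-connection loop now looks something like the sketch
below, using the joyent/http-parser callbacks mentioned in the Revision 3
notes. handle_request is a hypothetical stand-in for the real handler, and
the loop is shown blocking for simplicity rather than driven by epoll.

    #include <http_parser.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Placeholder: bump the player's counter and queue a response for the
     * connection stored in parser->data. */
    static void handle_request(http_parser *parser)
    {
        (void)parser;
    }

    static int on_message_complete(http_parser *parser)
    {
        handle_request(parser);
        return 0;              /* keep parsing; the connection stays open */
    }

    static void serve_connection(int fd)
    {
        char buf[4096];
        http_parser parser;
        http_parser_settings settings = {0};

        settings.on_message_complete = on_message_complete;
        http_parser_init(&parser, HTTP_REQUEST);
        parser.data = &fd;

        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n <= 0)
                break;         /* peer closed or errored; only now clean up */
            /* Any pipelined requests in this chunk all get parsed here. */
            http_parser_execute(&parser, &settings, buf, (size_t)n);
        }
        close(fd);
    }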
Revision 4 Changelog
--------------------
1. Use SO_REUSEPORT
Previously connections were load-balanced across threads via EPOLL_ONESHOT
and a shared epoll fd. While this works fine, we might be able to get more
performance if SO_REUSEPORT is used, since it allows multiple threads to
bind on the same port while the kernel balances connections across them.
This way we hopefully get better cache locality, because each connection is
pinned to a single thread which is pinned to a single CPU.
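Setting up a per-thread listening socket this way looks roughly like the
following sketch (error handling omitted; SO_REUSEPORT needs Linux 3.9+):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Each thread calls this, then runs its own accept/epoll loop on the
     * returned fd. The kernel spreads incoming connections across all of
     * the sockets bound to the same port. */
    static int listen_reuseport(uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        struct sockaddr_in addr;

        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, SOMAXCONN);
        return fd;
    }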
2. Each thread is pinned to a single CPU.
Connections are no longer shared among threads, so if we pin threads to CPUs
we might improve performance.
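On Linux the pinning itself is only a few lines, roughly:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Pin the calling thread to one CPU; error handling omitted. */
    static void pin_to_cpu(int cpu)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }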
Revision 3 Changelog
--------------------
1. No longer using libevent
libevent was a good choice to begin with: it got me used to working in C and
let me focus on the database that was backing the counter increment system.
However, once I started to move to a multithreaded model libevent became much
harder to use. I ripped all that out and replaced it with a pure epoll
based event loop. uvbserver2 was my first attempt at this; it worked,
kinda. I cleaned up the code and fixed those issues to create the
current version you see here.
2. joyent/http-parser instead of libevent's builtin HTTP library
Without libevent I had to rewrite all the quick and easy code I had for
handling HTTP requests. I did this using a custom buffer implementation I
wrote and Joyent's excellent http-parser library.
3. Multithreaded epoll
This is the big one. By using straight epoll I was able to run an event
loop for handling connections on each CPU. The current system has roughly
uniform work distribution between the threads and is capable of handling
a massive number of requests.
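The skeleton of each thread's loop looks something like this; handle_io is
a hypothetical stand-in for the real read/parse/respond path.

    #include <sys/epoll.h>
    #include <sys/socket.h>

    #define MAX_EVENTS 64

    static void handle_io(int fd);   /* stand-in: read, parse, respond */

    /* One of these runs per thread, each on its own SO_REUSEPORT listener. */
    static void event_loop(int listen_fd)
    {
        struct epoll_event ev, events[MAX_EVENTS];
        int epfd = epoll_create1(0);

        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == listen_fd) {
                    /* New connection on this thread's listener. */
                    int conn = accept(listen_fd, NULL, NULL);
                    ev.events = EPOLLIN;
                    ev.data.fd = conn;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
                } else {
                    handle_io(events[i].data.fd);
                }
            }
        }
    }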
In Progress Work
----------------
1. Per-user counters
Right now there is a single global counter, which I used to verify my ideas
about multithreaded counters and for testing the HTTP server. Once I find
a hash table to my liking, per-user counters will return.
2. Persistence
Since I rewrote the server from the ground up, persistence no longer works.
I need to rebuild the database backend in order to handle multithreading.
3. Distributed UVB
I've been pretty focused on single machine performance. Eventually this
won't be enough. So I've been thinking about how to make UVB work across
multiple machines.