CouchDB read & write benchmark

Using: CouchDB, IBM Cloud Functions with PyWren, and RabbitMQ.

This benchmark tests the write/read throughput that a single CouchDB server can handle, using IBM Cloud Functions with PyWren as the reader/writer clients.

It uses RabbitMQ to synchronize all the functions and to trigger the benchmark at the same time for all of them.
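
A minimal sketch of how this start barrier could look with pika is shown below (the exchange name, queue setup, and connection URL are illustrative assumptions, not taken from the actual script):

  import pika

  # Hypothetical connection parameters; the real script's names may differ.
  RABBITMQ_URL = 'amqp://guest:guest@rabbitmq.example.com:5672/'
  START_EXCHANGE = 'benchmark_start'

  def wait_for_start_signal():
      """Block until the orchestrator publishes the start message."""
      connection = pika.BlockingConnection(pika.URLParameters(RABBITMQ_URL))
      channel = connection.channel()
      # A fanout exchange delivers one published message to every bound
      # queue, so all waiting functions are released at the same time.
      channel.exchange_declare(exchange=START_EXCHANGE, exchange_type='fanout')
      result = channel.queue_declare(queue='', exclusive=True)
      channel.queue_bind(exchange=START_EXCHANGE, queue=result.method.queue)
      for _method, _properties, _body in channel.consume(result.method.queue):
          break  # First message received: the benchmark can begin.
      connection.close()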

In addition, each function operates on its own unique document, distinct from the documents of all other functions.

When the test starts, every function writes/reads continuously for a fixed amount of time (burst_time), saving a timestamp for every action performed.
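
As an illustration, the core write loop inside each function could look like the sketch below (the endpoint, database, and document names are placeholders; bm_couchdb.py may structure this differently):

  import time
  import requests

  def write_burst(endpoint, db, doc_id, payload, burst_time):
      """Overwrite a single document repeatedly for burst_time seconds,
      recording a timestamp for every successful write."""
      timestamps = []
      rev = None
      deadline = time.time() + burst_time
      while time.time() < deadline:
          doc = {'data': payload}
          if rev:
              doc['_rev'] = rev  # CouchDB requires the latest revision to update.
          resp = requests.put('{}/{}/{}'.format(endpoint, db, doc_id), json=doc)
          resp.raise_for_status()
          rev = resp.json()['rev']
          timestamps.append(time.time())
      return timestamps

A read burst would be analogous, issuing GET requests on the same document and recording a timestamp per successful read.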

Replication

Only one script is used to run this test: bm_couchdb.py.
You must also edit the variable COUCHDB_ENDPOINT, which holds the endpoint of the CouchDB server.
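For example (a placeholder address, not a real server):

  COUCHDB_ENDPOINT = 'http://admin:password@couchdb.example.com:5984'

The script is then invoked as follows: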

  python3 bm_couchdb.py -n <num_action_performers> -t <burst_time_sec> -s <msg_size_bytes>

Each run plots two graphs, showing the average and the total number of read/write
actions performed across all invocations during each one-second interval ([0,1), [1,2), ...).
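
A sketch of this bucketing step, assuming each invocation returns its list of action timestamps (function and variable names are illustrative):

  from collections import Counter

  def throughput_per_second(all_timestamps, start_time):
      """Count actions per one-second interval [0,1), [1,2), ...
      and return both the totals and the per-invocation averages."""
      totals = Counter()
      for invoke_timestamps in all_timestamps:    # one list per invocation
          for ts in invoke_timestamps:
              totals[int(ts - start_time)] += 1   # bucket index = elapsed whole seconds
      total = [totals[i] for i in range(max(totals) + 1)]
      average = [t / len(all_timestamps) for t in total]
      return total, average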

Note: throughout the documentation, documents are also referred to as messages.

Examples

  python3 bm_couchdb.py -n 600 -t 20 -s 250
  python3 bm_couchdb.py -n 100 -t 20 -s 10000

Some results

Hardware used:

  • Instance: Balanced B1.2x4
  • CPU: 2x vCPU
  • RAM: 4GB
  • Bandwidth: 1Gbps
  • OS: Ubuntu Linux 18.04 LTS Bionic Beaver Minimal Install (64 bit)

Both the server and the functions shared the same region (London).

Observations:

In this test we use document sizes of 250 bytes to measure overhead limits or even simulate a (small) state update.
Many graphs show instability in the server, likely because it is very limited in resources, but it serves well enough for comparisons with other services.

We can conclude that the average throughput settles at around 1200 writes per second and 1300 reads per second.

Finally, these are the graphs for the test run with message/document sizes of 25 kB. We can observe that the increase in size only affected the throughput by roughly 10%.