xline-kv/Xline

[Feature]: Better testing framework


Description of the feature

Background

Currently, our testing framework uses a set of bash scripts to start a cluster of containers and to configure the network and the rest of the environment.

Xline/scripts/benchmark.sh

Lines 88 to 107 in 72f3cf9

run_cluster() {
    server=${1}
    echo cluster starting
    case ${server} in
    xline)
        run_xline 1 &
        run_xline 2 &
        run_xline 3 &
        sleep 3
        ;;
    etcd)
        run_etcd 1
        sleep 3 # in order to let etcd node1 become leader
        run_etcd 2 &
        run_etcd 3 &
        ;;
    esac
    wait
    echo cluster started
}

Xline/scripts/benchmark.sh

Lines 143 to 150 in 72f3cf9

set_latency() {
    container_name=${1}
    dst_ip=${2}
    latency=${3}
    idx=${4}
    docker exec ${container_name} tc filter add dev eth0 protocol ip parent 1:0 u32 match ip dst ${dst_ip} flowid 1:${idx}
    docker exec ${container_name} tc qdisc add dev eth0 parent 1:${idx} handle ${idx}0: netem delay ${latency}
}
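
Note that these tc commands assume a classful root qdisc with handle 1: has already been created on eth0 inside the container, presumably elsewhere in benchmark.sh. A minimal sketch of such a setup, using a default prio qdisc (the helper name is hypothetical):

# Hypothetical helper: create the classful root qdisc that set_latency's
# "parent 1:0" / "flowid 1:${idx}" references attach to.
setup_root_qdisc() {
    container_name=${1}
    # prio creates classes 1:1, 1:2, 1:3 by default, so idx values 1-3 work out of the box.
    docker exec ${container_name} tc qdisc add dev eth0 root handle 1: prio
}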

At present, the best replacement for these functions is docker-compose: a single docker-compose.yml file can manage all of these resources.

Issues

  • Due to their different configurations, benchmark.sh and quick_start.sh each maintain their own startup logic; these are the script sections we really want to merge.
  • We have already seen benchmark.sh fail because its startup script had become stale.

How to do it?

Write a docker-compose file like the following at ci/docker-compose.yml to start a cluster.

version: '3.9'

networks:
  xline_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: "172.18.0.0/16"

services:
  node1:
    image: ghcr.io/xline-kv/xline:latest
    networks:
      xline_network:
        ipv4_address: 172.18.0.2
    volumes:
      - .:/mnt
    ports:
      - "2379:2379"
    environment:
      RUST_LOG: curp=debug,xline=debug
    command: >
      xline
      --name node1
      --members node1=http://172.18.0.2:2379,node2=http://172.18.0.3:2379,node3=http://172.18.0.4:2379
      --storage-engine rocksdb
      --data-dir /usr/local/xline/data-dir
      --auth-public-key /mnt/public.pem
      --auth-private-key /mnt/private.pem
      --is-leader
  node2:
    image: ghcr.io/xline-kv/xline:latest
    networks:
      xline_network:
        ipv4_address: 172.18.0.3
    volumes:
      - .:/mnt
    ports:
      - "2380:2379"
    environment:
      RUST_LOG: curp=debug,xline=debug
    command: >
      xline
      --name node2
      --members node1=http://172.18.0.2:2379,node2=http://172.18.0.3:2379,node3=http://172.18.0.4:2379
      --storage-engine rocksdb
      --data-dir /usr/local/xline/data-dir
      --auth-public-key /mnt/public.pem
      --auth-private-key /mnt/private.pem
  node3:
    image: ghcr.io/xline-kv/xline:latest
    networks:
      xline_network:
        ipv4_address: 172.18.0.4
    volumes:
      - .:/mnt
    ports:
      - "2381:2379"
    environment:
      RUST_LOG: curp=debug,xline=debug
    command: >
      xline
      --name node3
      --members node1=http://172.18.0.2:2379,node2=http://172.18.0.3:2379,node3=http://172.18.0.4:2379
      --storage-engine rocksdb
      --data-dir /usr/local/xline/data-dir
      --auth-public-key /mnt/public.pem
      --auth-private-key /mnt/private.pem

Not only can this serve as a quick start for users, but testing and deployment also need only one command: docker-compose -f ci/docker-compose.yml up -d. All resources can be destroyed with docker-compose -f ci/docker-compose.yml down.
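
For CI, the whole lifecycle can then be scripted with standard docker-compose commands (a rough sketch; the node1 service name and the host ports come from the compose file above):

# Sketch of a CI/test wrapper around ci/docker-compose.yml.
docker-compose -f ci/docker-compose.yml up -d      # start the 3-node cluster
docker-compose -f ci/docker-compose.yml ps         # check that all nodes are running
docker-compose -f ci/docker-compose.yml logs node1 # inspect a node's logs if something fails
# ... run benchmark / validation tools against 127.0.0.1:2379-2381 here ...
docker-compose -f ci/docker-compose.yml down       # destroy all resources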

This is the first step. Subsequently, testing-related tools can be mounted under /mnt, and operations such as configuring the container network can be handled in a separate script, as sketched below.
Finally, the quick start document needs to be updated, quick_start.sh should be deleted, and benchmark.sh and validation.sh need to be refactored.
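
As a rough illustration of that separate script, the latency setup from benchmark.sh could be reworked to target the compose-managed containers. This is a sketch only: the file name ci/set_latency.sh, the 50ms value, and the root qdisc setup are assumptions, and the containers would need the NET_ADMIN capability (e.g. via cap_add in the compose file) for tc to work inside them.

#!/bin/bash
# Hypothetical ci/set_latency.sh: add artificial latency between compose-managed
# nodes, mirroring the old set_latency() from benchmark.sh.
set -euo pipefail

compose_file=ci/docker-compose.yml

set_latency() {
    container=${1}
    dst_ip=${2}
    latency=${3}
    idx=${4}
    # Ensure a classful root qdisc exists (ignore the error if it is already there).
    docker exec ${container} tc qdisc add dev eth0 root handle 1: prio 2>/dev/null || true
    docker exec ${container} tc filter add dev eth0 protocol ip parent 1:0 u32 match ip dst ${dst_ip} flowid 1:${idx}
    docker exec ${container} tc qdisc add dev eth0 parent 1:${idx} handle ${idx}0: netem delay ${latency}
}

# Resolve the container ID of the node1 service, then delay its traffic to node2/node3
# (IP addresses taken from ci/docker-compose.yml above).
node1=$(docker-compose -f ${compose_file} ps -q node1)
set_latency ${node1} 172.18.0.3 50ms 1
set_latency ${node1} 172.18.0.4 50ms 2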

Code of Conduct

  • I agree to follow this project's Code of Conduct

Hi @iGxnon, I would like to take on this task.

OK, this issue is assigned to you, @rohansx. Thank you for your contribution.

@Phoenix500526, kindly review PR #699 and inform me if any modifications are necessary. Once confirmed, I will proceed with the documentation process.