UnixBench Benchmark
This repository contains the setup and runtime configurations for the
UnixBench benchmark. UnixBench is the original BYTE UNIX benchmark suite,
updated and revised by many people over the years. The purpose of UnixBench
is to provide a basic indicator of the performance of a Unix-like system;
hence, multiple tests are used to test various aspects of the system's
performance. These test results are then compared to the scores from a
baseline system to produce an index value, which is generally easier to
handle than the raw scores. The entire set of index values is then combined
to make an overall index for the system.

For more information, review the project website:
https://code.google.com/p/byte-unixbench/

You may download the source there or from our S3 bucket here:
https://s3.amazonaws.com/cloudbench/software/UnixBench5.1.3.tgz

To install UnixBench:

  tar zxf UnixBench5.1.3.tgz
  cd UnixBench
  make all

NOTE: this benchmark clears the UnixBench/results directory during and
following each test iteration. Any existing results will be removed.
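Once compiled, the build can optionally be sanity checked by invoking the
UnixBench 'Run' script directly before using this benchmark's automation.
The single fast test and copy count below are illustrative only:

  # optional: confirm the build works by running one copy of a quick test
  cd UnixBench
  ./Run -c 1 dhry2reg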
TESTING PARAMETERS
Testing behavior is mostly fixed, with the exception of selecting individual
tests and the number of copies for multi-core testing. You may also specify
optional meta attributes and installation attributes. The meta attributes
will be included in the results (see save.sh). UnixBench should be installed
and compiled before running this benchmark. Review the 'unixbench_dir'
parameter comments below for instructions.

* collectd_rrd: If set, collectd rrd stats will be captured from
  --collectd_rrd_dir. To do so, when testing starts, existing directories in
  --collectd_rrd_dir will be renamed to .bak, and upon test completion any
  directories not ending in .bak will be zipped and saved along with other
  test artifacts (as collectd-rrd.zip). The user MUST have sudo privileges
  to use this option. A sketch of this behavior follows this list

* collectd_rrd_dir: Location where collectd rrd files are stored. The
  default is /var/lib/collectd/rrd

* meta_burst: If set to 1, designates testing performed in burst mode
  (e.g. Amazon EC2 t-series burst)

* meta_compute_service: The name of the compute service this test pertains
  to. May also be specified using the environment variable bm_compute_service

* meta_compute_service_id: The id of the compute service this test pertains
  to. Added to saved results. May also be specified using the environment
  variable bm_compute_service_id

* meta_cpu: CPU descriptor. If not specified, it will be set using the
  'model name' attribute in /proc/cpuinfo

* meta_instance_id: The compute service instance type this test pertains to
  (e.g. c3.xlarge). May also be specified using the environment variable
  bm_instance_id

* meta_memory: Memory descriptor. If not specified, the system memory size
  will be used

* meta_os: Operating system descriptor. If not specified, it will be taken
  from the first line of /etc/issue

* meta_provider: The name of the cloud provider this test pertains to. May
  also be specified using the environment variable bm_provider

* meta_provider_id: The id of the cloud provider this test pertains to. May
  also be specified using the environment variable bm_provider_id

* meta_region: The compute service region this test pertains to. May also be
  specified using the environment variable bm_region

* meta_resource_id: An optional benchmark resource identifier. May also be
  specified using the environment variable bm_resource_id

* meta_run_id: An optional benchmark run identifier. May also be specified
  using the environment variable bm_run_id

* meta_storage_config: Storage configuration descriptor. May also be
  specified using the environment variable bm_storage_config

* meta_test_id: Identifier for the test. May also be specified using the
  environment variable bm_test_id

* multicore_copies: Explicit number of copies to use for multi-core testing.
  The default is the number of CPU cores. This parameter should be a minimum
  of 2 and a maximum of 64 (see the examples following this list)

* nomultithread: Don't run multi-threaded tests

* nosinglethread: Don't run single-threaded tests

* output: The output directory to use for writing test data (results log and
  unixbench text output). If not specified, the current working directory
  will be used

* test: You may optionally specify individual UnixBench tests to run. The
  following tests are included with UnixBench; tests that run by default
  (when this parameter is not set) are marked with *:

    dhry2reg*  whetstone-double*  syscall*  pipe*  context1  spawn*
    execl*  fstime-w  fstime-r  fstime*  fsbuffer-w  fsbuffer-r  fsbuffer*
    fsdisk-w  fsdisk-r  fsdisk*  shell1*  shell8*  shell16
    short  int  long  float  double  arithoh  C  dc  hanoi  grep  sysexec

  This parameter may be repeated to designate multiple tests
  (e.g. --test int --test long); see the examples following this list.
  NOTE: the 'context1' test may cause broken pipe errors; it is excluded by
  default for this reason. Graphics tests are also excluded from the list

* verbose: Show verbose output

* unixbench_dir: Directory where UnixBench is installed. If not specified,
  the benchmark run script will look up the directory tree from both pwd and
  --output for the presence of a 'UnixBench' directory with an executable
  'Run' script in it
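As noted in the 'test' and 'multicore_copies' parameter descriptions above,
individual tests and the multi-core copy count can be set explicitly. The
following invocations are illustrative examples using the documented run.sh
parameters:

  # run only the integer and long arithmetic tests
  ./run.sh --test int --test long

  # run multi-core testing with 4 copies and skip single-threaded tests
  ./run.sh --multicore_copies 4 --nosinglethread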
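The --collectd_rrd capture described above is, in outline, equivalent to the
following sequence. This is a simplified sketch of the documented
rename-and-zip behavior, not the actual implementation, and the 'somehost'
directory name is hypothetical:

  # before testing: set an existing rrd directory aside (hypothetical name)
  sudo mv /var/lib/collectd/rrd/somehost /var/lib/collectd/rrd/somehost.bak

  # after testing: zip any directories not ending in .bak
  cd /var/lib/collectd/rrd
  sudo zip -r collectd-rrd.zip . -x '*.bak' -x '*.bak/*'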
DEPENDENCIES
This benchmark has the following dependencies:

  perl     The UnixBench run script is perl based. NOTE: UnixBench makes use
           of perl-Time-HiRes (HiRes.pm). This package may need to be
           installed separately from perl

  php-cli  Test automation scripts (/usr/bin/php)

  zip      Used to compress test artifacts

TEST ARTIFACTS
This benchmark generates the following artifacts:

  collectd-rrd.zip  collectd RRD files (see --collectd_rrd)
  unixbench.html    HTML formatted UnixBench test report
  unixbench.log     UnixBench test log
  unixbench.txt     Text formatted UnixBench test report

SAVE SCHEMA
The following columns are included in CSV files/tables generated by save.sh.
Indexed MySQL/PostgreSQL columns are identified by *. Columns without
descriptions are documented as runtime parameters above. Data types and
indexing used are documented in save/schema/unixbench.json. Columns can be
removed using the save.sh --remove parameter.

  benchmark_version: [benchmark version]
  collectd_rrd: [URL to zip file containing collectd rrd files]
  iteration: [iteration number (used with incremental result directories)]
  meta_burst
  meta_compute_service
  meta_compute_service_id*
  meta_cpu: [CPU model info]
  meta_cpu_cache: [CPU cache]
  meta_cpu_cores: [# of CPU cores]
  meta_cpu_speed: [CPU clock speed (MHz)]
  meta_instance_id*
  meta_hostname: [system under test (SUT) hostname]
  meta_memory
  meta_memory_gb: [memory in gigabytes]
  meta_memory_mb: [memory in megabytes]
  meta_os_info: [operating system name and version]
  meta_provider
  meta_provider_id*
  meta_region*
  meta_resource_id
  meta_run_id
  meta_storage_config*
  meta_test_id*
  multicore_copies: [number of UnixBench copies used to produce the
                     multicore_score metric]
  multicore_score: [UnixBench multiple copy/multicore score (produced for
                    compute instances with more than 1 core)]
  nomultithread
  nosinglethread
  results_html: [URL to the unixbench.html report (if --store option used)]
  results_log: [URL to the unixbench.log runtime log (if --store option used)]
  results_text: [URL to the unixbench.txt report (if --store option used)]
  score: [UnixBench single copy score. This metric is always produced]
  test_started*: [when the test started]
  test_stopped: [when the test ended]
  test: [space separated list of tests run (see the --test parameter)]

USAGE
  # run 1 test iteration with some metadata
  ./run.sh --meta_compute_service_id aws:ec2 --meta_instance_id c3.xlarge --meta_region us-east-1 --meta_test_id aws-0914

  # run with UnixBench installed in /usr/local/UnixBench
  ./run.sh --unixbench_dir /usr/local/UnixBench

  # run 10 test iterations using a specific output directory
  for i in {1..10}; do mkdir -p ~/unixbench-testing/$i; ./run.sh --output ~/unixbench-testing/$i; done

  # save.sh saves results to CSV, MySQL, PostgreSQL, BigQuery or via HTTP
  # callback. It can also save artifacts (text report) to S3, Azure Blob
  # Storage or Google Cloud Storage

  # save results to CSV files
  ./save.sh

  # save results from the 10 iteration example above
  ./save.sh ~/unixbench-testing

  # save results to a PostgreSQL database
  ./save.sh --db postgresql --db_user dbuser --db_pswd dbpass --db_host db.mydomain.com --db_name benchmarks

  # save results to BigQuery and artifacts (UnixBench reports) to S3
  ./save.sh --db bigquery --db_name benchmark_dataset --store s3 --store_key THISIH5TPISAEZIJFAKE --store_secret thisNoat1VCITCGggisOaJl3pxKmGu2HMKxxfake --store_container benchmarks1234
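If runs fail immediately, it can be worth confirming the dependencies listed
above are actually present. These checks are illustrative; the perl one-liner
simply confirms that the Time::HiRes module loads:

  # confirm the required binaries are on the PATH
  which perl php zip

  # confirm the perl Time::HiRes module is available
  perl -MTime::HiRes -e 'print "Time::HiRes ok\n"'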