How to reproduce Embench for riscv32
fanghuaqi opened this issue · 11 comments
Hi there, I found slides here talking about RISC-V Embench results.
In these slides, it says RISC-V code size is close to Arm's.
I tried to reproduce this by running the command below:
./build_all.py --arch riscv32 --chip generic --board ri5cyverilator --cc riscv-nuclei-elf-gcc --cflags="-c -march=rv32imc -mabi=ilp32 -Os -ffunction-sections -fdata-sections" --ldflags="-Wl,-gc-sections -march=rv32imc -mabi=ilp32 -Os --specs=nosys.specs --specs=nano.specs" --user-libs="-lm" --clean
The riscv-nuclei-elf-gcc toolchain, version 9.2.0, can be found here: https://nucleisys.com/download.php
Then I checked the code size against the reference data with the command python ./benchmark_size.py:
Benchmark size
--------- ----
aha-mont64 1.88
crc32 3.98
cubic 22.55
edn 2.00
huffbench 2.33
matmult-int 3.32
minver 6.37
nbody 8.77
nettle-aes 1.73
nettle-sha256 1.97
nsichneu 1.33
picojpeg 1.39
qrduino 1.30
sglib-combined 1.62
slre 1.74
st 8.46
statemate 1.08
ud 4.81
wikisort 2.87
--------- -----
Geometric mean 2.84
Geometric SD 2.19
Geometric range 4.94
All benchmarks sized successfully
Absolute results, using the command python ./benchmark_size.py --absolute:
Benchmark size
--------- ----
aha-mont64 2,010
crc32 1,130
cubic 35,718
edn 2,646
huffbench 2,890
matmult-int 1,632
minver 7,442
nbody 8,328
nettle-aes 3,722
nettle-sha256 6,680
nsichneu 15,928
picojpeg 9,678
qrduino 7,554
sglib-combined 3,674
slre 3,822
st 8,456
statemate 4,838
ud 3,464
wikisort 12,326
--------- -----
Geometric mean 5,228
Geometric SD 2.27
Geometric range 9,561.58631295538
All benchmarks sized successfully
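For reference, the summary rows are geometric statistics over the 19 per-benchmark values: the geometric mean is exp(mean(log x)) and the geometric SD is exp(sd(log x)); the range row in both tables above matches geomean*gsd - geomean/gsd up to rounding, which suggests that is the formula benchmark_size.py uses. Below is a minimal sketch of the computation, assuming sizes.txt is a hypothetical file with one plain (comma-free) size per line; whether the script uses n or n-1 in the SD is a detail I have not checked:
$ awk '{ x[NR] = log($1); s += x[NR] }
    END {
      m = s / NR
      for (i = 1; i <= NR; i++) v += (x[i] - m) ^ 2
      gm = exp(m); gsd = exp(sqrt(v / (NR - 1)))
      printf "geomean %.0f  gsd %.2f  range %.2f\n", gm, gsd, gm * gsd - gm / gsd
    }' sizes.txt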
From the results you can see that the code sizes of cubic, minver, and st
are much worse than the reference results. Are any of my steps wrong? Please give me some hints.
Thanks
Huaqi
Hi, as far as I understand, you only want to measure the size of the programs, not of the libraries. So you should make sure to include the argument --dummy-libs="crt0 libc libgcc libm" to use dummy libraries instead.
With this I get pretty close to those numbers, at a 1.05 geomean. Still not quite there, so either I am also missing something, or it's due to some changes in the computation of the score. I would assume the latter, as I can see the same number in https://github.com/embench/embench-iot-results
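A quick way to sanity-check what the dummy libraries change is to inspect a built benchmark's section sizes with the toolchain's size utility (the path below assumes embench-iot's default bd/ build directory; adjust it if yours differs):
$ riscv-nuclei-elf-size bd/src/cubic/cubic
With the crt0/libc/libgcc/libm stubs linked in, the text figure should shrink to roughly the benchmark code alone.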
Hi @sobuch, thanks for your explanation. Is it the same for Arm targets? Following your suggestion, I tried this command:
./build_all.py --arch riscv32 --chip generic --board ri5cyverilator --cc riscv-nuclei-elf-gcc --cflags="-c -march=rv32imc -mabi=ilp32 -Os -ffunction-sections -fdata-sections" --ldflags="-Wl,-gc-sections -march=rv32imc -mabi=ilp32 -Os --specs=nosys.specs --specs=nano.specs -nostartfiles" --dummy-libs="crt0 libc libgcc libm" --clean
I got the following results:
Benchmark size
--------- ----
aha-mont64 1.03
crc32 0.80
cubic 1.67
edn 1.13
huffbench 1.37
matmult-int 0.91
minver 1.03
nbody 1.03
nettle-aes 1.31
nettle-sha256 1.64
nsichneu 1.25
picojpeg 1.23
qrduino 1.08
sglib-combined 1.09
slre 1.24
st 1.01
statemate 0.83
ud 1.05
wikisort 1.05
--------- -----
Geometric mean 1.12
Geometric SD 1.21
Geometric range 0.43
All benchmarks sized successfully
Thanks
Huaqi
I also use -msave-restore; these are probably the exact flags you want: https://github.com/embench/embench-iot-results/blob/master/details/ri5cy-rv32imc-gcc-9.2-os.mediawiki#tool-chain-flags-used-in-benchmarking
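For context: -msave-restore makes GCC call the shared __riscv_save_N/__riscv_restore_N millicode routines in function prologues and epilogues instead of emitting the register save/restore sequences inline, which usually shrinks code size at a small speed cost. A sketch of how to see the effect, with foo.c as a hypothetical stand-in source file:
$ riscv-nuclei-elf-gcc -Os -march=rv32imc -mabi=ilp32 -msave-restore -c foo.c -o foo.o
$ riscv-nuclei-elf-objdump -dr foo.o | grep riscv_save
Any function that has callee-saved registers to preserve should now show calls (relocations) to one of the __riscv_save_N routines.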
Hi @sobuch, thanks.
This is what I get with -msave-restore:
$ ./build_all.py --arch riscv32 --chip generic --board ri5cyverilator --cc riscv-nuclei-elf-gcc --cflags="-c -march=rv32imc -mabi=ilp32 -Os -msave-restore -ffunction-sections -fdata-sections" --ldflags="-Wl,-gc-sections -march=rv32imc -msave-restore -mabi=ilp32 -Os --specs=nosys.specs -nostartfiles" --dummy-libs="crt0 libc libgcc libm" --clean
aha-mont64
crc32
cubic
edn
huffbench
matmult-int
minver
nbody
nettle-aes
nettle-sha256
nsichneu
picojpeg
qrduino
sglib-combined
slre
st
statemate
ud
wikisort
All benchmarks built successfully
(base) hqfang@softserver [15:48:25]:~/workspace/software/embench-iot
$ python ./benchmark_size.py
Benchmark size
--------- ----
aha-mont64 0.99
crc32 0.81
cubic 1.56
edn 1.09
huffbench 1.33
matmult-int 0.85
minver 0.91
nbody 0.89
nettle-aes 1.26
nettle-sha256 1.63
nsichneu 1.26
picojpeg 1.15
qrduino 1.04
sglib-combined 1.02
slre 1.16
st 0.88
statemate 0.82
ud 1.02
wikisort 0.97
--------- -----
Geometric mean 1.06
Geometric SD 1.22
Geometric range 0.43
All benchmarks sized successfully
I also ran it for the STM32F4-Discovery board, and got the following results:
- Case 1: dummy libraries used instead of the system libraries
$ ./build_all.py --arch arm --chip cortex-m4 --board stm32f4-discovery --cc arm-none-eabi-gcc --cflags="-c -march=armv7-m -mcpu=cortex-m4 -mfloat-abi=soft -Os -ffunction-sections -fdata-sections" --ldflags="-Wl,-gc-sections -O2 -march=armv7-m -mcpu=cortex-m4 -mfloat-abi=soft --specs=nosys.specs -nostartfiles" --dummy-libs="crt0 libc libgcc libm" --clean
aha-mont64
crc32
cubic
edn
huffbench
matmult-int
minver
nbody
nettle-aes
nettle-sha256
nsichneu
picojpeg
qrduino
sglib-combined
slre
st
statemate
ud
wikisort
All benchmarks built successfully
(base) hqfang@softserver [15:43:24]:~/workspace/software/embench-iot
$ python ./benchmark_size.py
Benchmark size
--------- ----
aha-mont64 1.01
crc32 0.96
cubic 1.01
edn 0.99
huffbench 1.01
matmult-int 0.98
minver 1.01
nbody 0.99
nettle-aes 0.99
nettle-sha256 1.00
nsichneu 1.29
picojpeg 1.04
qrduino 1.00
sglib-combined 1.01
slre 0.99
st 0.99
statemate 1.00
ud 0.98
wikisort 1.00
--------- -----
Geometric mean 1.01
Geometric SD 1.06
Geometric range 0.12
All benchmarks sized successfully
- Case 2: the system libraries' sizes are included:
$ ./build_all.py --arch arm --chip cortex-m4 --board stm32f4-discovery --cc arm-none-eabi-gcc --cflags="-c -march=armv7-m -mcpu=cortex-m4 -mfloat-abi=soft -Os -ffunction-sections -fdata-sections" --ldflags="-Wl,-gc-sections -O2 -march=armv7-m -mcpu=cortex-m4 -mfloat-abi=soft --specs=nosys.specs" --user-libs="-lm" --clean
aha-mont64
crc32
cubic
edn
huffbench
matmult-int
minver
nbody
nettle-aes
nettle-sha256
nsichneu
picojpeg
qrduino
sglib-combined
slre
st
statemate
ud
wikisort
All benchmarks built successfully
(base) hqfang@softserver [15:52:13]:~/workspace/software/embench-iot
$ python ./benchmark_size.py
Benchmark size
--------- ----
aha-mont64 1.80
crc32 3.94
cubic 9.29
edn 1.88
huffbench 1.95
matmult-int 3.36
minver 4.07
nbody 4.74
nettle-aes 1.39
nettle-sha256 1.31
nsichneu 1.36
picojpeg 1.21
qrduino 1.25
sglib-combined 1.52
slre 1.55
st 4.54
statemate 1.19
ud 3.61
wikisort 1.97
--------- -----
Geometric mean 2.26
Geometric SD 1.79
Geometric range 2.78
All benchmarks sized successfully
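For what it's worth, in case 2 most of the extra size presumably comes from real library code being counted: the math-heavy benchmarks (cubic, minver, nbody, st) pull in soft-float libm routines, while tiny ones like crc32 are dominated by fixed libc overhead. A quick sketch to list the largest symbols in a linked binary (the path assumes the default bd/ build directory):
$ arm-none-eabi-nm --print-size --size-sort bd/src/cubic/cubic | tail -n 20
With --user-libs="-lm" the top entries are typically libm/libgcc soft-float routines rather than benchmark code.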
Thanks
Huaqi
These slides are from December 2019, when Embench was still in development, and we were using GCC 9.2 on PULP RI5CY as the baseline.
We rebased for the 0.5 release in February 2020, to use Arm Cortex-M4 with GCC 9.2 as the baseline. We chose this because it was a more stable industrial standard. You can find the official results in the embench-iot-results repository.
With my colleague @PaoloS02, I have just finished collecting new data on the latest release of GCC. I am preparing a slide deck for general use with all this data in it. I'll be making it available at our next monthly videoconference on Monday 21 September at 15:00 UTC. Details will be posted on the Embench mailing list very shortly; perhaps you will be able to join the meeting and ask any questions.
Hi @jeremybennett, thank you very much for the information. I'll check the mailing list for updates. Where can I get the meeting link?
Thanks
Huaqi
@fanghuaqi If you email me directly (jeremy.bennett@embecosm.com), I'll send you the Zoom invite; I'm reluctant to post it on an open mailing list.
@jeremybennett OK, just sent the email, thanks.
Closed, since I am able to run Embench now. Thanks @jeremybennett @sobuch
@fanghuaqi Good to hear you have success.