Performance Tuning
hosom opened this issue · 0 comments
hosom commented
We are using 32 ST8000AS0002 drives in JBOD to store packets and 2 more in RAID 1 to store index files.
The drives are rated for 150MB/s sustained writes, and in testing with dd every drive reaches 185MB/s. I know that dd isn't precisely the best test, but it should be good enough to get a sense of whether the drives are the issue.
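For reference, here is a sketch of the sort of per-drive dd write test we ran; the mount point and transfer size are placeholders rather than our exact invocation:

```
# Rough sequential-write check for a single drive. /mnt/jbod0 and the
# 8GiB transfer size are placeholders. oflag=direct bypasses the page
# cache so the number reflects the disk rather than RAM.
dd if=/dev/zero of=/mnt/jbod0/ddtest bs=1M count=8192 oflag=direct
```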
We are seeing stenotype log drop rates as high as 62% on a thread, while writes measured with iostat peak at only 21 MB/s:
stenographer[19277]: 2018-04-04T20:13:02.533949Z T:61ffb7 [stenotype.cc:519] Thread 11 stats: MB=8600 secs=1113.04 MBps=7.72657 packets=9210727 blocks=8600 polls=0 drops=15291663 drop%=62.4089
stenographer[19277]: 2018-04-04T20:13:02.844822Z T:60ff97 [stenotype.cc:519] Thread 12 stats: MB=11000 secs=1113.35 MBps=9.88006 packets=11721539 blocks=11000 polls=0 drops=13577492 drop%=53.668
stenographer[19277]: 2018-04-04T20:13:02.901094Z T:5affd7 [stenotype.cc:519] Thread 14 stats: MB=12400 secs=1113.41 MBps=11.137 packets=13241645 blocks=12400 polls=0 drops=12242034 drop%=48.0387
stenographer[19277]: 2018-04-04T20:13:03.491155Z T:1effd7 [stenotype.cc:519] Thread 24 stats: MB=12300 secs=1114 MBps=11.0413 packets=13126634 blocks=12300 polls=4872 drops=12050726 drop%=47.8633
stenographer[19277]: 2018-04-04T20:13:03.979704Z T:63fff7 [stenotype.cc:519] Thread 8 stats: MB=12500 secs=1114.49 MBps=11.2159 packets=13333628 blocks=12500 polls=0 drops=12369467 drop%=48.1244
stenographer[19277]: 2018-04-04T20:13:04.318537Z T:1dffb7 [stenotype.cc:519] Thread 25 stats: MB=10600 secs=1114.82 MBps=9.50825 packets=11320772 blocks=10600 polls=7819 drops=13939432 drop%=55.1834
stenographer[19277]: 2018-04-04T20:13:04.718620Z T:5b7fe7 [stenotype.cc:519] Thread 13 stats: MB=12000 secs=1115.23 MBps=10.7601 packets=12796856 blocks=12000 polls=2086 drops=12400150 drop%=49.2128
stenographer[19277]: 2018-04-04T20:13:05.575527Z T:777fe7 [stenotype.cc:519] Thread 5 stats: MB=10400 secs=1116.09 MBps=9.31827 packets=11071242 blocks=10400 polls=0 drops=14456241 drop%=56.6301
stenographer[19277]: 2018-04-04T20:13:05.938896Z T:477fe7 [stenotype.cc:519] Thread 17 stats: MB=12300 secs=1116.45 MBps=11.0171 packets=13123156 blocks=12300 polls=710 drops=12335065 drop%=48.4522
stenographer[19277]: 2018-04-04T20:13:06.030035Z T:07fff7 [stenotype.cc:519] Thread 26 stats: MB=10400 secs=1116.53 MBps=9.31454 packets=11110139 blocks=10400 polls=6295 drops=14208164 drop%=56.1182
stenographer[19277]: 2018-04-04T20:13:07.477289Z T:1ffff7 [stenotype.cc:519] Thread 22 stats: MB=9000 secs=1117.98 MBps=8.05022 packets=9660082 blocks=9000 polls=2511 drops=15403233 drop%=61.4573
Sample `iostat -m 2` output makes the box look like it's hardly working:
avg-cpu: %user %nice %system %iowait %steal %idle
0.83 0.00 1.84 0.00 0.00 97.33
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sdag 1.00 0.00 0.01 0 0
sdah 65.00 0.00 0.55 0 1
sde 7.50 0.00 1.51 0 3
sdh 22.50 0.00 4.52 0 9
sdq 15.00 0.00 3.01 0 6
sdf 25.00 0.00 5.02 0 10
sdb 10.00 0.00 2.01 0 4
sdn 17.50 0.00 3.51 0 7
sda 10.00 0.00 2.01 0 4
sdi 17.50 0.00 3.51 0 7
sdc 15.00 0.00 3.01 0 6
sdv 25.00 0.00 5.02 0 10
sdp 12.50 0.00 2.51 0 5
sdr 30.00 0.00 6.02 0 12
sdm 22.50 0.00 4.52 0 9
sdl 35.00 0.00 7.03 0 14
sdd 25.00 0.00 5.02 0 10
sdz 12.50 0.00 2.51 0 5
sdk 15.00 0.00 3.01 0 6
sdt 12.50 0.00 2.51 0 5
sdae 17.50 0.00 3.51 0 7
sdg 22.50 0.00 4.52 0 9
sdy 27.50 0.00 5.52 0 11
sdj 7.50 0.00 1.51 0 3
sdaa 35.00 0.00 7.03 0 14
sdab 35.00 0.00 7.03 0 14
sds 20.00 0.00 4.02 0 8
sdx 20.00 0.00 4.02 0 8
sdaf 12.50 0.00 2.51 0 5
sdw 40.00 0.00 8.03 0 16
sdu 17.50 0.00 3.51 0 7
sdo 30.00 0.00 6.02 0 12
sdac 27.50 0.00 5.52 0 11
sdad 10.00 0.00 2.01 0 4
We have tested the following (see the config sketch after this list):
- Migrating all of our drives to xfs
- --preallocate_file_mb=4096
- --aiops=512
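For completeness, a sketch of how we are passing those stenotype flags through the Flags array of the stenographer config; the paths, interface, and port below are placeholders, not our production values, and only one of the 32 threads is shown:

```
# Placeholder paths/interface/port; only the Flags entries reflect the
# values we actually tested.
cat > /etc/stenographer/config <<'EOF'
{
  "Threads": [
    { "PacketsDirectory": "/data/thread0/packets",
      "IndexDirectory": "/data/thread0/index",
      "DiskFreePercentage": 10 }
  ],
  "StenotypePath": "/usr/local/bin/stenotype",
  "Interface": "eth0",
  "Port": 1234,
  "Flags": ["--preallocate_file_mb=4096", "--aiops=512"],
  "CertPath": "/etc/stenographer/certs"
}
EOF
```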