Question: benchmark about tcmalloc and memcpy
Opened this issue · 5 comments
Hi, I am reading the code and have done some benchmarking:
https://github.com/guangqianpeng/libaco/blob/master/bench_result
I have two questions:
- `tcmalloc` improves the benchmark results. With `aco_amount=1000000` and `copy_stack_size=56B`, the tcmalloc version achieves 37ns per `aco_resume()` operation but the default takes 66ns. Why? In this case, `aco_resume()` does not allocate memory, which is really confusing...
- When copying the stack, you use `%xmm` registers to optimize small memory copies. But according to my benchmark, this does not make much of a difference. I guess `memcpy()` already takes advantage of these registers. Do you have more benchmark results?
I will be very grateful to you for answering my questions :-)
Hi @guangqianpeng,
> `tcmalloc` improves the benchmark results. With `aco_amount=1000000` and `copy_stack_size=56B`, the tcmalloc version achieves 37ns per `aco_resume()` operation but the default takes 66ns. Why? In this case, `aco_resume()` does not allocate memory, which is really confusing...
I think the main reason for such a result is that tcmalloc has many specialized optimizations for memory efficiency and locality, especially for the allocation of small objects, which makes it much better than the vanilla glibc allocator.
> When copying the stack, you use `%xmm` registers to optimize small memory copies. But according to my benchmark, this does not make much of a difference. I guess `memcpy()` already takes advantage of these registers. Do you have more benchmark results?
There is actually a knack in the code, like `__uint128_t xmm0`: gcc will try to use SSE to optimize operations on the `__uint128_t` data type (whereas clang does not, as far as I know). So if you want to use such SSE optimization in libaco now, you could use gcc to compile libaco into a static library and then use a linker to link it with any object file you like. You could also use `objdump` to inspect the actual machine code generated for the `aco_resume` function.
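To make the knack concrete, here is a minimal sketch (not libaco's actual source; the function name, buffer sizes, and the 16-byte-multiple assumption are mine) of how copying through `__uint128_t` invites gcc to emit `%xmm`-based moves:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Copy `n` bytes (assumed here to be a multiple of 16, with both buffers
 * 16-byte aligned) one __uint128_t at a time. Built with `gcc -O2` on
 * x86-64, each assignment in the loop is typically lowered to SSE
 * loads/stores through %xmm registers, which is what the "__uint128_t
 * xmm0" knack counts on; clang may lower it differently. */
static void copy_by_16(void *dst, const void *src, size_t n) {
    __uint128_t *d = (__uint128_t *)dst;
    const __uint128_t *s = (const __uint128_t *)src;
    for (size_t i = 0; i < n / 16; i++)
        d[i] = s[i];
}

int main(void) {
    __uint128_t src[4], dst[4];  /* 64 bytes, a stand-in for a small saved stack */
    memset(src, 0xAB, sizeof(src));
    copy_by_16(dst, src, sizeof(src));
    printf("copy ok: %d\n", memcmp(dst, src, sizeof(src)) == 0);
    return 0;
}
```

Compiling this with `gcc -O2 -S` (or running `objdump -d` on the object file) and looking at the copy loop is a quick way to check whether `%xmm` registers are actually being used.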
Even when the compiler provides no such SSE enhancement, such a "very short inline memcpy" using general-purpose registers is still more efficient than calling the libc memcpy directly. That is because, for such a short copy, the cost of a function call is no longer small enough to neglect. So there would be some gains anyway.
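As a rough illustration only (the 64-byte threshold and the function name below are my assumptions, not libaco's actual code), the idea is:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Copy a saved stack of `n` bytes. For very small copies the word-by-word
 * loop below (which the compiler can unroll and keep in registers once
 * inlined) avoids the call/return overhead of libc's memcpy; larger
 * copies fall back to memcpy, whose SSE/AVX paths win once the size is
 * big enough to amortize the call. The 64-byte cutoff is illustrative,
 * not a measured threshold. */
static inline void save_stack_copy(void *dst, const void *src, size_t n) {
    if (n <= 64 && n % 8 == 0) {
        uint64_t *d = (uint64_t *)dst;
        const uint64_t *s = (const uint64_t *)src;
        for (size_t i = 0; i < n / 8; i++)
            d[i] = s[i];
    } else {
        memcpy(dst, src, n);
    }
}
```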
Maybe in the future we should use SSE directly instead of counting on the compiler's behavior ;-) But I'm afraid that plan has to be postponed, since there is a much more important thing to do now, i.e. #22.
> I will be very grateful to you for answering my questions :-)
All discussions and questions about libaco are always welcome here. Just feel free to open any new issue you like :D
> 2. Do you have more benchmark results?
I did some benchmarking of this conditional memcpy inlining in the past and did get the results I wanted, but I kept no records. I would like to run another test as soon as I have some spare time.
- I used the `perf` tool to check the L1 dcache miss rate of the two versions; the tcmalloc version achieved about half the miss rate of glibc. I guessed there was false sharing or some other cache problem and tried to solve it without tcmalloc, but I finally failed :-(
- I did inspect the assembly code of `aco_resume()` and saw things like `movq %rbx, %xmm1`; I also traced into glibc and found that `memcpy()` finally calls `__memcpy_avx_unaligned()`, which uses the AVX instruction set. What I didn't think of is that the overhead of the `memcpy()` call cannot be ignored, especially for a small stack.
BTW, the libaco project is great, I am looking forward to your next version (especially the co scheduler).
> BTW, the libaco project is great, I am looking forward to your next version (especially the co scheduler).
Thank you very much for your kind encouragement, @guangqianpeng, and I would try to finish the next release as soon as possible ;-)
How about the next release? :)