High load scenarios - huge virtual memory mmaped.
SylwBar opened this issue · 2 comments
Hi.
I'm pushing aprsc to its performance limits and found that the RX packet rate tops out at a continuous 5000 packets/sec.
Reverse engineering showed it is related to exhausting the cell blocks for pbufs (CELLBLOCKS_MAX = 200).
It looks like it is possible to increase the RX packet rate by increasing CELLBLOCKS_MAX (please confirm) - this is fine.
However, during a prolonged period with exhausted blocks I observed a huge virtual memory reservation by the aprsc process. The reservation grows to several terabytes and the memory is never released; a process restart is required.
A short analysis revealed that the problem could be related to virtual memory not being unmapped in the buffer starvation situation.
In cellmalloc.c:100 there is the following code:
```
cb = mmap( NULL, ca->createsize, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0);
#endif
if (cb == NULL || cb == (char*)-1)
	return -1;
if (ca->cellblocks_count >= CELLBLOCKS_MAX) return -1;
```
It looks like the mmap()ed memory is not unmapped when no more cell blocks may be created.
The following proposed fix resolved the problem for me, but I guess there are other possible ways to implement this logic:
```
if (ca->cellblocks_count >= CELLBLOCKS_MAX)
{
	munmap(cb, ca->createsize);
	return -1;
}
```
Regards
Sylwester
Good catch! I moved the check to the top of the function in b60c41e so that the mmap() is not even attempted if the maximum number of blocks has already been allocated. This will also perform a bit better when the limit is hit.
Right, depending on the type of traffic, the global packet buffer pool may fill up. Instead of adding an awful lot of memory, it is possible to reduce the pbuf_global_expiration time in config.c, so that old packets are not held in memory for so long. In load-testing scenarios like yours I've reduced it to 10 seconds. I should figure out why we left it at 4 minutes.