Embree ASAN issues with Emscripten
jjcasmar opened this issue · 4 comments
I am using Embree with Emscripten, and I use ASAN to catch leaks in my test code.
Recently, I found a leak in the way I was using Embree and fixed it. However, the fix works on x86_64 but not in wasm, where I get the following error:
==42==ERROR: AddressSanitizer failed to deallocate 0xb09000 (11571200) bytes at address 0x20fb0000
==42==ERROR: AddressSanitizer failed to deallocate 0xb09000 (11571200) bytes at address 0x20490000
AddressSanitizer: CHECK failed: sanitizer_posix.cpp:61 "(("unable to unmap" && 0)) != (0)" (0x0, 0x0) (tid=533069872)
AddressSanitizer: CHECK failed: sanitizer_posix.cpp:61 "(("unable to unmap" && 0)) != (0)" (0x0, 0x0) (tid=528744496)
<empty stack>
I have been able to reproduce the issue with this simple code:
#include <gtest/gtest.h>
#include <embree3/rtcore.h>

TEST(Basic, embree)
{
    // Default config string: Embree uses all detected hardware threads.
    auto rtcDevice{rtcNewDevice("")};
    auto rtcScene{rtcNewScene(rtcDevice)};
    rtcCommitScene(rtcScene);

    // Releasing in either order produces the same error.
    rtcReleaseDevice(rtcDevice);
    rtcReleaseScene(rtcScene);
}
If I remove the rtcRelease* calls at the end, I get a memory-leak error. However, if I add them, I get the error mentioned above. The order in which I try to destroy the objects doesn't matter; I get the same error either way.
I have done some testing, and the error only happens if I configure the device with more than one thread.
Is multithreading not allowed in Embree when running in Emscripten?
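For reference, the only difference between the passing and failing runs is the config string passed to rtcNewDevice. This is a minimal sketch of the variant that passes (the test name and the explicit thread count mentioned in the comment are just examples):
#include <gtest/gtest.h>
#include <embree3/rtcore.h>

TEST(Basic, embreeSingleThread)
{
    // Restricting Embree to a single build thread: this variant passes under ASAN.
    // The default config ("") or any explicit count above one, e.g.
    // rtcNewDevice("threads=2"), triggers the unmap error shown above.
    auto rtcDevice{rtcNewDevice("threads=1")};
    auto rtcScene{rtcNewScene(rtcDevice)};
    rtcCommitScene(rtcScene);

    rtcReleaseDevice(rtcDevice);
    rtcReleaseScene(rtcScene);
}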
Embree does not officially support Emscripten; we just enabled compiling with it. I would expect multithreading to work, though. The simple example above should also not do much multithreading, since it only builds an empty scene.
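If it helps with narrowing things down, you can make the device report its configuration and hook the error callback. A sketch using the standard device API (the verbosity level and the plain main() harness are just examples):
#include <embree3/rtcore.h>
#include <cstdio>

// Print Embree errors as they are reported instead of polling rtcGetDeviceError.
void reportError(void* /*userPtr*/, RTCError code, const char* str)
{
    std::fprintf(stderr, "Embree error %d: %s\n", static_cast<int>(code), str);
}

int main()
{
    // "verbose=1" makes the device print information about its configuration
    // at creation time; higher levels print more build details.
    RTCDevice device = rtcNewDevice("verbose=1");
    rtcSetDeviceErrorFunction(device, reportError, nullptr);

    RTCScene scene = rtcNewScene(device);
    rtcCommitScene(scene);

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}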
Even with this simple test code, it still fails with an ASAN error whenever I use anything other than one thread for the RTCDevice. If you are interested, I can try to provide a stack trace, but I completely understand if you prefer to close the issue, as Emscripten is not a supported platform.
We would be happy to accept a pull request in case you can localize the issue. I will close the issue for now; please re-open if you know of a workaround for it.