mitsuba-renderer/drjit

Understanding/Improving Kernel Reuse

errissa opened this issue · 3 comments

I looked at the Mitsuba gradient-based optimization example to understand how kernels are generated and reused. The simple loop in the example launches the same 5 kernels, in the same order, on each iteration. The first time through, the kernels are compiled; on subsequent iterations they are reused. In future runs of the script, the kernels are already cached on disk, so they just get loaded and reused. Nothing surprising.
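For context, here is a minimal sketch of the kind of loop I mean (not the exact example; the scene, parameter key, learning rate, sample counts, and iteration count below are placeholders following the Mitsuba 3 optimization tutorials):

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

# Illustrative scene and parameter; the real example may differ
scene = mi.load_dict(mi.cornell_box())
params = mi.traverse(scene)
key = 'red.reflectance.value'            # hypothetical parameter being optimized

ref = mi.render(scene, spp=64)           # reference image with the original value

opt = mi.ad.Adam(lr=0.05)
opt[key] = mi.Color3f(0.01, 0.2, 0.9)    # start from a perturbed value
params.update(opt)

for it in range(50):
    img = mi.render(scene, params, spp=4)
    loss = dr.mean((img - ref) ** 2)     # simple L2 loss
    dr.backward(loss)                    # reverse-mode AD through the render
    opt.step()
    params.update(opt)
    # Each iteration re-traces the same computation; the generated kernels
    # hash to already-compiled ones and are reused.
```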

However, it looks like (and maybe I'm wrong) that on each iteration DrJIT still traces the operations, generates code, and hashes it. If the hash matches an already loaded, compiled kernel, it can reuse that kernel, but it still goes through the full process of tracing, code generation, and hashing. In a simple example like this one, I know that nothing other than the input data changes, and I know that the same set of kernels will be executed on every iteration, so the tracing/code generation/hashing seems like wasted work.
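For what it's worth, this is roughly how I've been observing that behaviour (a sketch; I'm assuming the `KernelHistory` flag and `dr.kernel_history()` are available in this build, and the exact names/fields may differ between versions):

```python
import drjit as dr

dr.set_log_level(dr.LogLevel.Info)           # logs cache hit/miss per kernel launch
dr.set_flag(dr.JitFlag.KernelHistory, True)  # record launched kernels, if available

# ... run one optimization iteration here ...

for entry in dr.kernel_history():            # one record per launched kernel
    print(entry)                             # includes the kernel hash and timings
```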

Is there a way to tell DrJIT to skip the tracing/code generation/hashing and to reuse a sequence of already loaded and compiled kernels?

Your understanding of the kernel reuse is correct. The tracing work is indeed repeated on every iteration and not cached. The overhead of tracing is very low for C++ code, and for Python code it will decrease significantly with the planned transition to nanobind.

Long term, the tracing should indeed be cached and reused. As far as I know, this is on the list of desired features, but it will probably still take some time.

Thanks. Is there a publicly available roadmap or TODO?

I would be willing to contribute to this work though I suspect it's not a simple or quick thing to implement!

Hi @errissa

I'll close this issue, as @dvicini has already answered your initial question.

There were a couple of attempts to solve this issue properly (see #35 and https://github.com/JamesZFS/drjit-core for examples). These are now outdated, as the backend has been significantly refactored since those implementations were written. We don't have an official timeline, but it is definitely a project we will be working on again very soon.