How to get gradient with drjit and optimize with Mitsuba3?
linxxcad opened this issue · 2 comments
I use Dr.Jit to implement a differentiable Monte Carlo ray tracing algorithm that optimizes the normal of an object.
I use `drjit.cuda.ad.Array3f` to represent the 3D normal.
A for loop runs the ray tracing `spp` times and averages the results.
When I optimize with `mi.ad.Adam`, the computation time is particularly long during the first iteration.
Also, when I measure the computation time with `time.time()`, the measured time is clearly smaller than the time actually spent.
How can I measure the actual running time?
```python
import time

import drjit as dr
import mitsuba as mi
from drjit.cuda.ad import Array3f

mi.set_variant('cuda_ad_rgb')  # CUDA AD variant matching the arrays below

render_pixel = 10000
spp = 20

normal = Array3f(1.0, 0.0, 0.0)
dr.enable_grad(normal)

opt = mi.ad.Adam(lr=0.05)
key = 'normal'
opt[key] = normal

iterationC = 10
for it in range(iterationC):
    start = time.time()
    # render() is defined by myself: a for loop traces one sample per
    # iteration, spp times, and returns the mean value
    image = render(normal, render_pixel, spp)
    loss = mse(image, target)
    dr.backward(loss)
    opt.step()
    normal = opt[key]
    grad = dr.grad(normal)
    dr.eval(grad)
    dr.sync_thread()  # wait for the GPU to finish before stopping the timer
    end = time.time()
    spend = end - start
```
May I ask if there is any problem with this optimization code?
The optimization works, but the first iteration (it=0) is very slow.
When I re-ran the code after the optimization finished, it was very fast; after I changed the parameters, it became very slow again.
Hi @linxxcad
Here's an explanation of how to use the `KernelHistory` feature to measure the actual runtimes of your kernels: #175 (comment)
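For reference, here is a minimal sketch of that pattern, assuming the `render` function from your snippet above (the exact fields reported per launch may differ between Dr.Jit versions):

```python
import drjit as dr

# Enable kernel-launch recording before the code you want to profile
dr.set_flag(dr.JitFlag.KernelHistory, True)

image = render(normal, render_pixel, spp)
dr.eval(image)    # force the kernel launch
dr.sync_thread()  # wait for the GPU to finish

# Each entry describes one launch, including its actual execution time
for entry in dr.kernel_history():
    print(entry)
dr.kernel_history_clear()
```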
A slower first iteration is usually expected, because the kernel needs to be compiled. Afterwards, the compiled kernel is reused from a cache.
Some parameters might be "baked" into your kernel rather than being inputs to the kernel. This means that every time they change, the kernel changes, and therefore must be re-compiled. You can use `dr.opaque`/`dr.make_opaque` on your parameter to make sure that it becomes an input to your kernel.
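For example (a minimal sketch; the variables here stand in for any scalar that would otherwise be folded into the kernel as a literal):

```python
import drjit as dr
from drjit.cuda.ad import Float

# An opaque variable becomes a kernel input instead of a baked-in
# literal, so changing its value does not trigger a recompile.
value = dr.opaque(Float, 20)

# Alternatively, make an existing variable opaque in place:
x = Float(0.5)
dr.make_opaque(x)
```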
The optimization loop looks fine. However, you mention that you use a for loop to iterate over `spp`. If possible, you should use a recorded `drjit.loop` instead; a sketch follows below. You can find an explanation of why this helps here.
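For illustration, a minimal sketch of what a recorded loop over `spp` could look like. The per-sample function `sample_radiance` is a hypothetical stand-in for one ray-tracing sample inside your `render` function:

```python
import drjit as dr
from drjit.cuda.ad import Float, UInt32, Loop

def render_mean(normal, spp, sample_radiance):
    # Recorded loop: the spp iterations become part of a single kernel
    # instead of spp separate kernel launches from a Python for loop.
    i = UInt32(0)
    accum = Float(0)
    loop = Loop("spp accumulation", lambda: (i, accum))
    while loop(i < spp):
        accum += sample_radiance(normal, i)  # hypothetical per-sample estimate
        i += 1
    return accum / spp
```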
Thank you for the quick reply! I will make improvements according to your suggestions.