FGP_TV gradient
gfardell opened this issue · 1 comments
gfardell commented
The gradient and adjoint as implemented don't seem to have consistent boundary conditions, and I think that makes the function non-linear.
```c
/* gradient: forward differences, zeroed on the far edges (Neumann) */
if (i == dimX-1) val1 = 0.0f; else val1 = D[index] - D[j*dimX + (i+1)];
if (j == dimY-1) val2 = 0.0f; else val2 = D[index] - D[(j+1)*dimX + i];

/* adjoint step: only the near edges (i == 0, j == 0) are special-cased */
if (i == 0) {val1 = 0.0f;} else {val1 = R1[j*dimX + (i-1)];}
if (j == 0) {val2 = 0.0f;} else {val2 = R2[(j-1)*dimX + i];}
D[index] = A[index] - lambda*(R1[index] + R2[index] - val1 - val2);
```
For the Neumann adjoint I expected an additional boundary condition at the far edge.
In 1D this would be:

```c
if (i == dimX-1)
    D[i] = A[i] - lambda*(-R1[i-1]);
```
This is based on a conversation with Vaggelis:
finite_differences_details.pdf
I was already in the process of looking at FGP_TV, so I'll incorporate the additional boundary condition if you agree, @dkazanc?