image.laplacian() non-standard?
askerlee opened this issue · 1 comment
The function image.laplacian() seems to return very different results from Matlab's fspecial('log').
For example, when size=5 and sigma=0.5:
image.laplacian() produces:
0.1898 0.4022 0.4938 0.4022 0.1898
0.4022 0.7158 0.8493 0.7158 0.4022
0.4938 0.8493 1.0000 0.8493 0.4938
0.4022 0.7158 0.8493 0.7158 0.4022
0.1898 0.4022 0.4938 0.4022 0.1898
Whereas fspecial('log') produces:
-0.0091 -0.0095 -0.0115 -0.0095 -0.0091
-0.0095 -0.0646 -0.1457 -0.0646 -0.0095
-0.0115 -0.1457 1.0000 -0.1457 -0.0115
-0.0095 -0.0646 -0.1457 -0.0646 -0.0095
-0.0091 -0.0095 -0.0115 -0.0095 -0.0091
Both filters above are normalized so that the central [3,3] element equals 1.
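For reference, I produced the comparison roughly as follows (a minimal sketch; that image.laplacian takes size and sigma as its first two positional arguments is my reading of the docs):

require 'image'
local k = image.laplacian(5, 0.5)   -- size = 5, sigma = 0.5
print(k / k[{3, 3}])                -- normalize by the central element
-- Matlab side: k = fspecial('log', 5, 0.5); k / k(3, 3)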
I've tried a few other parameter values and none of the results match. Moreover, the image.laplacian() kernel contains many more positive entries.
I'm not an image-processing person, so I'm not sure whether I've misunderstood the documentation for this API. Thank you for any help.
I've found two suspicious lines in init.lua:
local xsq = math.pow((i-center_x)/(sigma_horz*width),2)/2
local ysq = math.pow((j-center_y)/(sigma_vert*height),2)/2
In the standard LoG equation (http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm), sigma_horz and sigma_vert are not multiplied by width (or height, respectively). Is this a bug?
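For reference, the standard LoG kernel on that page is

$$\mathrm{LoG}(x, y) = -\frac{1}{\pi\sigma^{4}}\left[1 - \frac{x^{2} + y^{2}}{2\sigma^{2}}\right] e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}$$

where sigma appears on its own, with no width or height factor. Note that the leading constant -1/(pi*sigma^4), sign included, cancels once the kernel is normalized by its central element, which is why the snippet below can drop it.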
After removing this scaling factor and normalizing the matrix to sum to 0, I got results identical to Matlab's. The complete code snippet is:
require 'torch'

-- parameters from the example above: a 5x5 kernel with sigma = 0.5,
-- whose center falls at (3, 3)
local height, width = 5, 5
local sigma_horz, sigma_vert = 0.5, 0.5
local center_x, center_y = (height + 1) / 2, (width + 1) / 2

local logauss = torch.Tensor(height, width)
for i = 1, height do
   for j = 1, width do
      local xsq = math.pow((i - center_x) / sigma_horz, 2) / 2
      local ysq = math.pow((j - center_y) / sigma_vert, 2) / 2
      local derivCoef = 1 - (xsq + ysq)
      logauss[i][j] = derivCoef * math.exp(-(xsq + ysq))
   end
end
-- subtract the mean so the kernel sums to zero
logauss = logauss - logauss:sum() / (height * width)
-- scale so the central element equals 1, matching the matrices above
logauss = logauss / logauss[{ math.floor(center_x), math.floor(center_y) }]
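Subtracting the mean enforces the zero-sum property a LoG kernel should have (fspecial('log') guarantees the same), and dividing by the central element reproduces the normalization used for the matrices above, so the two kernels can be compared entry by entry.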