Adjacent nodes jump from positive to negative function values
mochar opened this issue · 6 comments
Hi, thanks for the great library. When drawing random function values from `GraphDiffusionKernel`,
I observe a weird pattern where adjacent nodes jump from negative to positive values. You can see what I mean in this image:
```python
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import tensorflow as tf
import pymc3 as pm

import graph_matern

# eigenvectors, eigenvalues: Laplacian eigendecomposition of the graph G
# N: number of nodes; pos: node layout for plotting (defined earlier)
kernel = graph_matern.kernels.graph_diffusion_kernel.GraphDiffusionKernel(
    (eigenvectors, eigenvalues), kappa=4, sigma_f=1.0, dtype=tf.float64)
K = kernel.K(np.arange(N).reshape(-1, 1)).numpy()
prior = pm.MvNormal.dist(mu=0, cov=K, shape=(N,))  # pm = PyMC3

fig, axs = plt.subplots(figsize=(13, 3), ncols=3, sharex=True, sharey=True)
for i in range(3):
    draw = prior.random()
    # draw = np.abs(draw)
    vmin, vmax = draw.min(), draw.max()
    cmap = plt.cm.coolwarm
    nx.draw(G, pos=pos, ax=axs[i], node_color=draw.tolist(),
            width=1, node_size=20, cmap=cmap, vmin=vmin, vmax=vmax)
    sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=vmin, vmax=vmax))
    sm.set_array([])
    cbar = plt.colorbar(sm, ax=axs[i])
```
I have no issues with the Matern kernel:
I can make a gist file with all the code if necessary.
Thanks!
Thank you @mochar ! This is indeed an unintended behavior. Please check out the recent fix.
Works great, thanks for the quick fix. Do you mind explaining how you went from dividing by 4 to multiplying by -0.5?
The correct formula is the one with the multiplication by -0.5; see equation (12) in the paper. The previous version apparently crept in through confusion with equation (11). Thank you again for noticing this!
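For anyone following along, the fix amounts to changing the spectral filter applied to the Laplacian eigenvalues. A minimal sketch of the corrected diffusion kernel, assuming `eigvecs`/`eigvals` come from an eigendecomposition of the graph Laplacian (this helper is illustrative, not the library's actual implementation):

```python
import numpy as np

def diffusion_kernel(eigvecs, eigvals, kappa=4.0, sigma_f=1.0):
    """Illustrative graph diffusion (heat) kernel from a Laplacian eigendecomposition.

    The corrected spectral filter is exp(-0.5 * kappa**2 * eigvals),
    i.e. multiplication by -0.5 in the exponent, matching equation (12).
    """
    filt = np.exp(-0.5 * kappa**2 * eigvals)
    # K = sigma_f^2 * V diag(filt) V^T; broadcasting scales each eigenvector column
    return sigma_f**2 * (eigvecs * filt) @ eigvecs.T
```

With this filter the kernel matrix is symmetric positive semi-definite, and at `kappa=0` it reduces to `sigma_f**2 * I`, as expected for a heat kernel.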
Yes I see. Thanks!
@mochar Would you mind sending me an e-mail about this? We've discovered that this bug affects one of our experiments, and would thus like to add a thank you note in the acknowledgments section of our paper, which is to be updated shortly.
Just did!