Positive and negative weights
Dear MONET team,
I am wondering how MONET handles negative weights in undirected networks.
I am running the M1 method with different values of the --avgk parameter. I see in the console that the algorithm establishes a threshold to remove low weights when I use --avgk, in order to reach the desired mean degree. In this case, strongly negative weights are removed first, so negative and positive weights are not equally important to the algorithm, right? How does the algorithm treat negative weights during clustering?
I would be very grateful if you could help me better understand how it works.
Looking forward to hearing from you.
Best,
Jose
Hi Jose,
In the original DREAM challenge all the weights were positive, so it was taken for granted that no negative weights would be present. Looking at the code, the current procedure consists of sorting the weights and preserving the largest ones, up to the number that yields the desired average degree. Since the absolute value of the weights is not considered, in a mixture of positive and negative weights the negative ones are pruned first, in order of decreasing absolute value. This is certainly not what we want: a strongly negative weight should be as important as a strongly positive one of similar absolute value.
One possible solution would be to perform the pruning yourself before running M1. The other is to download the source code of MONET, modify the "preprocessing" function of M1 in file .src/M1_code/sub-challenge1/aleph_urv_method.py, and reinstall MONET (see "Implementing local changes to MONET" in the main README.md).
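For either option, a minimal sketch of what absolute-value-aware pruning could look like (illustrative code only, not MONET's actual preprocessing function; the names are made up):

```python
def prune_by_abs_weight(edges, n_nodes, target_avg_degree):
    """Keep the edges with the largest absolute weight until the
    average degree reaches the target.

    `edges` is a list of (node_i, node_j, weight) tuples for an
    undirected network. Illustrative sketch, not MONET code.
    """
    # Rank by |weight| so that strongly negative edges count as much
    # as strongly positive ones of similar magnitude.
    ranked = sorted(edges, key=lambda e: abs(e[2]), reverse=True)
    # In an undirected network, avg degree = 2 * n_edges / n_nodes,
    # so keeping n_edges = target_avg_degree * n_nodes / 2 edges
    # yields (approximately) the desired average degree.
    n_keep = int(target_avg_degree * n_nodes / 2)
    return ranked[:n_keep]
```

The only change relative to the current behavior described above is the abs() in the sort key.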
The good news is that the clustering method is prepared to handle both positive and negative links. The semantics is: positive weights tend to join nodes into the same module (i.e., they are attractive), while negative weights are repulsive with respect to modules. The reference paper is S. Gómez, P. Jensen, A. Arenas, "Analysis of community structure in networks of correlated data", Physical Review E 80 (2009) 016114, DOI: 10.1103/PhysRevE.80.016114.
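For reference (writing this from memory, so please check the paper for the exact notation), the signed modularity used there splits the null-model term into positive and negative parts:

```latex
Q = \frac{1}{2w^{+} + 2w^{-}} \sum_{i,j}
      \left[ w_{ij}
             - \left( \frac{w_i^{+}\, w_j^{+}}{2w^{+}}
                      - \frac{w_i^{-}\, w_j^{-}}{2w^{-}} \right)
      \right] \delta(C_i, C_j)
```

where w_ij is the signed weight of the edge between nodes i and j, w_i^+ and w_i^- are the positive and negative strengths of node i, 2w^+ and 2w^- are the total positive and negative strengths, and δ(C_i, C_j) = 1 when i and j are in the same module (0 otherwise).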
Anyway, even if the clustering method is prepared for negative weights, I'm not sure everything would run smoothly, since there is a process of "adjusting resolution" on top of it which could also run into problems.
Hope this helps.
Sergio.
Hi Sergio,
Thank you very much for the explanation; it is much clearer now. Is this the same for the R1 and K1 methods?
Best,
Jose
@JoseAML I can only speak for K1, since I was involved in developing the algorithm: our random walk implementation assumes that edges are confidence-weighted (all weights >= 0, and a high edge weight means high "confidence" that the edge represents a true relationship between nodes; see this paper for the original use case). If I remember correctly, this was true for all of the DREAM challenge networks, so we didn't spend much time thinking about how it might generalize to signed networks.
A simple way to make this work out of the box with negative edge weights would be to preprocess the networks by taking the absolute value of the edge weights. This discards the positive/negative information, which could be important, but it is the fastest way to get things working. Otherwise, I'm sure random walk implementations exist that handle signed edges, but using one would require redesigning and re-evaluating the algorithm, since the original K1 method was only ever evaluated on networks with positive edge weights.
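For example, assuming the usual tab-separated edge-list format (node1, node2, weight) — do check this against your actual input files — the preprocessing could be as simple as this sketch:

```python
import csv

def absolutize_weights(in_path, out_path):
    """Rewrite an edge list, replacing each weight by its absolute value.

    Assumes tab-separated lines of the form: node1 <TAB> node2 <TAB> weight.
    Illustrative sketch only; adapt to your file format if it differs.
    """
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        reader = csv.reader(fin, delimiter="\t")
        writer = csv.writer(fout, delimiter="\t")
        for node1, node2, weight in reader:
            writer.writerow([node1, node2, abs(float(weight))])
```

Whether you do this in a standalone script or inside MONET's preprocessing is up to you; the point is just that every weight the random walk sees must be non-negative.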
I think @WeijiaZhang24 may know the answer for R1.
Thank you very much for your answer! So, in any case, we should always use weights >= 0.
I can confirm that for R1 the weights should also be positive.