visualize the regularization path
huizhang0110 opened this issue · 2 comments
Hi, thanks for your excellent work; I am really interested in it. After reading your code, I added a function (plotpath) to visualize the regularization path:
import numpy as np
import matplotlib.pyplot as plt

class ICI:
    ...
    def plotpath(self, alpha, coefs, query_y, pseudo_y, iter_id):
        # Collapse the coefficient tensor to (n_alphas, n_instances), e.g. (100, 80).
        coefs = np.sum(np.abs(coefs.transpose(2, 1, 0)[::-1, :, :]), axis=2)
        alpha = np.log10(alpha[::-1])  # (n_alphas,)
        plt.figure(1)
        plt.clf()
        # The first 5 curves are support instances, drawn in yellow.
        for i in range(5):
            plt.plot(alpha, coefs[:, i], c="y")
        # Query instances: red if the pseudo-label matches the true label, black otherwise.
        for i in range(5, 80):
            t = i - 5
            c = "r" if query_y[t] == pseudo_y[t] else "black"
            plt.plot(alpha, coefs[:, i], c=c, linestyle="-")
        plt.savefig(f"path_{iter_id}.png")
        import ipdb; ipdb.set_trace()
    def expand(self, support_set, X_hat, y_hat, way, num_support, pseudo_y, embeddings, query_y, iter_id=None):
        alpha, coefs, _ = self.elasticnet.path(X_hat, y_hat, l1_ratio=1.0)
        self.plotpath(alpha, coefs, query_y, pseudo_y, iter_id)
        # Keep only the query rows and sweep the path from large to small alpha.
        coefs = np.sum(np.abs(coefs.transpose(2, 1, 0)[::-1, num_support:, :]), axis=2)
        selected = np.zeros(way)
        for gamma in coefs:
            for i, g in enumerate(gamma):
                if g == 0.0 and (i + num_support not in support_set) and (selected[pseudo_y[i]] < self.step):
                    support_set.append(i + num_support)
                    selected[pseudo_y[i]] += 1
            if np.sum(selected >= self.step) == way:
                break
        return support_set
Thanks for raising this problem!
The visualization in the paper was done with the glmnet package, which is also what we used to solve the problem introduced in our work.
Later we found that sklearn also implements an algorithm for this problem; it is easier to install and use, and achieves better performance in many cases. Hence we released the code based on the sklearn implementation.
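For reference, sklearn exposes this solver as lasso_path (which is what ElasticNet.path reduces to when l1_ratio=1.0). A minimal sketch on toy single-output data, not the actual ICI features:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

# Toy data standing in for the (X_hat, y_hat) used in ICI.expand().
X, y = make_regression(n_samples=40, n_features=20, random_state=0)

# Returns the full regularization path in one call.
alphas, coefs, _ = lasso_path(X, y)

print(alphas.shape)  # (100,) by default; alphas are in decreasing order
print(coefs.shape)   # (20, 100): one coefficient curve per feature
```

Note that with a multi-output y_hat (as in ICI), coefs gains a leading targets axis, which is why the repository code transposes and sums over it before plotting.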
Although the two implementations aim to solve the same problem (and may even use the same algorithm?), they somehow differ in the details, resulting in the different solution paths you visualized. Because the source code is written in Cython (sklearn) and Fortran (glmnet), respectively, it is a little hard to locate the difference. We would be very grateful if you could help us find it.
Please note that although the visualizations differ somewhat, the basic idea and claims still hold. For example, many correctly-predicted instances still vanish before the wrongly-predicted ones.
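That vanishing order can also be checked numerically rather than visually. A minimal sketch (on synthetic data, not the ICI features; vanish_alpha and order are names chosen here for illustration) that records, for each coefficient, the largest alpha at which it is still nonzero — coefficients with a small value vanish early on the path:

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(30)

# alphas are returned in decreasing order (strong to weak regularization).
alphas, coefs, _ = lasso_path(X, y)

# For each feature, the largest alpha at which its coefficient is nonzero;
# a small value means the coefficient vanishes early as alpha grows.
vanish_alpha = np.array([
    alphas[np.nonzero(coefs[i])[0]].max() if np.any(coefs[i]) else 0.0
    for i in range(coefs.shape[0])
])
order = np.argsort(-vanish_alpha)  # features ordered from late-vanishing to early-vanishing
print(order)
```

Sorting query instances by such a score, instead of eyeballing the curves, would give the same ranking the expand() loop implicitly uses when it scans the path for zero coefficients.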
glmnet is implemented by very famous statisticians, and they may have a better strategy for visualizing the regularization path. The experimental conclusions and results in the paper still hold, and the algorithm itself is unchanged.