It is difficult to train on a large dataset.
Closed this issue · 3 comments
Paper99 commented
I use 80000 samples to train the joint net. But after I finish the first CNN update, it is difficult to run the next step. This code seems to require a large amount of computation when computing the 'Affinity'.
How can I solve this problem?
jwyang commented
Hi, yes, updating the affinity requires a lot of computation. I think one way to address this problem is to compute only a partial affinity matrix, instead of the full affinity matrix.
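To make the suggestion concrete, here is a minimal sketch of a partial affinity: instead of materializing the full N x N matrix, keep only each sample's K nearest neighbors. The function name, the Gaussian kernel, and the parameters `K` and `sigma` are illustrative assumptions, not the repo's actual code.

```python
import numpy as np

def partial_affinity(features, K=20, sigma=1.0):
    """Illustrative partial affinity: Gaussian similarity to each
    sample's K nearest neighbors only (zeros elsewhere), instead of
    the full N x N affinity matrix."""
    n = features.shape[0]
    # Pairwise squared Euclidean distances (this step could itself be
    # computed in batches for very large n).
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.maximum(d2, 0.0, out=d2)  # clamp tiny negatives from round-off
    # Indices of the K nearest neighbors, skipping the sample itself.
    idx = np.argsort(d2, axis=1)[:, 1:K + 1]
    rows = np.repeat(np.arange(n), K)
    cols = idx.ravel()
    vals = np.exp(-d2[rows, cols] / (2.0 * sigma ** 2))
    W = np.zeros((n, n))
    W[rows, cols] = vals
    return np.maximum(W, W.T)  # symmetrize
```

The resulting matrix has at most 2K nonzeros per row, so any later step that iterates over affinities touches O(NK) entries rather than O(N^2).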
Paper99 commented
Thanks for your advice.
I attempted to compute a partial affinity by splitting the nearest-neighbor computation into batches, as you did in the batched kNN. But it failed: the NMI on MNIST-test is much lower than before.
So how can I correctly compute the partial affinity?
jwyang commented
Hi, I think one way to solve this is to use a fast kNN algorithm to build connections between close samples, and then compute the affinity only for those close-sample pairs.
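As one possible reading of this suggestion, a spatial index such as a KD-tree can find each sample's neighbors without ever forming the N x N distance matrix, and the affinity can be stored sparsely. This is an assumed sketch (function name and parameters are hypothetical); for high-dimensional CNN features an approximate-kNN library would likely be the practical choice, but the structure is the same.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial import cKDTree

def knn_sparse_affinity(features, K=10, sigma=1.0):
    """Illustrative sparse affinity via a KD-tree kNN search: only
    close-sample pairs are ever materialized."""
    n = features.shape[0]
    tree = cKDTree(features)
    # k=K+1 because the nearest hit of each point is the point itself.
    dist, idx = tree.query(features, k=K + 1)
    dist, idx = dist[:, 1:], idx[:, 1:]
    rows = np.repeat(np.arange(n), K)
    vals = np.exp(-dist.ravel() ** 2 / (2.0 * sigma ** 2))
    W = csr_matrix((vals, (rows, idx.ravel())), shape=(n, n))
    return W.maximum(W.T)  # symmetrize
```

Memory is O(NK) instead of O(N^2), so 80000 samples with K=10 is well under a million stored affinities rather than 6.4 billion.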