LSH Implementation with TF-IDF Dense Matrix
girishmt4 opened this issue · 5 comments
I am currently working on a document similarity project. We are processing text documents to generate TF-IDF vectors for each document in the corpus. In a nutshell, we are working with DENSE DATA: the documents are the data points, and the TF-IDF values of the terms occurring in each document are their features.
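For reference, here is roughly how we build the vectors. This is a minimal sketch assuming scikit-learn; our real pipeline does more preprocessing, and the corpus below is just a toy stand-in:

```python
# Minimal sketch of the TF-IDF step (assumes scikit-learn is installed).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "never jump over the lazy dog quickly",
    "a quick brown dog outpaces a quick fox",
]

vectorizer = TfidfVectorizer()
tfidf_sparse = vectorizer.fit_transform(corpus)  # SciPy CSR matrix, one row per document
tfidf_dense = tfidf_sparse.toarray().astype(np.float32)  # dense: rows = documents, columns = vocabulary terms
print(tfidf_dense.shape)  # (num_documents, vocabulary_size)
```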
We succeeded in implementing LSH with sparse data, but it is not very efficient.
Is it possible to use FALCONN for an LSH implementation with dense data?
Yes, FALCONN supports dense data. In fact, the support for dense data is better than for sparse data. However, if your data is very high-dimensional, the dense approach might not be efficient. What dimension are you working with?
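Here is a minimal sketch of using the Python wrapper with a dense dataset. The random data is a stand-in for your TF-IDF matrix and the default parameters are untuned, so treat this as a starting point rather than a finished setup:

```python
# Sketch: nearest-neighbor queries on a dense dataset with FALCONN's Python wrapper.
import numpy as np
import falconn

rng = np.random.RandomState(0)
dataset = rng.randn(10000, 128).astype(np.float32)  # stand-in for a dense TF-IDF matrix

# Centering the dataset usually improves the quality of the hashing.
center = dataset.mean(axis=0)
dataset -= center

params = falconn.get_default_parameters(dataset.shape[0], dataset.shape[1])
index = falconn.LSHIndex(params)
index.setup(dataset)

query_object = index.construct_query_object()
query = dataset[0]  # in practice: the (centered) TF-IDF vector of a query document
print(query_object.find_nearest_neighbor(query))
```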
I am currently working with a dataset that stores the TF-IDF values only for the terms that occur in a particular document, so every point effectively has a different dimension.
What is your take on this?
In that case, using a sparse representation might be better.
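Conceptually, a sparse point is a sorted list of (index, value) pairs, where the indices refer to one fixed global vocabulary; all points then live in the same high-dimensional space even though each stores only a few entries. A toy illustration (not FALCONN code; as far as I know the Python wrapper currently accepts only dense NumPy arrays, while the C++ interface handles sparse points):

```python
# Illustrative only: a sparse TF-IDF vector as sorted (term_index, tfidf) pairs.
# The term indices below are hypothetical positions in a fixed global vocabulary.
doc_sparse = [(3, 0.41), (17, 0.12), (9054, 0.88)]
```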
Can you explain the reason behind that? I am still wondering why a sparse representation can perform better than a dense one.
With a dense representation, the code will perform many unnecessary multiplications by zero: TF-IDF vectors are mostly zeros, but a dense inner product still touches every coordinate, while a sparse one only touches the stored nonzeros.
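To make that concrete, here is a toy comparison (not FALCONN code) of the work done by a dense versus a sparse dot product on a vector with only three nonzeros:

```python
# Toy illustration: dense vs. sparse dot product on a mostly-zero vector.
import numpy as np

dim = 100000
dense = np.zeros(dim, dtype=np.float32)
nonzero_indices = [3, 17, 9054]
dense[nonzero_indices] = [0.41, 0.12, 0.88]

query = np.random.RandomState(0).randn(dim).astype(np.float32)

# Dense: 100000 multiplications, almost all of them with zero.
dense_dot = float(np.dot(dense, query))

# Sparse: only 3 multiplications, one per stored nonzero.
sparse = [(i, float(dense[i])) for i in nonzero_indices]
sparse_dot = sum(v * query[i] for i, v in sparse)

print(dense_dot, sparse_dot)  # same value (up to rounding), vastly different work
```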