YingfanWang/PaCMAP

Large-scale PaCMAP

Closed this issue · 5 comments

Hey there,

do you have rough estimates of how much compute / RAM is necessary to scale PaCMAP to 10M, 100M, 1B and 10B rows of 768-dimensional embeddings each? Or do you provide a multi-node solution?

Best,

Robert

Hi Robert! I haven't tested PaCMAP on such large-scale cases. The largest case I tested is around 1.8M rows, which required ~42 min to finish on a 48-core Intel Xeon Gold 6226 processor, and PaCMAP's running time should scale linearly with the number of rows. Regarding memory, this case finished successfully using less than 64 GB of RAM, but I don't have the exact RAM usage number.

Regarding multi-node support, at this moment we don't have a plan for it.
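A back-of-the-envelope extrapolation from those figures (not a measurement; it assumes the linear scaling holds, ignores the PCA preprocessing discussed further down, and says nothing about whether the data fits in RAM):

```python
# Rough runtime extrapolation from the ~42 min / ~1.8M-row figure above,
# assuming strictly linear scaling in the number of rows.
baseline_rows = 1.8e6
baseline_minutes = 42.0

for rows in (10e6, 100e6, 1e9, 10e9):
    minutes = baseline_minutes * rows / baseline_rows
    print(f"{rows:>14,.0f} rows -> ~{minutes / 60:,.1f} hours")
```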

numpy.core._exceptions.MemoryError: Unable to allocate 286. GiB for an array with shape (100000000, 768) and data type float32
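That number is simply the size of a dense float32 matrix of that shape; a quick sanity check (plain arithmetic, independent of PaCMAP itself):

```python
# A dense float32 array of shape (100_000_000, 768) needs
# rows * dims * 4 bytes, before any of PaCMAP's own working memory.
rows, dims, bytes_per_float32 = 100_000_000, 768, 4
print(f"{rows * dims * bytes_per_float32 / 2**30:.0f} GiB")  # -> 286 GiB
```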

@hyhuang00 would I be able to iterate over smaller subsets (.fit(batch1), .fit(batch2), etc.) without completely losing the information from the earlier batches? Or does .fit start from zero whenever an additional fit is done? What would be the best way to "fit" more than one batch and then run the transform for every batch afterwards?

At this moment, fit() forgets the previous data and starts completely from zero. One possibility is to fit() on a small batch and then transform() the remaining parts. This handles many situations if the initial batch is large and representative enough, but it will fail if the initial batch does not capture some of the structure.
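A minimal sketch of that fit-on-a-sample / transform-the-rest workflow, assuming a pacmap version that provides transform() and using placeholder file names, sample sizes, and parameters:

```python
import numpy as np
import pacmap

# Hypothetical data source: a memory-mapped (rows, 768) float32 array
# that is too large to embed in a single call.
X = np.load("embeddings.npy", mmap_mode="r")  # placeholder path

# 1) Fit on a small but hopefully representative random sample.
rng = np.random.default_rng(0)
sample_idx = np.sort(rng.choice(X.shape[0], size=1_000_000, replace=False))
sample = np.asarray(X[sample_idx], dtype=np.float32)

reducer = pacmap.PaCMAP(n_components=2)
sample_embedding = reducer.fit_transform(sample)

# 2) Project the remaining rows batch by batch with transform().
batch_size = 1_000_000
parts = []
for start in range(0, X.shape[0], batch_size):
    batch = np.asarray(X[start:start + batch_size], dtype=np.float32)
    parts.append(reducer.transform(batch))
embedding = np.concatenate(parts, axis=0)
```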

I just posted a question about speeding up processing of a large dataset. I am also trying to apply pacmap to embeddings with dimension 768.

Does your timing correspond to mine? About 35 min / 1M rows?

I tested the 1M case on a dataset with 100 dimensions. For datasets with more than 100 dimensions, we apply PCA to reduce them to 100 dimensions first, which adds an extra overhead up front. 35 min / 1M rows would be a good estimate for everything that happens afterwards (pair construction, embedding optimization, etc.).
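As an illustration of that preprocessing step (an equivalent sketch with scikit-learn, not PaCMAP's internal code), the 768-dimensional input can be reduced to 100 dimensions up front, so the ~35 min / 1M rows then covers only the pair construction and optimization:

```python
import numpy as np
from sklearn.decomposition import PCA
import pacmap

# Placeholder input: 100k rows of 768-dimensional embeddings.
X = np.random.rand(100_000, 768).astype(np.float32)

# Reduce to 100 dimensions first, mirroring the internal PCA step
# PaCMAP applies to inputs wider than 100 dimensions.
X_reduced = PCA(n_components=100, random_state=0).fit_transform(X)

# apply_pca=False skips the internal PCA since the data is already reduced
# (parameter name assumed from recent pacmap releases).
reducer = pacmap.PaCMAP(n_components=2, apply_pca=False)
embedding = reducer.fit_transform(X_reduced)
```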