GUDHI/gudhi-devel

Are there any prospects to make rips_layer and perslay use GPU and not just CPU? thanks.

fbarfi opened this issue · 12 comments

fbarfi commented
Are there any prospects to make rips_layer and perslay use GPU and not just CPU? thanks.
fbarfi commented

I just became aware that Giotto has created giotto-deep (TDA for deep learning), which uses PyTorch instead of TensorFlow. Giotto-deep works seamlessly with the GPU, hence the question above.

mglisse commented

Hello,
as far as I know, there is no code that computes persistent homology purely on the GPU; every implementation uses the CPU for a key part of the computation, so rips_layer could not be pure GPU.
PersLay, on the other hand, is a variant of deep sets, so it should be able to work on the GPU. For instance, if you have pre-computed diagrams, the whole PersLay should be able to run on the GPU. If that's not the case (I am not super familiar with TensorFlow and that part of GUDHI), we should do something about it.
Could you give more details about the kind of seamless integration that you are missing in gudhi?
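
For concreteness, here is a minimal sketch of feeding pre-computed diagrams to PersLay and pinning the layer to the GPU. It follows the `gudhi.tensorflow.perslay` example (`GaussianPerslayPhi`, `PowerPerslayWeight`, `Perslay`); the exact constructor arguments and the `perm_op` value are assumptions and may need adjusting for your GUDHI version.

```python
import numpy as np
import tensorflow as tf
import gudhi.tensorflow.perslay as prsl

# Two pre-computed persistence diagrams (e.g. loaded from disk), stored as a
# ragged batch since diagrams have different numbers of points.
dgms = [np.array([[0., 4.], [1., 2.], [3., 8.]], dtype=np.float32),
        np.array([[0., 2.], [2., 5.]], dtype=np.float32)]
dgms = tf.RaggedTensor.from_row_lengths(
    values=tf.constant(np.vstack(dgms)),
    row_lengths=[len(d) for d in dgms])

# PersLay = per-point function phi + weight + permutation-invariant op + rho.
phi = prsl.GaussianPerslayPhi((20, 20), ((0., 10.), (0., 10.)), 0.5)
weight = prsl.PowerPerslayWeight(1., 0.)
perslay = prsl.Perslay(phi=phi, weight=weight, perm_op='sum', rho=tf.identity)

# Everything from here on is ordinary TensorFlow, so it can be pinned to the
# GPU explicitly (this assumes a GPU device is visible to TensorFlow).
with tf.device('/GPU:0'):
    vectors = perslay(dgms)
print(vectors.shape)
```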

I am going to transfer this issue to gudhi-devel (the appropriate repository; gudhi-deploy is just internal CI stuff). I am posting this so you are not surprised and can find it again.

fbarfi commented

Thank you for your prompt response. What I am trying to do is optimize the persistence diagrams and persistence images at the same time (in the same loop that computes the gradients). It works fine for small datasets and gives amazing improvements in the persistence images (compared to computing the persistence images after optimizing the persistence diagrams and storing them in files). However, for larger datasets it takes a very long time and is no longer practical. I have an Apple Silicon M2 and I always try to use the GPU when combining TDA and machine learning. So far GUDHI has allowed me to achieve amazing results.
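
For reference, a minimal sketch of this kind of joint loop, assuming `gudhi.tensorflow.RipsLayer` behaves as in its documented example (returning, per homology dimension, a pair of finite and essential parts). The `persistence_image` function below is hand-rolled in plain TensorFlow purely for illustration and is not a GUDHI API; note that the diagram computation inside `RipsLayer` still runs on the CPU, and only the tensor operations around it can use the GPU.

```python
import tensorflow as tf
from gudhi.tensorflow import RipsLayer

# Trainable point cloud (toy data).
X = tf.Variable(tf.random.uniform((30, 2)), trainable=True)
rips = RipsLayer(maximum_edge_length=2., homology_dimensions=[0])

def persistence_image(dgm, resolution=(20, 20), bnds=((0., 1.), (0., 1.)), sigma=0.05):
    """Differentiable Gaussian persistence image of one diagram (illustration only)."""
    bp = tf.stack([dgm[:, 0], dgm[:, 1] - dgm[:, 0]], axis=1)   # (birth, persistence)
    xs = tf.linspace(bnds[0][0], bnds[0][1], resolution[0])
    ys = tf.linspace(bnds[1][0], bnds[1][1], resolution[1])
    gx, gy = tf.meshgrid(xs, ys)
    grid = tf.stack([tf.reshape(gx, [-1]), tf.reshape(gy, [-1])], axis=1)
    # Squared distance from each diagram point to each pixel, weighted by persistence.
    d2 = tf.reduce_sum(tf.square(bp[:, None, :] - grid[None, :, :]), axis=-1)
    img = tf.reduce_sum(bp[:, 1:2] * tf.exp(-d2 / (2. * sigma ** 2)), axis=0)
    return tf.reshape(img, resolution)

opt = tf.keras.optimizers.SGD(learning_rate=0.05)
for _ in range(100):
    with tf.GradientTape() as tape:
        dgm = rips(X)[0][0]                 # finite part of the dim-0 diagram
        pi = persistence_image(dgm)
        loss = -tf.reduce_sum(pi)           # toy objective: increase total image mass
    grads = tape.gradient(loss, [X])
    opt.apply_gradients(zip(grads, [X]))
```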

fbarfi commented

I just had a look at the Python interface for the Rips complex and saw that the code is written in Cython, so it is not obvious or straightforward, to me at least, how it could be translated into TensorFlow operations to make it use the GPU. It might require a major rewrite of the code. Thanks.

MathieuCarriere commented

As far as I understand, the layers before PersLay are the ones written in Cython; they involve computing persistence diagrams, for which there are no clear GPU improvements, as @mglisse said. PersLay, on the other hand, can benefit from GPUs, and this should work straightforwardly if you ask TensorFlow to use the GPU explicitly.
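
For example, something along these lines (a minimal sketch; on Apple Silicon the GPU is exposed to TensorFlow through the tensorflow-metal plugin installed alongside tensorflow-macos):

```python
import tensorflow as tf

# Check that TensorFlow actually sees a GPU.
gpus = tf.config.list_physical_devices('GPU')
print(gpus)

# Any downstream TensorFlow computation (e.g. a PersLay call) can be pinned
# to the GPU explicitly; a plain matmul stands in for it here.
if gpus:
    with tf.device('/GPU:0'):
        y = tf.matmul(tf.random.uniform((1024, 1024)), tf.random.uniform((1024, 1024)))
    print(y.device)
```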

fbarfi commented

Thanks @MathieuCarriere for your response. I already understood this much. My question, which perhaps has more to do with long-term plans, is about the possibility of adapting the computation of persistence diagrams to the GPU if GUDHI is to be used for large-data studies in deep learning. Otherwise it might remain an obstacle for those of us who already enjoy this wonderful package but also want to apply it more extensively to deep learning with big data. I have seamlessly incorporated PersLay into a Keras pipeline to classify the persistence images using various Keras models for image analysis, and it of course automatically uses the GPU quite extensively. This is possible only after saving the PIs and diagrams, which is unfortunately a very slow and time-consuming process for somewhat big data. Thank you!

MathieuCarriere commented

@fbarfi I see your point, which makes sense. It would definitely be nice to identify ways to improve PD computations with GPUs. One last question: PersLay is supposed to take PDs as inputs (not PIs), so why do you use PersLay for classifying PIs? Did I misunderstand something?

fbarfi commented

@MathieuCarriere I apologize for the confusion. You are right: I do not use PersLay to classify PIs. I take the output, the PIs (saved on disk), and then use those images to classify the original data I am working with. Basically, I use the PI representations from TDA to classify my original data (dealing with Global Trade).
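
For reference, a minimal sketch of that last step; the file names and image shape below are placeholders rather than GUDHI conventions, and Keras will use the GPU automatically here if one is visible.

```python
import numpy as np
import tensorflow as tf

# Hypothetical files: persistence images of shape (n_samples, 20, 20) and the
# integer class labels of the original data; both names are placeholders.
pis = np.load("persistence_images.npy").astype("float32")[..., None]
labels = np.load("labels.npy")

# Small CNN that classifies the original samples from their PI representations.
model = tf.keras.Sequential([
    tf.keras.Input(shape=pis.shape[1:]),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(np.unique(labels)), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(pis, labels, epochs=10, batch_size=32)  # runs on the GPU if one is visible
```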

I see, thanks for the clarification!

fbarfi commented

@MathieuCarriere By the way: I am a fan of your work. I rely extensively on your insights and accomplishments!!!

Always a pleasure to hear this. Thanks a lot! :-)