pmneila/PyMaxflow

Multiprocessing graph cut


Hi Pmneila
Sorry for yet another question; I hope this will be my last one.
Now I'm running my code on the Amazon EC2 cloud service because I need more computational power than my laptop can offer (I'm working with graphs with millions of nodes), but I see that only one CPU (out of 4) is used for the computation, which is actually a bottleneck.
Do you know a way to make graph cut use more than one processor, or is it intrinsically single-processor?
Thank you, as usual

Hi.

Do not worry about asking me questions. :)

It is very hard to parallelize graph-cut algorithms. There are many dependencies between variables, and parallelization would require so many synchronization points that its advantages would be lost.

However, there are approaches that perform graph cuts on several overlapping sub-regions of the original graph and then merge the results. The partition of the graph depends on its structure, and therefore this parallelization procedure cannot be implemented in PyMaxflow, which is structure-agnostic. Fortunately, since you know the structure of your graph, it should be possible for you to split it into several subgraphs and perform the maxflow computations in parallel (e.g., using multiprocessing).
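
To give a rough idea of the parallel part, here is a minimal sketch assuming a 2D grid graph with simple unary and pairwise terms (similar to the binary restoration example in the PyMaxflow documentation) and a naive split into non-overlapping horizontal strips; the solve_strip helper, the image size, and the pool size are all illustrative. It does not implement the merging step, so labels near the strip boundaries are not guaranteed to match a global maxflow.

    import multiprocessing as mp

    import maxflow
    import numpy as np

    def solve_strip(strip):
        # Independent maxflow on one sub-region of the image.
        g = maxflow.GraphInt()
        nodeids = g.add_grid_nodes(strip.shape)
        g.add_grid_tedges(nodeids, strip, 1 - strip)   # unary terms
        g.add_grid_edges(nodeids, 1)                   # constant pairwise term
        g.maxflow()
        return g.get_grid_segments(nodeids)

    if __name__ == '__main__':
        img = np.random.randint(0, 2, (4000, 4000))    # placeholder binary image
        strips = np.array_split(img, 4, axis=0)        # naive, non-overlapping split
        with mp.Pool(processes=4) as pool:             # one strip per CPU
            segments = pool.map(solve_strip, strips)
        result = np.concatenate(segments, axis=0)      # merging step not handled here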

How to merge the results is explained in this work, section 3.2, but maybe it is not very clear. I might write an example script demonstrating the method with PyMaxflow, but I am currently very busy and won't be able to do it until late November or December.

A much easier solution would be to process several independent images at the same time, provided that you have many images to process…
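
As a sketch of that easier route, each worker runs one full graph cut on its own image, so no merging between workers is needed. The restore helper and the in-memory placeholder images below are only illustrative; adapt the loading to however your images are stored.

    import multiprocessing as mp

    import maxflow
    import numpy as np

    def restore(img):
        # Full graph cut on a single image, as in the restoration example.
        g = maxflow.GraphInt()
        nodeids = g.add_grid_nodes(img.shape)
        g.add_grid_tedges(nodeids, img, 1 - img)
        g.add_grid_edges(nodeids, 1, structure=np.ones((3, 3)))
        g.maxflow()
        return g.get_grid_segments(nodeids)

    if __name__ == '__main__':
        # Placeholder batch of binary images; in practice, load your own data.
        images = [np.random.randint(0, 2, (1000, 1000)) for _ in range(8)]
        with mp.Pool(processes=4) as pool:     # one image per worker
            results = pool.map(restore, images)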

Thank you for your useful information, we'll look at the paper you linked and try to implement the solution proposed with our graph :)

Hi.

Over the last month I improved my algorithm, but the parallel maxflow computation seems quite hard to achieve. When you have some time, could you please write an example of how to split the graph for my structure, or of how to merge the results with PyMaxflow?

Hi.

Sorry for the wait. I'll try to write that example in a few days (probably next weekend). Can you wait until then, or is it urgent? (I'm too busy right now.)

Hi.

No problem, thank you for your time. If you are able to write it earlier, that will surely be better for us, but we can wait until next weekend without any problem.

Hi @pmneila,

I'm trying to avoid opening another issue, since the problem I'm encountering is a bit related to the one reported here.
I'm working with large binary images, on which I'm just applying the image restoration example as follows:

    import maxflow
    import numpy as np

    # arr is the input binary image (values 0 or 1)

    # penalty and regularization energy terms
    P = (arr, 1 - arr)
    K = 1

    # create the graph: one node per pixel, terminal edges from the unary
    # terms, and grid (pairwise) edges with constant capacity K
    g = maxflow.GraphInt()
    nodeids = g.add_grid_nodes(arr.shape)
    g.add_grid_tedges(nodeids, *P)
    g.add_grid_edges(nodeids, K, structure=np.ones((3, 3)))

    """
    # run maxflow and read back the segmentation
    g.maxflow()
    segments = g.get_grid_segments(nodeids)

    return segments
    """

For an image of 15 million pixels the program above works (without the comments around maxflow), but on an image of 36 million pixels the whole computer (RAM = 24 GB) freezes inside g.add_grid_edges() during the construction of the graph.

Is there a workaround that avoids splitting the image in half and performing the graph cut on each half separately?

I understand from your comments above that the algorithm cannot be made to operate in parallel, and that in the paper you shared they perform graph cuts on sub-graphs and then merge the results. However, I don't think that would solve my problem, as the graph fails to be constructed in the first place.

Hi, @h4k1m0u,

I tried that code with a 6000x6000 image (36 million pixels) on a Mac laptop with 16 GB of RAM and it worked fine. The memory usage went slightly over 16 GB (with some memory swapping on the laptop) but it was below 24 GB.

I also tested the same code on a Linux server and the memory usage was always below 20 GB (specifically, 19.7 GB).

Questions:

  • What is your OS? Windows?
  • What's your architecture? 64-bit?
  • What do you mean by "the computer is frozen"? Is the memory usage so large that the OS stops responding?
  • Have you checked the memory usage? Does it keep growing beyond 20 GB?

Is there a workaround that avoids splitting the image in half and performing the graph cut on each half separately?

Not really. The only way is splitting the image. The dual decomposition method in the paper above allows you to split the image into small pieces, perform several graph cuts on each, and then merge the results. If I recall correctly, you don't need to build the whole graph with that method. That would solve your memory issues.

What is your OS? Windows?

Windows 10

What's your architecture? 64-bit?

64-bit

What do you mean by "the computer is frozen"? Is the memory usage so large that the OS stops responding?

It becomes unresponsive; even the clock in the lower-right corner doesn't update. I had to shut the computer down directly with the power button.

Have you checked the memory usage? Does it keep growing beyond 20 GB?

I haven't checked that, actually. I assumed the machine froze because of the large size of the graph (especially the number of non-terminal edges), which didn't happen when dealing with an image of half that size, for example.

I ended up splitting the image into chunks of about 20 million pixels each and performing the graph cut on each chunk separately. At the end, I stitched them back together.
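
For reference, here is a minimal sketch of that split-and-stitch approach, with an illustrative cut_strip helper, the same energy terms as above, and a placeholder image in place of the real data. Processing the strips sequentially keeps only one graph in memory at a time, which avoids the memory blow-up, at the cost of possible label disagreements along the strip boundaries.

    import maxflow
    import numpy as np

    def cut_strip(strip):
        # One graph per strip; the graph is freed when the function returns,
        # so only one strip's graph is resident in memory at any time.
        g = maxflow.GraphInt()
        nodeids = g.add_grid_nodes(strip.shape)
        g.add_grid_tedges(nodeids, strip, 1 - strip)
        g.add_grid_edges(nodeids, 1, structure=np.ones((3, 3)))
        g.maxflow()
        return g.get_grid_segments(nodeids)

    arr = np.random.randint(0, 2, (6000, 6000))     # placeholder binary image
    n_strips = int(np.ceil(arr.size / 20e6))        # roughly 20M pixels per strip
    strips = np.array_split(arr, n_strips, axis=0)
    segments = np.concatenate([cut_strip(s) for s in strips], axis=0)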