PyLops/pyproximal

Faster TV


Don't know what's going on behind the scenes, but skimage's TV is a fair bit quicker than your implementation. It could be dropped in with something like the following:

from functools import wraps

from pyproximal.ProxOperator import _check_tau
from skimage.restoration import denoise_tv_chambolle, denoise_tv_bregman

class TV:
    """Total variation proximal operator backed by the skimage denoisers"""

    def __init__(self, sigma, dims, isotropic=True, max_iter=1000):
        self.sigma = sigma
        self.isotropic = isotropic
        self.max_iter = max_iter
        self.count = 0
        self.dims = dims

    def _increment_count(func):
        """Increment evaluation counter"""

        @wraps(func)
        def wrapped(self, *args, **kwargs):
            self.count += 1
            return func(self, *args, **kwargs)

        return wrapped

    @_increment_count
    @_check_tau
    def prox(self, x, tau):
        # reshape the flattened input back to its original dimensions
        x = x.reshape(self.dims)
        if self.isotropic:
            # isotropic TV: Chambolle's algorithm
            return denoise_tv_chambolle(
                x, weight=tau * self.sigma, max_num_iter=self.max_iter
            ).ravel()
        else:
            # anisotropic TV: split-Bregman solver
            return denoise_tv_bregman(
                x,
                weight=tau * self.sigma,
                isotropic=self.isotropic,
                max_num_iter=self.max_iter,
            ).ravel()
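
For reference, a minimal usage sketch of the wrapper above (the image size, noise level, and sigma/tau values are arbitrary choices for illustration):

import numpy as np

# toy example: denoise a piecewise-constant image with a single prox call
rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

tvop = TV(sigma=0.1, dims=(128, 128))
denoised = tvop.prox(noisy.ravel(), 1.0).reshape(128, 128)
print(tvop.count)  # number of prox evaluations so far -> 1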

Thanks for reporting this.

We use the implementation in Beck, A. and Teboulle, M., "Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems", 2009.

I am not super familiar with either this algorithm or the one in skimage. Our implementation was contributed by @olivierleblanc. Olivier, do you want to comment on this?

In general, we could bring in the code you propose and still keep the current implementation, letting the user choose.

It seems that this skimage implementation indeed solves the same problem; I didn't know about it.

@jameschapman19 can you maybe provide a short piece of code, or a plot, that compares the computation time of the two implementations for the same sets of internal parameters? The tolerances are of crucial importance, as requiring a relative error between two iterates of 10^-2 versus 10^-8 will not lead to the same number of steps in this internal optimization.
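
Something along these lines could serve as a starting point for that comparison (a rough sketch only: the pyproximal.TV constructor arguments and the iteration settings below are assumptions and should be adjusted to the actual signature, and the internal tolerances of both solvers need to be aligned for the timing to be fair):

import time
import numpy as np
import pyproximal

# piecewise-constant test image plus noise (sizes/values chosen for illustration)
rng = np.random.default_rng(0)
clean = np.zeros((256, 256))
clean[64:192, 64:192] = 1.0
noisy = (clean + 0.1 * rng.standard_normal(clean.shape)).ravel()

sigma, tau = 0.1, 1.0
niter = 100

# current FGP-based operator (constructor arguments are assumed, check the docs)
tv_fgp = pyproximal.TV(dims=(256, 256), sigma=sigma, niter=niter)

# skimage-based wrapper from the snippet in the first comment
tv_ski = TV(sigma=sigma, dims=(256, 256), max_iter=niter)

for name, op in [("pyproximal FGP", tv_fgp), ("skimage", tv_ski)]:
    t0 = time.perf_counter()
    op.prox(noisy, tau)
    print(f"{name}: {time.perf_counter() - t0:.3f} s")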

@mrava87 If you opt for adding this implementation as an option, double-check the position of the weight factor in the definition of the proximal operator.
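
For reference, the quantity both implementations should compute is

\operatorname{prox}_{\tau f}(\mathbf{x}) = \arg\min_{\mathbf{u}} \; \tfrac{1}{2}\,\|\mathbf{u}-\mathbf{x}\|_2^2 + \tau\,\sigma\,\|\mathbf{u}\|_{TV}, \qquad f(\mathbf{x}) = \sigma\,\|\mathbf{x}\|_{TV},

so whether skimage's weight should be \tau\sigma or its reciprocal depends on whether that weight multiplies the TV term or the data-fidelity term in the internal solver's objective; that is exactly what should be double-checked against the skimage documentation.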

Indeed, having a quantitative, fair comparison (with a similar number of iterations, obtained by setting the tolerances accordingly) would be nice. I briefly looked into the skimage implementation and they use numpy methods, so I wouldn't expect a major difference in speed.

@olivierleblanc for sure, if we include this option we will make sure, by adding tests, that for the same set of parameters the two proximals give the same result (as they solve the same optimization problem).
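
A possible shape for such a test (a sketch only: TVSkimage is a placeholder name for the wrapper above, the pyproximal.TV arguments are assumed, and the tolerance is arbitrary since both results come from iterative solvers):

import numpy as np
import pyproximal

def test_tv_prox_equivalence():
    # both operators should agree numerically for the same sigma and tau
    # once their internal tolerances/iteration caps are tight enough
    rng = np.random.default_rng(42)
    x = rng.standard_normal(32 * 32)
    sigma, tau = 0.5, 1.0

    tv_fgp = pyproximal.TV(dims=(32, 32), sigma=sigma, niter=1000)  # assumed signature
    tv_ski = TVSkimage(sigma=sigma, dims=(32, 32), max_iter=1000)   # placeholder name

    np.testing.assert_allclose(
        tv_fgp.prox(x, tau), tv_ski.prox(x, tau), atol=1e-3
    )  # loose tolerance: both solvers are iterative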