This is a reference implementation of the subgradient and prox-linear methods for robust blind deconvolution presented in [1]. It has been tested in Julia v1.0.
In the current version, the only external requirement for the implementation under `src/` is `Arpack`. The dependencies for the experiments are:
- `ArgParse`
- `CSV`
- `DataFrames`
- `Images` (for `test_images.jl`, `test_mnist.jl`)
- `ImageMagick` (for `test_images.jl`, `test_mnist.jl`)
- `MLDatasets` (only for `test_mnist.jl`)
- `ColorTypes` (required by `ImageUtils.jl`)
The files under `src/` contain the implementation of both methods, as well as auxiliary utilities for quickly setting up problem instances, controlling the noise level, and so on.
To set up a problem with `d_1 = 100`, `d_2 = 200`, and `2000` Gaussian measurements, with 25% of the measurements corrupted by additive white noise, it suffices to write:
```julia
include("src/BlindDeconv.jl");

w, x = randn(100), randn(200)  # true signals
prob = BlindDeconv.gen_problem(2000, w, x, BlindDeconv.gaussian, BlindDeconv.gaussian, 0.25);
```
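For intuition, the underlying measurement model is bilinear: stacking the left and right measurement vectors into matrices, the clean measurements are the elementwise product of the two matrix-vector products, after which a fraction of the entries is corrupted. The following sketch only illustrates this model; the matrices `L` and `R` here are hypothetical stand-ins, not the internal representation of the returned problem struct.

```julia
# Illustration of the measurement model only (hypothetical L, R; not the
# package's internal representation): each measurement is bilinear,
# y_k = <l_k, w> * <r_k, x>.
L, R = randn(2000, 100), randn(2000, 200)
y_clean = (L * w) .* (R * x)  # the 2000 exact measurements, before corruption
```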
There is a wide selection of measurement matrices. An overview is available via

```julia
help?> BlindDeconv.MatType
```
As a quick example, in order to use a matrix with orthonormal columns as the "left" measurement matrix instead of a Gaussian matrix, one can write:
```julia
prob = BlindDeconv.gen_problem(2000, w, x, BlindDeconv.ortho, BlindDeconv.gaussian, 0.25);
```
In addition, the user can set up a problem where they fully control the measurement matrices and the true signals, as the following snippet shows:
```julia
Lmat = randn(1000, 100); Rmat = randn(1000, 200)
w = randn(100); x = rand([-1.0, 0.0, 1.0], 200)
prob = BlindDeconv.gen_problem(w, x, Lmat, Rmat, 0.25)
```
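When supplying custom matrices, the shapes must be mutually consistent: both matrices contribute one row per measurement, and their column counts must match the signal dimensions. A quick sanity check along these lines (not part of the package API, which presumably validates this itself):

```julia
# Hypothetical shape check: one row per measurement in both matrices,
# columns matching the respective signal dimensions.
@assert size(Lmat, 1) == size(Rmat, 1)
@assert size(Lmat, 2) == length(w) && size(Rmat, 2) == length(x)
```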
Then, one can use either of the two methods to solve the above problem. To run the subgradient method for 500 iterations with a geometrically decaying step size, shrinking by a factor of `q = 0.98` at each iteration, write:
```julia
wf, xf, ds = BlindDeconv.subgradient_method(prob, 1.0, 0.98, 500);
println("Subgradient method error: ", ds[end])
```
If, instead, the problem instance is known to be noiseless, i.e., all measurements are exact, the Polyak step size performs much better empirically; it is activated by setting the relevant keyword argument:
```julia
wf, xf, ds = BlindDeconv.subgradient_method(prob, 1.0, 0.0, 500, polyak_step=true)
```
For the prox-linear method with quadratic penalty parameter `a = 1.0` and 15 iterations, write:
```julia
wf, xf, ds = BlindDeconv.proximal_method(prob, 1.0, 15);
println("Proximal method error: ", ds[end])
```
The `experiments/` folder contains code for running different types of empirical evaluations. Individual experiments output their results to appropriately named `.csv` files. The figures in the paper were produced in LaTeX/TikZ, so that output cannot be scripted from Julia.
To reproduce the results using the `PyPlot` visualization library in Julia, it is recommended to use the `run_experiment.jl` script. Make sure your working directory is `experiments/`, and run
```
$ julia run_experiment.jl --help
```
to see a list of available commands. Each command corresponds to a different experiment presented in the paper, with options that can be listed via
```
$ julia run_experiment.jl <cmd> --help
```
where `cmd` is one of `decay_eval`, `synthetic_test`, `matvec_eval`, `color_img_test`, `mnist_img_test`. For example, to reproduce the bottom row of Figure 10, use
```
$ cd experiments
$ julia run_experiment.jl mnist_img_test --idx_w 4096 --idx_x 8192 --iters 800
```
The choice of parameters for each experiment is documented in [1].
[1]: Vasileios Charisopoulos, Damek Davis, Mateo Díaz, and Dmitriy Drusvyatskiy. "Composite optimization for robust blind deconvolution". arXiv preprint arXiv:1901.01624.