Phase retrieval optimization algorithms
The repository contains a set of phase retrieval algorithms developed in the thesis [Chapter 4] and implemented in Python.
Installation
To install the library, execute
pip install git+https://gitlab.com/shaxov/phrt_opt
Problem formulation
Let $A\in\mathbb{C}^{m\times n}$ be a transmission matrix, $x\in\mathbb{C}^n$ an unknown signal, and $b = |Ax| \in\mathbb{R}^m$ a vector of intensity measurements. The phase retrieval problem is to recover $x$ (up to a global phase factor) from $A$ and $b$.
Typically, the number of measurements $m$ is several times larger than the signal dimension $n$; the example below uses $m = 8n$.
To generate a phase retrieval problem the following Python code can be used.
import numpy as np
n = 8
m = 8 * n
# Transmission matrix
tm = np.random.randn(m, n) + 1j * np.random.randn(m, n)
# Solution vector
x = np.random.randn(n, 1) + 1j * np.random.randn(n, 1)
# Vector of intensity measurements
b = np.abs(tm @ x)
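Since the intensity measurements are invariant under a global phase factor, a reconstruction can only match $x$ up to a unit-modulus scalar. The following helper (plain NumPy, not part of phrt_opt; the function name is ours) measures the reconstruction error modulo this ambiguity:

```python
import numpy as np

def dist_to_solution(x_hat, x):
    """Relative distance between x_hat and x, minimized over a global phase."""
    c = np.vdot(x_hat, x)  # vdot conjugates its first argument
    phase = c / np.abs(c) if np.abs(c) > 0 else 1.0
    return np.linalg.norm(x_hat * phase - x) / np.linalg.norm(x)
```

A perfect reconstruction multiplied by any unit-modulus factor gives a distance of zero.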
Implemented algorithms
The library contains four phase retrieval algorithms: gradient descent, Gauss-Newton, alternating projections, and ADMM. Each of these algorithms solves the original problem via a different equivalent reformulation.
Gradient descent
The equivalent reformulation of the original problem reads
\min_{x\in\mathbb{C}^n} f(x):=\frac{1}{2m}\||Ax|^2 - b^2\|^2.
The algorithm is a simple gradient descent, $x^{(k+1)} = x^{(k)} - t^{(k)}\nabla f(x^{(k)})$, where the gradient
\nabla f(x) = \frac{1}{m}A^*\big[(|Ax|^2-b^2)\odot Ax \big],
is calculated by means of Wirtinger calculus and $t^{(k)}$ is the step length.
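For illustration, the gradient above can be written out directly in NumPy (a minimal sketch independent of phrt_opt; the function name is ours):

```python
import numpy as np

def wirtinger_grad(tm, x, b):
    """Gradient of f(x) = (1/(2m)) || |Ax|^2 - b^2 ||^2 via Wirtinger calculus."""
    ax = tm @ x
    m = tm.shape[0]
    return tm.conj().T @ ((np.abs(ax) ** 2 - b ** 2) * ax) / m
```

At any exact solution $|Ax| = b$ the residual vanishes, so the gradient is zero.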
To use the gradient descent method the following Python code can be used.
import phrt_opt
x_hat = phrt_opt.methods.gradient_descent(tm, b)
Line-search
The step length $t^{(k)}$ can be determined by one of the following line-search procedures:
- backtracking line-search (Armijo)
x_hat = phrt_opt.methods.gradient_descent(
tm, b,
linesearch=phrt_opt.linesearch.Backtracking(),
)
- secant line-search, based on a secant equation (Barzilai and Borwein).
x_hat = phrt_opt.methods.gradient_descent(
tm, b,
linesearch=phrt_opt.linesearch.Secant(),
)
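The Armijo backtracking rule can be sketched generically (our own helper, not the phrt_opt implementation): start from a trial step and shrink it until a sufficient-decrease condition holds.

```python
import numpy as np

def backtracking(f, x, g, c=1e-4, beta=0.5, t0=1.0, max_iter=50):
    """Armijo backtracking: shrink t until f(x - t g) <= f(x) - c t ||g||^2."""
    fx = f(x)
    g2 = np.linalg.norm(g) ** 2
    t = t0
    for _ in range(max_iter):
        if f(x - t * g) <= fx - c * t * g2:
            break
        t *= beta
    return t
```

Here `f` is the objective and `g` the (Wirtinger) gradient at `x`; the returned `t` is then used in the update $x - t\,g$.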
Gauss-Newton
The equivalent reformulation of the original problem reads
\min_{x\in\mathbb{C}^n} f(x):=\frac{1}{2m}\||Ax|^2 - b^2\|^2.
Following the general scheme of the Gauss-Newton method, we define the residual function
r(x) = |Ax|^2 - b^2,
whose Jacobian matrix in terms of Wirtinger calculus is
\nabla r(x) =
\begin{pmatrix}
A\odot \bar{A}\bar{x} & \bar{A}\odot Ax
\end{pmatrix}
\in \mathbb{C}^{m\times 2n}.
Then, the descent direction $p$ is obtained by solving the normal equations
\nabla r(x)^* \nabla r(x) p = - \nabla r(x)^* r(x).
The iterate is then updated as $x^{(k+1)} = x^{(k)} + t^{(k)} p^{(k)}$.
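A minimal NumPy sketch of one such step (our own code, not the phrt_opt internals): since $r$ is real and the second Jacobian block is the conjugate of the first, the augmented least-squares solution has the form $(p, \bar{p})$, so we return its first half.

```python
import numpy as np

def gauss_newton_step(tm, x, b):
    """One Gauss-Newton direction for r(x) = |Ax|^2 - b^2."""
    ax = tm @ x
    r = np.abs(ax) ** 2 - b ** 2            # real-valued residual, shape (m, 1)
    j1 = ax.conj() * tm                     # diag(conj(Ax)) @ A
    jac = np.hstack([j1, j1.conj()])        # Wirtinger Jacobian, (m, 2n)
    q, *_ = np.linalg.lstsq(jac, -r, rcond=None)
    return q[: tm.shape[1]]                 # q = (p, conj(p)); return p
```

At an exact solution the residual is zero and the step vanishes; near a solution, one step sharply reduces the residual.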
To use the Gauss-Newton method the following Python code can be used.
import phrt_opt
x_hat = phrt_opt.methods.gauss_newton(tm, b)
Symmetric system solver
There are two implemented methods that can be used for solving the Gauss-Newton system at each iteration:
- Cholesky solver
x_hat = phrt_opt.methods.gauss_newton(
tm, b,
quadprog=phrt_opt.quadprog.Cholesky(),
)
- Conjugate gradient descent solver
x_hat = phrt_opt.methods.gauss_newton(
tm, b,
quadprog=phrt_opt.quadprog.ConjugateGradient(),
)
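For reference, a textbook conjugate gradient iteration for a Hermitian positive definite system $Mp = q$ looks as follows (a generic sketch, not the phrt_opt internals):

```python
import numpy as np

def conjugate_gradient(mat, rhs, tol=1e-10, max_iter=None):
    """Solve mat @ p = rhs for Hermitian positive definite mat."""
    p = np.zeros_like(rhs)
    r = rhs.copy()          # residual rhs - mat @ p, with p = 0
    d = r.copy()            # search direction
    rs = np.vdot(r, r).real
    max_iter = max_iter or 10 * rhs.shape[0]
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        md = mat @ d
        alpha = rs / np.vdot(d, md).real
        p += alpha * d
        r -= alpha * md
        rs_new = np.vdot(r, r).real
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p
```

In exact arithmetic CG converges in at most $n$ iterations for an $n\times n$ system, which makes it attractive when forming and factorizing the normal matrix is too expensive.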
Alternating projections
The equivalent reformulation of the original problem reads
\min_{(x,y)\in\mathbb{C}^n\times\mathbb{C}^m} \frac{1}{2} \|Ax-y\|^2 \;\; \text{such that} \;\; |y| = b.
The algorithm contains two consecutive updates:
\begin{align*}
y^{(k+1)} &= b\odot\exp\big( i\arg(Ax^{(k)}) \big),\\
x^{(k+1)} &= A^\dag y^{(k+1)},
\end{align*}
where $A^\dag$ denotes the Moore-Penrose pseudoinverse of $A$.
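The two updates can be sketched directly in NumPy (our own illustration; `x0` is an optional starting point):

```python
import numpy as np

def alternating_projections_sketch(tm, b, n_iter=200, x0=None, seed=0):
    """Alternate between the magnitude constraint |y| = b and range(A)."""
    rng = np.random.default_rng(seed)
    n = tm.shape[1]
    pinv = np.linalg.pinv(tm)  # Moore-Penrose pseudoinverse A^+
    x = x0 if x0 is not None else (
        rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1)))
    for _ in range(n_iter):
        y = b * np.exp(1j * np.angle(tm @ x))  # impose |y| = b, keep phases
        x = pinv @ y                           # least-squares fit of x to y
    return x
```

A true solution is a fixed point of this iteration: if $|Ax| = b$, then $y = Ax$ and $A^\dag y = x$.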
To use the alternating projections method the following Python code can be used.
import phrt_opt
x_hat = phrt_opt.methods.alternating_projections(tm, b)
ADMM
The equivalent reformulation of the original problem reads
\min_{(y,z,\xi)\in\mathbb{C}^m\times\mathbb{C}^m\times\mathbb{C}^m} \frac{1}{2} \|\xi\|^2 \;\;
\text{such that} \;\; y - z = \xi, \; y\in \operatorname{range}(A), \; z\in\mathcal{M}_b,
where $\mathcal{M}_b = \{z\in\mathbb{C}^m : |z| = b\}$ denotes the set of vectors whose entrywise magnitudes equal $b$.
The algorithm contains three consecutive updates:
\begin{align*}
z^{(k+1)} &= b\odot\exp\big( i\arg(y^{(k)} + (1 - \rho^{(k)})\lambda^{(k)}) \big),\\
y^{(k+1)} &= AA^\dag z^{(k+1)},\\
\lambda^{(k+1)} &= \frac{1}{1 + \rho^{(k+1)}}\big( \lambda^{(k)} + y^{(k+1)} - z^{(k+1)} \big),
\end{align*}
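The updates can be sketched in NumPy as follows (our own illustration with a constant $\rho$; the argument of the $z$-update is transcribed here as $y^{(k)} + (1-\rho^{(k)})\lambda^{(k)}$, and the final $x$ is recovered as $A^\dag y$):

```python
import numpy as np

def admm_sketch(tm, b, rho=1.0, n_iter=200, y0=None, seed=0):
    """ADMM-style iteration for the reformulation above (constant rho)."""
    rng = np.random.default_rng(seed)
    m, n = tm.shape
    pinv = np.linalg.pinv(tm)
    y = y0 if y0 is not None else tm @ (
        rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1)))
    lam = np.zeros((m, 1), dtype=complex)
    for _ in range(n_iter):
        z = b * np.exp(1j * np.angle(y + (1 - rho) * lam))  # project onto M_b
        y = tm @ (pinv @ z)                                 # project onto range(A)
        lam = (lam + y - z) / (1 + rho)                     # multiplier update
    return pinv @ y  # recover x from y
```

As with alternating projections, a true solution $y = Ax$ with $|y| = b$ (and $\lambda = 0$) is a fixed point of these updates.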
where $\rho^{(k)}$ is a penalty parameter that can be updated at each iteration by one of four strategies: constant, linear, exponential, and auto. The default strategy is set to constant(1.).
To use the ADMM method the following Python code can be used.
import phrt_opt
x_hat = phrt_opt.methods.admm(tm, b)
Strategies for parameter $\rho$
The parameter $\rho$ can be updated at each iteration according to one of the following strategies.
Constant
x_hat = phrt_opt.methods.admm(
tm, b,
strategy=phrt_opt.strategies.constant(.5),
)
Linear
x_hat = phrt_opt.methods.admm(
tm, b,
strategy=phrt_opt.strategies.linear(),
)
Exponential
x_hat = phrt_opt.methods.admm(
tm, b,
strategy=phrt_opt.strategies.exponential(),
)
Auto
x_hat = phrt_opt.methods.admm(
tm, b,
strategy=phrt_opt.strategies.auto(),
)
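Conceptually, each strategy is a schedule mapping the iteration number $k$ to a value $\rho^{(k)}$. A hypothetical sketch of such schedules (the function names, parameters, and defaults here are ours for illustration, not phrt_opt's actual API):

```python
def constant_schedule(value=1.0):
    """rho^(k) = value for every iteration k."""
    return lambda k: value

def linear_schedule(start=0.1, slope=0.05):
    """rho^(k) grows linearly with k."""
    return lambda k: start + slope * k

def exponential_schedule(start=0.1, rate=1.05):
    """rho^(k) grows geometrically with k."""
    return lambda k: start * rate ** k
```

An "auto" strategy would instead adapt $\rho^{(k)}$ from the iterates themselves rather than from a fixed formula.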