XanaduAI/MrMustard

TensorFlow dependency needs to be pinned to <2.16.0

Closed this issue · 3 comments

TensorFlow >=2.16.0 was released a month ago. Installing MrMustard with `pip install mrmustard` will install TensorFlow 2.16.1 by default. This is problematic because TensorFlow 2.16 switched to Keras 3 by default and dropped the legacy Keras 2 optimizer APIs -- see the release notes. Therefore, the following call

```python
return tf.keras.optimizers.legacy.Adam(learning_rate=0.001)
```

will trigger an error. As a minimal working example (MWE), the error can easily be triggered with the optimization example given in the README:

MrMustard/README.md

Lines 252 to 285 in d31b70f

### Optimization
The `mrmustard.training.Optimizer` uses Adam under the hood for the optimization of Euclidean parameters, a custom symplectic optimizer for Gaussian gates and states, and a unitary/orthogonal optimizer for interferometers.
We can turn any simulation in Mr Mustard into an optimization by marking which parameters we wish to be trainable. Let's take a simple example: synthesizing a displaced squeezed state.
```python
from mrmustard import math
from mrmustard.lab import Dgate, Ggate, Attenuator, Vacuum, Coherent, DisplacedSqueezed
from mrmustard.physics import fidelity
from mrmustard.training import Optimizer

math.change_backend("tensorflow")

D = Dgate(x=0.1, y=-0.5, x_trainable=True, y_trainable=True)
L = Attenuator(transmissivity=0.5)

# we write a function that takes no arguments and returns the cost
def cost_fn_eucl():
    state_out = Vacuum(1) >> D >> L
    return 1 - fidelity(state_out, Coherent(0.1, 0.5))

G = Ggate(num_modes=1, symplectic_trainable=True)

def cost_fn_sympl():
    state_out = Vacuum(1) >> G >> D >> L
    return 1 - fidelity(state_out, DisplacedSqueezed(r=0.3, phi=1.1, x=0.4, y=-0.2))

# For illustration, here the Euclidean optimization doesn't include squeezing
opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_eucl, by_optimizing=[D])  # using Adam for D

# But the symplectic optimization always does
opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_sympl, by_optimizing=[G, D])  # uses Adam for D and the symplectic opt for G
```

As a temporary fix, I suggest replacing `^2.15.0` with `~2.15.0` in the following:

MrMustard/pyproject.toml

Lines 52 to 59 in d31b70f

```toml
tensorflow = { version = "^2.15.0" }
tensorflow-macos = { version = "2.15.0", platform = "darwin", markers = "platform_machine=='arm64'" }
tensorflow-intel = { version = "^2.15.0", platform = "win32" }
# Disabled to prevent taking over GPU:
# tensorflow-cpu = [
#     { version = "^2.15.0", platform = "linux", markers = "platform_machine!='arm64' and platform_machine!='aarch64'" },
#     { version = "^2.15.0", platform = "darwin", markers = "platform_machine!='arm64' and platform_machine!='aarch64'" },]
tensorflow-cpu-aws = { version = "^2.15.0", platform = "linux", markers = "platform_machine=='arm64' or platform_machine=='aarch64'" }
```
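For reference, a sketch of what the proposed pin would look like (same dependency entries as above, only the constraint changes; `~2.15.0` allows patch releases on the 2.15.x series, whereas `^2.15.0` allows anything up to 3.0 and therefore pulls in 2.16):

```toml
# Sketch of the proposed temporary pin
tensorflow = { version = "~2.15.0" }
tensorflow-macos = { version = "2.15.0", platform = "darwin", markers = "platform_machine=='arm64'" }
tensorflow-intel = { version = "~2.15.0", platform = "win32" }
tensorflow-cpu-aws = { version = "~2.15.0", platform = "linux", markers = "platform_machine=='arm64' or platform_machine=='aarch64'" }
```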

Thank you, nice find! Would you like to open a PR and add your name to the contributors for the next release?

Hey @ziofil, thanks for the opportunity! I opened a PR with the quick fix as you suggested.

Hi @xvalcarce! Thanks for putting this all together, super detailed and made things easy. We didn't want to force a lower TensorFlow version on our users, so I just merged a PR to add support for TensorFlow 2.16 - you can check out the details in the PR above, but the tl;dr is that MrMustard now uses the non-legacy version of the Adam optimizer when you have TF 2.16+ installed. Lmk if you have any questions/concerns.
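For anyone hitting this before upgrading, here is a rough sketch of the idea (not MrMustard's actual implementation; the version check via `packaging` and the helper name `make_adam` are my own, only the two `Adam` calls are taken from the discussion above):

```python
# Minimal sketch, assuming we only need to dispatch on the installed TF version:
# TF >= 2.16 ships Keras 3, which no longer exposes the legacy optimizer namespace,
# so we fall back to the non-legacy Adam there and keep the legacy one on older TF.
from packaging.version import Version

import tensorflow as tf


def make_adam(learning_rate: float = 0.001):
    if Version(tf.__version__) >= Version("2.16.0"):
        # Keras 3: use the current Adam optimizer
        return tf.keras.optimizers.Adam(learning_rate=learning_rate)
    # Older TF still ships the Keras 2 legacy optimizers
    return tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate)
```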