aux_models is missing and error with l1_Fourier_lifted
nzilberstein opened this issue · 4 comments
Hi,
I have the following issues:
- aux_model in LDAMP doesn't seem to be in the repo.
- When running l1_fourier_lifted as is, I get the following error:
Traceback (most recent call last):
File "/home/nicolas/.local/lib/python3.9/site-packages/sigpy/prox.py", line 49, in __call__
output = self._prox(alpha, input)
File "/home/nicolas/.local/lib/python3.9/site-packages/sigpy/prox.py", line 276, in _prox
return thresh.soft_thresh(self.lamda * alpha, input)
File "/home/nicolas/.local/lib/python3.9/site-packages/sigpy/thresh.py", line 33, in soft_thresh
return _soft_thresh(lamda, input)
numpy.core._exceptions.UFuncTypeError: ufunc '_soft_thresh' did not contain a loop with signature matching types (<class 'numpy.dtype[float64]'>, <class 'numpy.dtype[complex128]'>) -> <class 'numpy.dtype[complex128]'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/nicolas/nicolas/MIMO_detection_project/Langevin_joint_symbs_H/score-based-channels/test_l1Fourier_lifted.py", line 183, in <module>
alg.update()
File "/home/nicolas/.local/lib/python3.9/site-packages/sigpy/alg.py", line 63, in update
self._update()
File "/home/nicolas/.local/lib/python3.9/site-packages/sigpy/alg.py", line 189, in _update
backend.copyto(self.x, self.proxg(self.alpha, self.x))
File "/home/nicolas/.local/lib/python3.9/site-packages/sigpy/prox.py", line 52, in __call__
raise RuntimeError('Exceptions from {}.'.format(self)) from e
RuntimeError: Exceptions from <[256, 64] L1Reg Prox>.
This seems to be related to the lifting parameter, but when I change the values to match the channel size, or even use lifting 1, I still get the same issue.
Thanks,
Nicolas.
Thanks for raising these. I am currently working through a major cleanup which should also address #3. As part of this, I've reworked both the LDAMP and fsAD (L1) baselines; answers to your individual issues are below.
Note that I've also updated requirements.txt and was able to run both LDAMP and fsAD in a clean environment from scratch.
I also downloaded the exact train and test data used, with the following commands (run from the main folder):
mkdir data
curl -L https://utexas.box.com/shared/static/nmyg5s06r6m2i5u0ykzlhm4vjiqr253m.mat --output ./data/CDL-C_Nt64_Nr16_ULA0.50_seed1234.mat
curl -L https://utexas.box.com/shared/static/2a7tavjo9hk3wyhe9vv0j7s2l6en4mj7.mat --output ./data/CDL-C_Nt64_Nr16_ULA0.50_seed4321.mat
- Thanks for catching this. I have now added the missing files, and
python train_ldamp.py
should work (by default it trains on CDL-C over a wide SNR range, one model per SNR point, as in the paper).
- I didn't try to reproduce the exact curve in the paper, but for CDL-C it looked right at -30 dB. Note that I also fixed the SNR mismatch issue in #3 and added an automated plot after
python test_ldamp.py
which should save results automatically.
- Hmm, this error looks more to me like a backend ufunc/BLAS issue caused by mismatched data types. It's complaining that the first input to
output = self._prox(alpha, input)
is np.float64, whereas it expected np.complex128, from what I can tell. However, I am not able to reproduce this in my setup.
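For reference, the complex soft-thresholding that an L1 prox applies can be sketched in pure NumPy (this is my own illustrative helper, not sigpy's actual implementation); written on magnitudes, a real-valued threshold applied to a complex input is not a problem:

```python
import numpy as np

def soft_thresh(lamda, x):
    # Magnitude-based soft threshold: shrinks |x| by lamda, keeps the phase.
    # Works for complex inputs because the comparison is done on np.abs(x).
    mag = np.abs(x)
    scale = np.maximum(mag - lamda, 0.0) / np.maximum(mag, np.finfo(float).tiny)
    return scale * x

x = np.array([3 + 4j, 0.1 + 0.1j], dtype=np.complex128)
print(soft_thresh(0.5, x))  # first entry shrinks to 2.7+3.6j, second is zeroed
```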
On my setup, running Ubuntu 22.04 with an Intel i9-10980XE processor, I did the following:
1. Create a fresh conda environment with Python 3.10.9 in it.
2. Activate it and run conda install pip.
3. Run pip install -r requirements.txt (make sure you clone the latest main branch).
4. Run python test_l1Fourier_lifted.py --lifting 1 --steps 250 (note that steps is a new arg, part of the rework). Takes about 40 seconds to run for me.
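Condensed, the steps above look like this (the environment name sigpy-test is my placeholder):

```shell
conda create -n sigpy-test python=3.10.9 -y
conda activate sigpy-test
conda install pip -y
pip install -r requirements.txt   # from a checkout of the latest main branch
python test_l1Fourier_lifted.py --lifting 1 --steps 250
```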
The above worked for me and gave the following console output, which roughly matches the Lasso curve in the paper (for the paper we tuned the step size and learning rate extensively):
SNR = -10.00 dB, NMSE = 12.39 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = -5.00 dB, NMSE = 7.62 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = 0.00 dB, NMSE = 3.06 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = 5.00 dB, NMSE = -1.60 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = 10.00 dB, NMSE = -6.00 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = 15.00 dB, NMSE = -9.89 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = 20.00 dB, NMSE = -12.77 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = 25.00 dB, NMSE = -14.32 dB using lambda = 3.0e-01 and step size = 3.0e-03
SNR = 30.00 dB, NMSE = -14.93 dB using lambda = 3.0e-01 and step size = 3.0e-03
If you're willing, could you please try these steps in a new environment (including downloading the data) and let me know how it goes? If you're not using conda, you can try directly in a plain Python environment, ideally 3.10.
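For context, the NMSE values in dB reported above are the usual normalized error, 10·log10(‖Ĥ − H‖² / ‖H‖²); a minimal sketch (nmse_db is my own hypothetical helper, not a function from the repo):

```python
import numpy as np

def nmse_db(h_est, h_true):
    # Normalized mean squared error in dB over the whole array.
    err = np.linalg.norm(h_est - h_true) ** 2
    ref = np.linalg.norm(h_true) ** 2
    return 10 * np.log10(err / ref)

h = np.ones(4, dtype=np.complex128)
print(nmse_db(0.9 * h, h))  # 1% relative error power -> -20 dB
```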
Thanks for the fast answer!
- L-DAMP works now. However, after training, when I ran the testing I couldn't replicate the results. I obtain the following:
SNR = -9.94 dB, NMSE = -0.48 dB
SNR = -4.94 dB, NMSE = -1.18 dB
SNR = 0.06 dB, NMSE = -2.07 dB
SNR = 5.06 dB, NMSE = -2.82 dB
SNR = 10.06 dB, NMSE = -3.37 dB
SNR = 15.06 dB, NMSE = -3.61 dB
SNR = 20.06 dB, NMSE = -3.74 dB
SNR = 25.06 dB, NMSE = -3.79 dB
- This also works fine now. I followed your steps and I got the same results.
Thanks for catching this; it turns out there was a bug in test_ldamp.py
when fetching the ground truth. This should now be fixed in the latest main branch. I also added printing code that was missing from the test script.
The good news is that all trained models should be reusable as they are. I trained one at 30 dB, and by running:
python test_ldamp.py --snr_range 30
I get:
Learned D-AMP: SNR = 30.00 dB, NMSE = -19.98 dB
which seems to match Figure 5c at the end point. You should now be able to reproduce the entire curve without retraining.
Yes, now it works well! Thanks for the nice work!