n2v example notebooks throw errors
Histo23 opened this issue · 1 comment
Hi,
I have been using the n2v notebook for 3D denoising with my own images for quite some time, but it recently stopped working. I went back and tried to run the example training notebooks for 3D and 2D-RGB with the provided example data, and I get the same error when running the actual training. I am running csbdeep 0.5.2 and n2v 0.2.1 (if I try to use csbdeep 0.6.0 I get "n2v 0.2.1 has requirement csbdeep<0.6.0,>=0.4.0, but you'll have csbdeep 0.6.0 which is incompatible.").
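For reference, my Colab setup cell pins the compatible versions roughly like this (just a sketch; the exact install line in the example notebooks may differ):

!pip install "n2v==0.2.1" "csbdeep>=0.4.0,<0.6.0"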
Here is the error that shows up after executing "history = model.train(X, X_val)":
8 blind-spots will be generated per training patch of size (64, 64).
Preparing validation data: 100%|██████████| 848/848 [00:00<00:00, 1293.30it/s]
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
- https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
- https://github.com/tensorflow/addons
- https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py:245: The name tf.summary.image is deprecated. Please use tf.compat.v1.summary.image instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py:273: The name tf.summary.merge is deprecated. Please use tf.compat.v1.summary.merge instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py:280: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.
Epoch 1/25
39/39 [==============================] - 18s 464ms/step - loss: 0.7861 - n2v_mse: 0.7861 - n2v_abs: 0.7184 - val_loss: 0.7623 - val_n2v_mse: 0.7605 - val_n2v_abs: 0.7256
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/n2v/models/n2v_standard.py:316: The name tf.Summary is deprecated. Please use tf.compat.v1.Summary instead.
AttributeError Traceback (most recent call last)
in <module>()
----> 1 history = model.train(X, X_val)
5 frames
/usr/local/lib/python3.6/dist-packages/n2v/models/n2v_standard.py in on_epoch_end(self, epoch, logs)
316 summary = tf.Summary()
317 summary_value = summary.value.add()
--> 318 summary_value.simple_value = value.item()
319 summary_value.tag = name
320 self.writer.add_summary(summary, epoch)
AttributeError: 'float' object has no attribute 'item'
Also, running the 3D n2v prediction notebook on a previously trained model gives an error. Executing:
model_name = '300_32_test_dapi_works'
basedir = '/content/gdrive/My Drive/CARE/n2v_3D/models'
model = N2V(config=None, name=model_name, basedir=basedir)
leads to:
AttributeError Traceback (most recent call last)
in <module>()
2 model_name = '300_32_test_dapi_works'
3 basedir = '/content/gdrive/My Drive/CARE/n2v_3D/models'
----> 4 model = N2V(config=None, name=model_name, basedir=basedir)
2 frames
/usr/local/lib/python3.6/dist-packages/n2v/models/n2v_config.py in is_valid(self, return_invalid)
235 'normal_fitted', 'identity']
236 ok['n2v_neighborhood_radius']= _is_int(self.n2v_neighborhood_radius, 0)
--> 237 ok['single_net_per_channel'] = isinstance( self.single_net_per_channel, bool )
238
239 if self.structN2Vmask is None:
AttributeError: 'N2VConfig' object has no attribute 'single_net_per_channel'
I just wonder what the reason might be.
Best,
Christian
Hi @Histo23,
Thank you for reporting.
I will try to reproduce the first error. Which keras version are you using?
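In the meantime, a possible local workaround for the first error (just a sketch, untested; it assumes the logged metric now arrives as a plain Python float rather than a numpy scalar) would be to make the .item() call in on_epoch_end defensive:

# n2v/models/n2v_standard.py, on_epoch_end (around line 318 in the traceback above)
summary = tf.Summary()
summary_value = summary.value.add()
# simple_value expects a float; fall back to float() when the value has no .item()
summary_value.simple_value = value.item() if hasattr(value, 'item') else float(value)
summary_value.tag = name
self.writer.add_summary(summary, epoch)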
The second error seems to come from the new config parameter single_net_per_channel, which was introduced in v0.2.1. This new option is turned on by default and trains an independent U-Net for each channel of a multi-channel image to avoid channel bleed-through artifacts.
You could hack the config of old N2V training runs by adding "single_net_per_channel": False.
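For an old model folder that fails with the error above, a minimal sketch of that hack (assuming the model directory contains the usual config.json written by csbdeep) would be:

import json, os

# Folder of the previously trained model from the prediction notebook above
basedir = '/content/gdrive/My Drive/CARE/n2v_3D/models'
model_name = '300_32_test_dapi_works'
config_path = os.path.join(basedir, model_name, 'config.json')

with open(config_path) as f:
    config = json.load(f)

# Older training runs predate this option; add it so the config validation passes.
config['single_net_per_channel'] = False

with open(config_path, 'w') as f:
    json.dump(config, f)

After that, N2V(config=None, name=model_name, basedir=basedir) should load the model again.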