Horea94/Fruit-Images-Dataset

On macOS, after cloning and running, this error occurs: OSError: Unable to open file (unable to open file: name = 'output_files/fruit-360 model/model.h5',

Closed this issue · 2 comments

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
data (InputLayer)            (None, 100, 100, 3)       0         
_________________________________________________________________
lambda_1 (Lambda)            (None, 100, 100, 4)       0         
_________________________________________________________________
conv1 (Conv2D)               (None, 100, 100, 16)      1616      
_________________________________________________________________
conv1_relu (Activation)      (None, 100, 100, 16)      0         
_________________________________________________________________
pool1 (MaxPooling2D)         (None, 50, 50, 16)        0         
_________________________________________________________________
conv2 (Conv2D)               (None, 50, 50, 32)        12832     
_________________________________________________________________
conv2_relu (Activation)      (None, 50, 50, 32)        0         
_________________________________________________________________
pool2 (MaxPooling2D)         (None, 25, 25, 32)        0         
_________________________________________________________________
conv3 (Conv2D)               (None, 25, 25, 64)        51264     
_________________________________________________________________
conv3_relu (Activation)      (None, 25, 25, 64)        0         
_________________________________________________________________
pool3 (MaxPooling2D)         (None, 12, 12, 64)        0         
_________________________________________________________________
conv4 (Conv2D)               (None, 12, 12, 128)       204928    
_________________________________________________________________
conv4_relu (Activation)      (None, 12, 12, 128)       0         
_________________________________________________________________
pool4 (MaxPooling2D)         (None, 6, 6, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0         
_________________________________________________________________
fcl1 (Dense)                 (None, 1024)              4719616   
_________________________________________________________________
dropout_1 (Dropout)          (None, 1024)              0         
_________________________________________________________________
fcl2 (Dense)                 (None, 128)               131200    
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
predictions (Dense)          (None, 121)               15609     
=================================================================
Total params: 5,137,065
Trainable params: 5,137,065
Non-trainable params: 0
_________________________________________________________________
None
Found 55357 images belonging to 121 classes.
Found 6119 images belonging to 121 classes.
Found 20618 images belonging to 121 classes.
Epoch 1/25
1108/1108 [==============================] - 278s 251ms/step - loss: 3.7775 - accuracy: 0.1765 - val_loss: 2.2806 - val_accuracy: 0.7799
Epoch 2/25
/Users/mingh/.pyenv/versions/3.8.1/lib/python3.8/site-packages/keras/callbacks/callbacks.py:706: RuntimeWarning: Can save best model only with val_acc available, skipping.
  warnings.warn('Can save best model only with %s available, '
1108/1108 [==============================] - 276s 249ms/step - loss: 0.7650 - accuracy: 0.7800 - val_loss: 0.3695 - val_accuracy: 0.9317
Epoch 3/25
1108/1108 [==============================] - 270s 244ms/step - loss: 0.2649 - accuracy: 0.9177 - val_loss: 0.4914 - val_accuracy: 0.9593
Epoch 4/25
1108/1108 [==============================] - 271s 245ms/step - loss: 0.1443 - accuracy: 0.9546 - val_loss: 0.0690 - val_accuracy: 0.9747
Epoch 5/25
1108/1108 [==============================] - 275s 248ms/step - loss: 0.0923 - accuracy: 0.9704 - val_loss: 0.2997 - val_accuracy: 0.9745
Epoch 6/25
1108/1108 [==============================] - 279s 252ms/step - loss: 0.0671 - accuracy: 0.9788 - val_loss: 0.0598 - val_accuracy: 0.9799
Epoch 7/25
1108/1108 [==============================] - 270s 243ms/step - loss: 0.0487 - accuracy: 0.9843 - val_loss: 0.0040 - val_accuracy: 0.9822
Epoch 8/25
1108/1108 [==============================] - 266s 240ms/step - loss: 0.0446 - accuracy: 0.9862 - val_loss: 0.0636 - val_accuracy: 0.9820
Epoch 9/25
1108/1108 [==============================] - 265s 239ms/step - loss: 0.0370 - accuracy: 0.9887 - val_loss: 0.0085 - val_accuracy: 0.9806
Epoch 10/25
1108/1108 [==============================] - 266s 240ms/step - loss: 0.0304 - accuracy: 0.9901 - val_loss: 0.0721 - val_accuracy: 0.9835

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.05000000074505806.
Epoch 11/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0136 - accuracy: 0.9958 - val_loss: 0.0014 - val_accuracy: 0.9874
Epoch 12/25
1108/1108 [==============================] - 266s 240ms/step - loss: 0.0101 - accuracy: 0.9968 - val_loss: 0.1642 - val_accuracy: 0.9873
Epoch 13/25
1108/1108 [==============================] - 276s 249ms/step - loss: 0.0085 - accuracy: 0.9972 - val_loss: 2.9743e-04 - val_accuracy: 0.9887
Epoch 14/25
1108/1108 [==============================] - 267s 241ms/step - loss: 0.0088 - accuracy: 0.9971 - val_loss: 4.8541e-04 - val_accuracy: 0.9887
Epoch 15/25
1108/1108 [==============================] - 270s 244ms/step - loss: 0.0082 - accuracy: 0.9976 - val_loss: 0.0287 - val_accuracy: 0.9882
Epoch 16/25
1108/1108 [==============================] - 267s 241ms/step - loss: 0.0077 - accuracy: 0.9976 - val_loss: 1.9202e-04 - val_accuracy: 0.9877
Epoch 17/25
1108/1108 [==============================] - 262s 237ms/step - loss: 0.0076 - accuracy: 0.9978 - val_loss: 0.0138 - val_accuracy: 0.9891
Epoch 18/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0073 - accuracy: 0.9977 - val_loss: 0.1534 - val_accuracy: 0.9899
Epoch 19/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0063 - accuracy: 0.9978 - val_loss: 0.0231 - val_accuracy: 0.9894

Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.02500000037252903.
Epoch 20/25
1108/1108 [==============================] - 272s 245ms/step - loss: 0.0045 - accuracy: 0.9986 - val_loss: 0.0017 - val_accuracy: 0.9918
Epoch 21/25
1108/1108 [==============================] - 279s 252ms/step - loss: 0.0034 - accuracy: 0.9991 - val_loss: 0.0782 - val_accuracy: 0.9889
Epoch 22/25
1108/1108 [==============================] - 270s 244ms/step - loss: 0.0041 - accuracy: 0.9987 - val_loss: 0.0175 - val_accuracy: 0.9895

Epoch 00022: ReduceLROnPlateau reducing learning rate to 0.012500000186264515.
Epoch 23/25
1108/1108 [==============================] - 269s 242ms/step - loss: 0.0033 - accuracy: 0.9991 - val_loss: 0.0074 - val_accuracy: 0.9918
Epoch 24/25
1108/1108 [==============================] - 268s 242ms/step - loss: 0.0030 - accuracy: 0.9991 - val_loss: 0.0096 - val_accuracy: 0.9907
Epoch 25/25
1108/1108 [==============================] - 272s 246ms/step - loss: 0.0024 - accuracy: 0.9994 - val_loss: 8.0244e-06 - val_accuracy: 0.9920
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-2-0f6530fe8848> in <module>
     40 
     41 model = network(input_shape=input_shape, num_classes=num_classes)
---> 42 train_and_evaluate_model(model, name="fruit-360-model")

<ipython-input-1-061a40b784e9> in train_and_evaluate_model(model, name, epochs, batch_size, verbose, useCkpt)
    130                                   callbacks=[learning_rate_reduction, save_model])
    131 
--> 132     model.load_weights(model_out_dir + "/model.h5")
    133 
    134     validationGen.reset()

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/keras/engine/saving.py in load_wrapper(*args, **kwargs)
    490                 os.remove(tmp_filepath)
    491             return res
--> 492         return load_function(*args, **kwargs)
    493 
    494     return load_wrapper

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
   1219         if h5py is None:
   1220             raise ImportError('`load_weights` requires h5py.')
-> 1221         with h5py.File(filepath, mode='r') as f:
   1222             if 'layer_names' not in f.attrs and 'model_weights' in f:
   1223                 f = f['model_weights']

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/h5py/_hl/files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, **kwds)
    404             with phil:
    405                 fapl = make_fapl(driver, libver, rdcc_nslots, rdcc_nbytes, rdcc_w0, **kwds)
--> 406                 fid = make_fid(name, mode, userblock_size,
    407                                fapl, fcpl=make_fcpl(track_order=track_order),
    408                                swmr=swmr)

~/.pyenv/versions/3.8.1/lib/python3.8/site-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
    171         if swmr and swmr_support:
    172             flags |= h5f.ACC_SWMR_READ
--> 173         fid = h5f.open(name, flags, fapl=fapl)
    174     elif mode == 'r+':
    175         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5f.pyx in h5py.h5f.open()

OSError: Unable to open file (unable to open file: name = 'output_files/fruit-360 model/model.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

Thank you for pointing this out.
It seems that at some point Keras renamed the accuracy key in the metrics list from 'acc' to 'accuracy' (and correspondingly 'val_acc' to 'val_accuracy'). The checkpoint callback was therefore unable to find the 'val_acc' metric it was monitoring, so it never saved the model. And because the model was never saved, loading it from disk afterwards caused the "File not found" error.
I updated the code and it should be working now.
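The failure mode can be sketched without Keras itself: ModelCheckpoint looks up its `monitor` key in each epoch's `logs` dict, and when the key is absent it skips saving with exactly the RuntimeWarning seen above. The snippet below is a minimal illustration of that lookup, not the actual Keras implementation; the `epoch_logs` keys are assumptions based on the metric names visible in the training log.

```python
# Sketch of why the checkpoint silently skipped saving: the callback
# monitors a metric by name, and newer Keras reports 'val_accuracy'
# rather than 'val_acc', so the old monitor name is never found.

def checkpoint_would_save(monitor, logs):
    """Return True if the monitored metric is present in this epoch's logs."""
    return logs.get(monitor) is not None

# Per-epoch logs as newer Keras produces them (key names taken from the
# training output above):
epoch_logs = {
    "loss": 0.7650, "accuracy": 0.7800,
    "val_loss": 0.3695, "val_accuracy": 0.9317,
}

print(checkpoint_would_save("val_acc", epoch_logs))       # old key: False, model never saved
print(checkpoint_would_save("val_accuracy", epoch_logs))  # renamed key: True, model is saved
```

The corresponding one-line fix in the training script is to construct the callback with the new name, e.g. `ModelCheckpoint(..., monitor='val_accuracy')`, so that `model.load_weights(model_out_dir + "/model.h5")` later finds a file to open.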

It works. Thank you very much. 😊