Problem loading the "standard" model
Closed this issue · 8 comments
Hi @mameli,
I was trying to use your model and reproduce results, but I can't load the "standard" model due to the perceptual_similarity module not being found.
Traceback (most recent call last):
File "C:\Users\apili\Downloads\Artifact_Removal_GAN-1.0\inference.py", line 14, in <module>
learner = load_learner(path=root_model_path, file=exported_model)
File "C:\Users\apili\anaconda3\envs\fastai\lib\site-packages\fastai\basic_train.py", line 621, in load_learner
state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
File "C:\Users\apili\anaconda3\envs\fastai\lib\site-packages\torch\serialization.py", line 367, in load
return _load(f, map_location, pickle_module)
File "C:\Users\apili\anaconda3\envs\fastai\lib\site-packages\torch\serialization.py", line 538, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'perceptual_similarity'
However, I have this module installed and can't seem to solve it on my own. Any help is welcome.
Thanks in advance,
Marta Marques
Hi Marta,
I'm sorry, but the file "inference.py" is not in my repository, so I can't fully help you from the traceback alone. If you just want to use the pre-trained model, you don't need the perceptual_similarity module. You can see how the images are generated in the App.py script; specifically, the "process_image()" function shows the steps for using the weights with the fastai learner (a short sketch of that flow follows this comment).
If you still have some trouble you can email me.
Filippo
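For reference, here is a minimal sketch of that flow. It is not the exact App.py code: the input and output file names are placeholders, and it assumes the fastai v1 API used elsewhere in this thread.
from fastai.vision import *                # fastai v1 star import; brings in load_learner, open_image, Path
root_model_path = Path("./models/")
exported_model = "standard.pkl"            # the pre-trained export
learner = load_learner(path=root_model_path, file=exported_model)
img = open_image("compressed_input.jpg")   # placeholder input file
pred, _, _ = learner.predict(img)          # first element should be the reconstructed fastai Image
pred.save("restored_output.png")           # placeholder output file
If the export is an image-to-image learner, as App.py suggests, predict should return the restored Image as its first element, so it can be saved directly.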
I have the same problem...
I want to reproduce your prediction results. I referred to the 'process_image' function in App.py and the model 'standard.pkl', and I think the model 'standard.pkl' needs 'perceptual_similarity', because when I run:
learner = load_learner(path=root_model_path, file=exported_model)
it still raises the error: ModuleNotFoundError: No module named 'perceptual_similarity'
Does this mean I can't use App.py to predict with the model 'standard.pkl'? Maybe I have to retrain a new model to avoid the 'perceptual_similarity' dependency. Would you give me some suggestions?
Hello everyone,
I had to make a few changes to the code because the perceptual_similarity module was renamed to "lpips", and load_learner wasn't working properly because of that (a possible workaround for the old export is sketched after this comment).
I added an environment.yml file for creating a new virtual env, and a new notebook without Flask to play with the models.
You can find the installation steps in the readme.md.
Let me know if you find any other problems
Thank you for your understanding
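For anyone who still wants to load the old export instead of re-downloading, one possible (untested here) workaround for a renamed dependency is to alias the old module name before unpickling. This only helps if the names stored in the pickle still exist at the same paths inside the lpips package:
import sys
import lpips                               # the package that replaced perceptual_similarity
# Make the old module name resolvable so torch.load/unpickling doesn't fail.
sys.modules['perceptual_similarity'] = lpips
from fastai.vision import *
learner = load_learner(path=Path("./models/"), file="standard.pkl")
If the pickle references submodules of perceptual_similarity rather than top-level names, this alias is not enough, and using the re-exported 1.1 weights is the cleaner fix.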
Sorry, I have the same problem.
I work on Colab. I used the conda environment from your environment.yml. This is my code:
%%bash
source activate arnet_env
python3.7
import os
import sys
os.environ['CUDA_VISIBLE_DEVICES']='1'
from fastai import *
from fastai.vision import *
torch.backends.cudnn.benchmark=True
root_model_path = Path("./models/")
exported_model = Path("standard.pkl")
learner = load_learner (path=root_model_path, file=exported_model)
And the error is the same:
Traceback (most recent call last):
File "", line 13, in
File "/usr/local/lib/python3.7/dist-packages/fastai/basic_train.py", line 621, in load_learner
state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 772, in _legacy_load
result = unpickler.load()
ModuleNotFoundError: No module named 'perceptual_similarity'
And I am sure the problem is in the last line:
learner = load_learner (path=root_model_path, file=exported_model)
Hi @goldenbili,
You have to download the new weights from the new release https://github.com/mameli/Artifact_Removal_GAN/releases/tag/1.1 (a download sketch follows below).
The old weights are linked to the perceptual_similarity module, which no longer exists under that name.
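In case it helps the next person, here is a sketch of fetching the 1.1 weights and loading them. The asset URL below is an assumption built from the release tag and the file name mentioned above, so double-check it on the releases page:
import urllib.request
from pathlib import Path
from fastai.vision import load_learner
models_dir = Path("./models")
models_dir.mkdir(exist_ok=True)
# Assumed direct-download URL for the release asset; verify it on the release page.
url = "https://github.com/mameli/Artifact_Removal_GAN/releases/download/1.1/standard.pkl"
urllib.request.urlretrieve(url, str(models_dir / "standard.pkl"))
learner = load_learner(path=models_dir, file="standard.pkl")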
Thank you very much @mameli.
This works fine for me and makes it extremely easy to get results.
Just wondering if it is normal for the network to create some random blue squares in some of my test images or if it's something I'm doing wrong when handling these particular images.
Thank you @mameli.
The test works very well, and the results are amazing.
I will try to run your training later.
Hi @martafilipa,
I'm aware of this problem, and I must say that the pre-trained model is not ready for production use. The training dataset consists of just 800 images, and because of that the model can produce unstable results when it can't "recognise" some portion of a picture. Narrowing down the scope of the model and using a broader dataset would help a lot. I tested the same model architecture trained with a different, much larger dataset to restore some VHS-like videos (link), and the results are more reliable.
I'm confident that the artifact removal model can be improved with a bigger dataset, but sadly the training time/cost is too high for a simple "proof of concept" project.