felixrosberg/FaceDancer

export model on `train.py`

zengjingming opened this issue · 7 comments

Hi, Thank you for your great work!

I just want to ask how to export a .h5 file when I run python train/train.py. It seems that the export part of the code comes at the top, before training.

Sorry for my ignorance. When I run train.py for the first time, I don't have any weights to load, so how can I export the model?

Hello and thank you!

Yes, this is probably an odd solution, but if the export flag is set to True it will just export the model and exit. If you look a couple of rows above, you have G = load_model_internal(..) and opt.load. So if you set the --load flag to the integer index of a checkpoint, you can load a model and train from there. If you also set --export to True, it will load the model, export it and exit. If you set --export to True but keep --load as None, it will skip the export step. :)
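
For illustration, here is a rough sketch of how that logic fits together in train.py. load_model_internal, opt.load and the --export flag are the names mentioned above; the checkpoint directory argument and the export path are placeholders made up for the example, so check the actual script for the real ones:

# Hypothetical sketch of the --load / --export handling (placeholder arguments and paths)
if opt.load is not None:
    # --load is the integer index of a previously saved checkpoint
    G = load_model_internal(opt.chkp_dir, "generator", opt.load)  # placeholder arguments

    if opt.export:
        # Export the loaded generator as a .h5 file and exit before training starts
        G.save("exports/facedancer_generator.h5")  # placeholder path
        exit()

So a typical export run would look something like python train/train.py --load 100000 --export True (the checkpoint index is just an example), while leaving --load as None skips the export step and starts training from scratch.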

Sorry to bother you again. I'm trying to run this project on a Linux server with GPUs. Could you please tell me which version of TensorFlow I should install? I have tried tensorflow-cpu and the tensorflow-directml-plugin you mentioned, but it seems to only use the CPU.

No worries! That installation method was suggested by my good buddy and contributor netrunner-exe, so I don't know much about it and would not personally install TensorFlow that way. I think any of the later 2.x versions should work, but try installing tensorflow-gpu instead. Also make sure you have the correct version of CUDA. There is a second installation suggestion in the README; perhaps that would work for you. Otherwise I suggest you install tensorflow-gpu and look up whatever else you need on TensorFlow's website.


The directml installation method is more suitable for Windows or WSL. For Linux, I would also suggest trying the installation method described on the official TensorFlow website.

conda install -c conda-forge cudatoolkit=11.2.2 cudnn=8.1.0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
python3 -m pip install tensorflow

Verify install:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

You may need to adjust the command a bit to use cudatoolkit and cudnn versions that better match your particular GPU driver, or maybe not; in your case this can only be found out by trying.
If everything was successful, the last command should list the GPUs in the system.
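
If you want to double-check that operations actually run on the GPU rather than silently falling back to the CPU, a small generic TensorFlow test like the one below should do (this is not part of the FaceDancer code, just a sanity check):

import tensorflow as tf

# List the GPUs TensorFlow can see
print(tf.config.list_physical_devices('GPU'))

# Run a small matmul and check which device it was placed on
with tf.device('/GPU:0'):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)

print(c.device)  # should end with device:GPU:0 on a working install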

Thank you! I have set up my environment successfully!

Can I ask which versions of Python and TensorFlow this project uses?