intel/neuro-vectorizer

[Running on a GeForce GTX 1070 8GB machine]

Closed this issue · 11 comments

Hi @AmeerHajAli,

Thank you for yesterday's response.
I'm able to move forward now.
However, the configure.sh file has default values tuned for a Tesla K80 GPU.
How should I change WORD_VOCAB_SIZE, PATH_VOCAB_SIZE, TARGET_VOCAB_SIZE, and the other defaults to match my machine's specifications?
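A hedged aside: in code2vec-style setups, variables like these usually describe vocabulary sizes of the training corpus rather than GPU parameters, so one way to pick values for a new dataset is to count its distinct tokens. A minimal sketch, with illustrative function and token names (not the project's actual code):

```python
# Illustrative sketch: estimate a vocabulary size from your own corpus,
# keeping only tokens that appear at least min_count times.
from collections import Counter

def vocab_size(tokens, min_count=1):
    counts = Counter(tokens)
    return sum(1 for c in counts.values() if c >= min_count)

corpus = "for i in range n a i b i c i".split()
print(vocab_size(corpus))  # number of distinct tokens in the toy corpus
```

Raising `min_count` shrinks the vocabulary by dropping rare tokens, which is the usual knob when the defaults are too large for your dataset.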

I'm currently working on a 9th-gen Intel Core i9 processor with 32 GB DDR4 RAM and a GeForce GTX 1070 card.

Thanks and Regards,
Vinayak N Baddi

Can you try using these values too and report how well it performs?

Hi @AmeerHajAli ,

Which Values?
The same as in configure.sh file?

yes.

Hi @AmeerHajAli ,

I tried running with the default values in the configure file.
It starts running, then appears to hang partway through without any error message, while CPU utilization stays at 100%.

Please let me know if you have any suggestions.
How long does autovec.py typically take to run?

Hi @vinayak618, this is because it is training, and that takes a while. I will try to upload something soon that you can use to speed up the process. It will basically be a pretrained model.

Hi @AmeerHajAli ,

Is the training happening on the CPU?
When I check nvidia-smi after running autovec.py, there is no GPU utilization,
but CPU utilization is at 100% the whole time.

The training is not too expensive, so we do not run it on the GPU. The main cause of CPU utilization is the compilation/execution of programs and the generation of the bag-of-words encodings.
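The bag-of-words encoding mentioned here can be sketched roughly as follows; this is an illustrative toy version with made-up vocabulary and tokens, not the project's actual implementation:

```python
# Toy bag-of-words encoder: map a token sequence to a fixed-length
# count vector over a known vocabulary.
from collections import Counter

def bag_of_words(tokens, vocab):
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

vocab = ["load", "store", "add", "mul"]
print(bag_of_words(["add", "add", "load"], vocab))  # → [1, 0, 2, 0]
```

Since every program in the training loop has to be tokenized and counted this way (on top of being compiled and executed), the work is CPU-bound, which matches the 100% CPU utilization observed above.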
If you want to train on the GPU, you can set num_gpus to 1 in autovec.py.
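For context, this switch works like the resource settings in RLlib-style trainer configs, where a nonzero num_gpus allocates a GPU to the trainer process. A minimal sketch of the idea; the config keys below are examples, not the exact dict in autovec.py:

```python
# Illustrative config toggle for GPU training. Keys are examples only.
def enable_gpu(config):
    """Return a copy of the training config with GPU training enabled."""
    out = dict(config)
    out["num_gpus"] = 1
    return out

cpu_config = {"num_gpus": 0, "num_workers": 4}
print(enable_gpu(cpu_config)["num_gpus"])  # → 1
```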

Would you prefer to have a pretrained model that you can use directly to vectorize your code or you are trying to train on your own new code?

Hi @AmeerHajAli,

I would like to train manually and see the results,
but in the meantime it would be great if you could share the pretrained model so I can compare the results.

Hi @AmeerHajAli,

I have been trying to run the code on the CPU for the last two days, and it is still stuck.
I am now trying to run it on CUDA, but I am facing a CUDA/TensorFlow error.
Which specific CUDA, cuDNN, and TensorFlow versions did you install to train on the GPU?

I refactored the code and added some examples and a very simple pretrained model. The training time should also be improved. Can you follow the instructions again and let me know if you still have any issues?

Hi @AmeerHajAli ,

Thank you so much for the support.
Both training and inference now work using your provided pretrained model.

Thanks a lot.