malllabiisc/WordGCN

Would it be possible for you to release your pre-trained model checkpoint?

Punchwes opened this issue · 6 comments

Hi,

Thanks very much for your work; it's really impressive. I have managed to run the code with the default settings on the given dataset, which consists of 57 million sentences, on a Titan V, and it takes around 18 hours to get through just one epoch (I noticed that the number of negative samples is set to 100; wouldn't that be too large?). Would it be possible for you to also release a pre-trained checkpoint, and may I also ask what GPU and runtime you used?

Many thanks.

Hi @Punchwes,
I have uploaded the checkpoint of the pre-trained model. You can download it from the following link:
https://drive.google.com/open?id=1NmYkldyC23fYmyzooojiF3nlnat7rgP9

I have not personally verified whether the checkpoint can be restored directly with the provided code, as there have been a few modifications. However, you can still access the parameters from the checkpoint.
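As a quick sanity check, one possible way to see which variables the checkpoint contains (and whether their names still match the current code) is to list them without rebuilding the graph. This is only a sketch assuming TensorFlow 1.x, and the checkpoint prefix below is a placeholder for wherever you extracted the download:

import tensorflow as tf

# Placeholder prefix of the downloaded checkpoint; adjust to your local path
ckpt_path = 'path/to/checkpoint_prefix'

# Print every variable name and shape stored in the checkpoint
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)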

Hi @Punchwes, can you use the checkpoint directly? I just tried to restore the model from the checkpoint and it gives some errors.

Hi @eyuansu62 ,

I have not tried to fine-tune the given checkpoint, but I have successfully accessed the parameters in the checkpoint, so I guess it should work.

@Punchwes i am really a fresh in tensorflow. I just use the saver.restore but it gives some errors. So could you please tell me how to get access to the parameters in the checkpoints?

@eyuansu62 If you just want to get the parameters or weights, you can read them like below:

import tensorflow as tf

# Read the checkpoint directly, without rebuilding the model graph
ckpt_path = 'best_model/best_int_avg'
reader = tf.train.NewCheckpointReader(ckpt_path)
# Map from variable name to shape, and load a specific tensor by name
var_to_shape_map = reader.get_variable_to_shape_map()
embedding = reader.get_tensor('Embed_mat/embed_matrix')

You can get the variable names from var_to_shape_map and use get_tensor to retrieve specific weights.
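For example, a minimal follow-up sketch that prints every stored variable and then saves the embedding matrix to disk as a NumPy array (the output filename here is just an illustration):

import numpy as np
import tensorflow as tf

reader = tf.train.NewCheckpointReader('best_model/best_int_avg')

# Print every variable name and its shape to find the tensor you need
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

# Extract the embedding matrix as a NumPy array and save it for later use
embedding = reader.get_tensor('Embed_mat/embed_matrix')
np.save('wordgcn_embeddings.npy', embedding)  # hypothetical output filename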

@Punchwes Thanks a lot!