ratishsp/data2text-plan-py

Questions about CPU support and tracing generated sentences back to content plan items

BlackFeetMouse opened this issue · 2 comments

Hello, thank you so much for sharing such a nice project.

I have read the paper and I want to ask two questions.

  1. Could you tell me whether the extractor.lua file can be run in a CPU-only environment with the model you provided? I have tried setting the flag `-gpu -1`, but unfortunately it was not working.

  2. Besides, I am wondering whether it is possible to extract the data items that are related to each sentence generated by the model.
    For example, the generated summary contains the sentence *Tristan Thompson chipped in seven points and 13 rebounds as the starting power forward*. Is it possible to trace this sentence back to the content plan items that generated it (Tristan Thompson's PTS item and his REB item)? Are there any variables in the code that I could use to trace this back?

Thank you so much for your time and consideration.

  1. extractor.lua makes use of CUDA tensors, so I think it cannot be used as is. You could probably use it if you replace them with regular tensors, though I haven't tried that.
  2. The sentstr, entarg, and numarg variables will be helpful: sentstr is the decoded sentence, while entarg and numarg are its entity and number arguments, as in the two lines below.
```lua
local sentstr = idxstostring(sent[k], ivocab)
local entarg, numarg = get_args(sent[k], ent_dists[k], num_dists[k], ivocab)
```

https://github.com/ratishsp/data2text-1/blob/649b6ec967ee846523329f2cf426640506b30203/extractor.lua#L499-L500
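As an illustration (not code from the repo), those two lines could be extended inside the loop at the linked location to dump each decoded sentence together with its arguments. The variables are assumed to be in scope as in extractor.lua, and the print format is made up:

```lua
-- Illustrative sketch only: log each decoded sentence with its entity/number
-- arguments so a generated sentence can be traced back to its records.
-- Assumes the scope of the loop at the linked lines in extractor.lua.
local sentstr = idxstostring(sent[k], ivocab)
local entarg, numarg = get_args(sent[k], ent_dists[k], num_dists[k], ivocab)
print(string.format("sentence: %s | entity arg: %s | number arg: %s",
                    sentstr, tostring(entarg), tostring(numarg)))
```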

Thank you so much for your advice and hints. I will try to replace the CUDA tensors with regular tensors to see whether it can run in CPU mode.
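For reference, a minimal, untested sketch of that kind of conversion: load the released checkpoint once on a machine that has cutorch/cunn installed (it was saved with CUDA tensors, so a GPU setup is needed to deserialize it), cast it to float, and save a CPU-only copy. The file names are placeholders, not the repo's actual paths.

```lua
-- Untested sketch: make a CPU-only copy of the released extractor model.
-- Must run once on a machine with cutorch/cunn, since the checkpoint
-- contains CUDA tensors. File names are placeholders.
require 'cunn'

local model = torch.load('extractor_model.t7')   -- placeholder path
model = model:float()                            -- CudaTensor -> FloatTensor
torch.save('extractor_model_cpu.t7', model)      -- load this copy without CUDA
```

If the checkpoint is a table of modules rather than a single network, the same `:float()` cast would need to be applied to each entry.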