Performance issue in the definition of `_inference`, MemN2N-split-memory/memn2n/memn2n_dialog.py (P1)
DLPerf opened this issue · 1 comment
DLPerf commented
Hello, I found a performance issue in the definition of _inference, MemN2N-split-memory/memn2n/memn2n_dialog.py, tf.nn.embedding_lookup(self.A, stories)
will be created repeatedly during program execution, resulting in reduced efficiency. I think it should be created before the loop.
The same issue exists for:
- `tf.nn.embedding_lookup(self.A, profile)` in line 158
- `tf.reduce_sum(m_emb, 2)` in lines 160 and 161
- `tf.transpose(m, [0, 2, 1])` in line 183
- `tf.transpose(m_profile, [0, 2, 1])` in line 184
- `tf.nn.embedding_lookup(self.A, stories)` in line 154
- `tf.reduce_sum(m_emb, 2)` in line 155
- `tf.transpose(m, [0, 2, 1])`
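To illustrate the pattern being reported, here is a minimal sketch (a toy stand-in, not real TensorFlow) of a graph builder that counts how many nodes are added. In TF1-style graph construction, every call such as `tf.nn.embedding_lookup(...)` inside a Python loop appends a new node to the graph on each iteration, whereas creating the tensor once before the loop and reusing it adds only one node:

```python
class ToyGraph:
    """Hypothetical stand-in for a TF1 graph: records every op created."""

    def __init__(self):
        self.ops = []

    def embedding_lookup(self, table, ids):
        # Models tf.nn.embedding_lookup: each call during graph
        # construction appends a new node to the graph.
        self.ops.append(("embedding_lookup", table, ids))
        return len(self.ops) - 1  # handle to the new node


def build_unhoisted(graph, hops=3):
    # Pattern reported in the issue: the op is re-created on every hop.
    for _ in range(hops):
        m_emb = graph.embedding_lookup("A", "stories")
    return m_emb


def build_hoisted(graph, hops=3):
    # Suggested fix: create the op once, before the loop, and reuse it.
    m_emb = graph.embedding_lookup("A", "stories")
    for _ in range(hops):
        _ = m_emb  # reuse the same graph node in each hop
    return m_emb


g1, g2 = ToyGraph(), ToyGraph()
build_unhoisted(g1)
build_hoisted(g2)
print(len(g1.ops), len(g2.ops))  # 3 nodes vs. 1 node
```

The hoisted version produces the same tensor for every hop while keeping the graph small; the same refactor applies to the `tf.reduce_sum` and `tf.transpose` calls listed above whenever their inputs do not change between iterations.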
Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.
chaitjo commented
Sorry, this codebase is not being maintained.