wasiahmad/NeuralCodeSum

Project dependencies may have API risk issues

PyDeps opened this issue

Hi. In NeuralCodeSum, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints that the project currently uses:

numpy
tqdm
nltk
prettytable
torch>=1.3.0

A strict == constraint risks dependency conflicts because it pins a dependency to a single version.
A constraint with no upper bound (or *) risks missing-API errors, because the latest version of a dependency may remove APIs the project relies on.

After further analysis of this project, we suggest:

The version constraint of tqdm can be changed to >=4.36.0,<=4.64.0.
The version constraint of prettytable can be changed to >=0.6,<=1.0.1.

These ranges reduce the risk of dependency conflicts as much as possible while still allowing the newest versions that do not trigger API errors in the project.
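For illustration, here is a minimal sketch of how the suggested bounds could be declared via setuptools.setup (setuptools.setup and setuptools.find_packages both appear in the call list below). The package name and exact file layout are assumptions; adapt this to the project's actual setup.py or requirements.txt.

```python
# Hypothetical setup.py sketch with the suggested version bounds;
# the real NeuralCodeSum packaging may differ.
from setuptools import setup, find_packages

setup(
    name="c2nl",                          # assumed package name
    packages=find_packages(),
    install_requires=[
        "numpy",
        "tqdm>=4.36.0,<=4.64.0",          # suggested bound from this report
        "nltk",
        "prettytable>=0.6,<=1.0.1",       # suggested bound from this report
        "torch>=1.3.0",
    ],
)
```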

The current project invokes all of the following methods.

Methods called from tqdm:
tqdm.tqdm
tqdm.tqdm.set_description
Methods called from prettytable:
prettytable.PrettyTable
prettytable.PrettyTable.add_row
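
For context, the sketch below shows how the tqdm and prettytable calls listed above are typically used. It is illustrative only, not the project's actual code; the description string, field names, and row values are made up.

```python
# Illustrative usage of the tqdm / prettytable API surface found by the analysis.
from tqdm import tqdm
from prettytable import PrettyTable

# tqdm.tqdm and tqdm.tqdm.set_description
pbar = tqdm(range(100))
for step in pbar:
    pbar.set_description("epoch 1 | step %d" % step)

# prettytable.PrettyTable and prettytable.PrettyTable.add_row
table = PrettyTable()
table.field_names = ["metric", "value"]
table.add_row(["bleu", 0.0])
print(table)
```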
All methods called by the project:
os.path.isfile
i.code_words.size
result.F.relu.view
pad_indices.extend
lengths.tolist
self.named_parameters
self.scores.unsqueeze.expand_as.view
target.view.size
i.self.layer
c2nl.utils.misc.relative_matmul
size.self.tt.FloatTensor.zero_
c2nl.inputters.timer.AverageMeter.update
self.Decoder.super.__init__
torch.nn.ReLU
self.proj
nltk.translate.bleu_score.corpus_bleu
torch.nn.Sigmoid
word_dict.word_to_char_ids
c2nl.utils.misc.tens2sen
torch.matmul
torch.log.squeeze
c2nl.modules.char_embedding.CharEmbedding
torch.min
m.view
b.get_current_origin
target.view.scores.gather.view
wq.expand.expand
nltk.stem.PorterStemmer.stem
s.decode
torch.utils.data.sampler.RandomSampler
self.decode
inp.masked_fill.t
c2nl.modules.util_class.Elementwise
self.offset.align.view.scores.gather.view
BleuScorer
rnn_output.contiguous
self.RNNEncoder.super.__init__
i.code_mask.size
line.rstrip
torch.cat.gt
dev_exs.extend
pos_enc.cuda.expand
os.environ.copy
self.tanh
tokenizer.Tokens
wq.expand.view
self.words
self.type_embeddings
target.lower.split
nltk.translate.bleu_score.SmoothingFunction
cov.clone
self.attn.parameters
self.meteor_p.stdin.write
PositionalEncoding
self.layer_weights
c2nl.inputters.dataset.CommentDataset
scores.append
self.register_buffer
size.self.tt.LongTensor.fill_
b.prediction.index_select
self.transformer_c.init_state
w.word_dict.word_to_char_ids.tolist
sent_states.data.index_select
self._build_target_tokens
self.drop
summ_len.cuda.cuda
c2nl.inputters.constants.UNK.align.eq.float
torch.nn.Tanh
self.score.size
kwargs.get
num.decode.split
char_emb.conv.transpose
self.next_ys.append
code_chars.size
torch.nn.Embedding
logging.StreamHandler
torch.nn.Softmax
batch.x.view.transpose
multiplier.cuda.size
uh.expand.expand
self.dropout.size
res.keys
self._regexp.finditer
setuptools.setup
source.size
self.softmax
init_from_scratch
rvar
self.meteor_p.stdout.readline
fill.append
code_mask_rep.cuda.cuda
IOError
self.network.load_state_dict
alignment.contiguous
torch.FloatTensor
warnings.warn
torch.nn.functional.softmax.size
tgt_pad_mask.unsqueeze.unsqueeze
self.relative_positions_embeddings_k
c2nl.inputters.vocabulary.Vocabulary.add
self._convert_word_to_char_ids
torch.cat
bx.b.out.index_fill_
collections.OrderedDict
old_args.keys
self.RNNDecoderBase.super.__init__
lengths.max
self.MultiHeadedAttention.super.__init__
logging.getLogger
i.summ_words.size.i.summ_word_rep.copy_
out.log.squeeze
multiplier.unsqueeze.expand
tqdm.tqdm
length.range_vec.unsqueeze.expand.transpose.transpose
id.hypotheses.split
i.attn.max
i.predictions.lower
os.path.dirname
self.TransformerEncoder.super.__init__
torch.nn.Linear
code_word_rep.cuda.cuda
getattr
tgt.tgt_chars.torch.Tensor.to.unsqueeze.tolist
beam_scores.view.topk
RuntimeError
torch.abs
emb_dims.extend
torch.tensor
print
torch.Tensor
cov.clone.fill_.cov.torch.min.log.sum
length.torch.arange.unsqueeze
self.optimizer.state_dict
self.slice
torch.cuda.is_available
self.score
args.vars.items
self.src_pos_embeddings
torch.exp.size
x_tz_matmul.reshape.permute
c2nl.modules.copy_generator.CopyGeneratorCriterion
c2nl.inputters.constants.UNK.target.ne.float
torch.bmm
self.__tens2sent
self.rnns.parameters
cov.dim
self.transformer_c.count_parameters
copy.copy
decoder.init_decoder
self.calc_score
scores.self.softmax.to
fn
net_loss.mean.backward
self.tanh.size
p.numel
copy_score.data.masked_fill_
generator.forward
self.network.eval
unshape
source_maps.append
self._single_reflen
hyps.append
acc_dec_outs.append
self.next_ys.size
lengths.lengths.device.max_len.torch.arange.type_as.repeat
self._run_forward_pass
rvar.size
b.advance
unicodedata.normalize
c2nl.eval.bleu.bleu_scorer.BleuScorer
c2nl.config.override_model_args
TransformerEncoderLayer
src.size
beam.scores.add_
prec.append
stem
torch.gt
scores.masked_fill.float
torch.exp.div
self.dropout_2
self.entities
self.embedder
tokenize_with_camel_case
self.dropout
out.log.log
torch.ones
torch.log
args.dev_tgt_files.append
split.unique_javadoc_tokens.update
alignment.cuda.cuda
v.cuda
prettytable.PrettyTable.add_row
self.global_scorer.update_global_state
numpy.mean
c2nl.config.add_model_args
perm.x.permute.contiguous.view
self.tanh.view
read_data
encoder_final.unsqueeze.expand
target.view.scores.gather.view.mul
next
memory_bank.size
x.mean
target.view.view
self.crefs.extend
torch.ones_like
w.self.model.tgt_dict.word_to_char_ids.tolist
t.size
rec.append
torch.stack.squeeze
tgt_len.i.tgt_tensor.copy_
numpy.random.seed
white_space_fix
self._bridge
scores.contiguous.size
blank_arr.append
self.VecEmbedding.super.__init__
eval
self.linear_in.view
new_args.keys
self.model.network.eval
inp_chars.torch.Tensor.to
self.decoder.init_decoder_state
logging.FileHandler.setFormatter
char_emb.transpose.transpose
self.sigmoid.expand_as
c2nl.config.get_model_args
c2nl.utils.copy_utils.replace_unknown
torch.nn.utils.rnn.pad_packed_sequence
e.size
self.decoder
c2nl.inputters.constants.PAD.target.ne.float
none.unsqueeze.unsqueeze
c2nl.encoders.transformer.TransformerEncoder
self.dropout_1
i.batch.size
dec_log_probs.append
enc_dec_attn.mean.mean
code_char_rep.cuda.cuda
output.self.layer_weights.squeeze
self.retest
self.Transformer.super.__init__
validate_official
h_t_.view.size
math.exp
self.relu
c2nl.objects.Summary.append_token
model.network.count_encoder_parameters
torch.max.view
self.score_ratio
tokens.append
math.sqrt.size
argparse.ArgumentParser.parse_args
copy_generator.forward
torch.nn.functional.sigmoid
f.read.strip
references.keys
mask.unsqueeze.unsqueeze
h_t_.view.view
reference.split
c2nl.inputters.timer.Timer
self.transformer_d.init_state
c2nl.encoders.rnn_encoder.RNNEncoder
collections.defaultdict.items
c2nl.translator.penalties.PenaltyBuilder
self.length_penalty
_skip
torch.sort
NotImplementedError
self.reinforce.sample
inp.inp_chars.torch.Tensor.to.unsqueeze
code_len.cuda.cuda
self.intermediate
dict.word_to_char_ids
prediction.split
attr.lower.split
json.dumps
b.prediction.index_add_
tgt.squeeze.clone
tgt_size.data.len.torch.zeros.long
self._init_cache
self.src_vocab.add_tokens
ground_truth.normalize_answer.split
src.strip
multiplier.cuda.cuda
abs
cook_refs
states.view
dict.byte
perm.x.permute.contiguous
self._tokens.append
c2nl.decoders.transformer.TransformerDecoder
argparse.ArgumentParser.add_argument_group
beam.global_state.beam.attn.torch.min.sum
bx.b.out.index_select
setuptools.find_packages
self.feed_forward
ml_loss.sum.sum
self.LayerNorm.super.__init__
inputs.size
t.max
join
b.get_hyp
self.prev_ks.append
torch.stack.transpose
torch.nn.functional.softmax.squeeze
w.params.word_to_char_ids.tolist
c2nl.inputters.dataset.CommentDataset.lengths
emb.size
parameters.numel
self.embedder.size
self._validate
beam.scores.clone.fill_
int
_.detach
TAG_TYPE_MAP.get
dec_out.squeeze
s.split
_get_ngrams
self._bridge.append
summ_word_rep.cuda.cuda
words.torch.Tensor.type_as
fill_b.cuda.cuda
a.repeat
tgt_seq.contiguous.ne
nltk.translate.bleu
self.Seq2seq.super.__init__
sys.stderr.write
args.train_tgt_files.append
self.transformer
i.matches.span
tmp.mul.log
max_index.item
c2nl.inputters.utils.load_data
subprocess.call
target.view.ne
tgt.tgt_chars.torch.Tensor.to.unsqueeze
layer
args.train_src_tag_files.append
torch.arange.unsqueeze
self.GlobalAttention.super.__init__
self.src_highway_net
ValueError
self._pen_is_none
align.view.ne
inp.masked_fill.gt
self.CopyGenerator.super.__init__
args.dev_src_files.append
self.network
args.dev_src_tag_files.append
self.network.register_buffer
c2nl.inputters.vocabulary.UnicodeCharsVocabulary
self.fusion_sigmoid
hypothesis_str.replace.replace.replace
i.self.rnns
attn.append
self.CodeTokenizer.super.__init__
torch.utils.data.DataLoader
self.PositionalEncoding.super.__init__
c2nl.inputters.vector.vectorize
torch.nn.utils.clip_grad_norm_
zip
load_words.update
loss_per_token.item.item
self._stat
time.time
scores.view.gather
list
self.ctest.append
summ_chars.size
sequence.unsqueeze
decoder.init_decoder.beam_update
bottle_hidden
self.scores.unsqueeze.expand_as
self.ctest.extend
torch.LongTensor
encoder_final.unsqueeze.unsqueeze
torch.nn.Conv1d
self.cov_penalty
torch.clamp
load_words.add
x.std
subprocess.Popen
kwargs.byte
compute_eval_score
self.make_embedding._modules.values
new_layer_wise_coverage.append
lengths.device.max_len.torch.arange.type_as
bx.b.out.index_add_
ex.size
x.view
torch.nn.ModuleList
isinstance
emb.dim
self.optimizer.state.values
self._coverage_penalty
c2nl.inputters.timer.AverageMeter
args.dataset_weights.keys
torch.utils.data.sampler.SequentialSampler
map
tgt.gt.float
self.optimizer.load_state_dict
sequence.translate
collections.Counter.update
numpy.argsort
torch.nn.CrossEntropyLoss
blank_b.cuda.cuda
beam.global_state.keys
layer.chunk
c2nl.utils.misc.aeq
summary.vectorize
words.unsqueeze.cuda
self.rnn.parameters
self.linear
memory_bank.batch_size.torch.Tensor.type_as.long.fill_.repeat
state.items
examples.append
tgt_seq.contiguous
align.view.view
copy_info.cpu.numpy
scores.contiguous.contiguous
source_tag.split
src_map.size
self.copier
set.add
state.update_state
source.dim
self.opts.get
i.matches.group
self.Embedder.super.__init__
c2nl.inputters.dataset.SortedBatchSampler
self.reset
torch.stack
refmaxcounts.get
parser.register
tgt.tgt_chars.torch.Tensor.to.unsqueeze.to
self.rnns.append
b.prediction.index_fill_
self.dropout.squeeze
self.network.cuda
tgt.size
self.word_to_char_ids
code_mask_rep.repeat.byte
self.make_embedding
linear
tgt_chars.tolist.torch.Tensor.unsqueeze
f.read
regex.compile
self.compatible
c2nl.inputters.timer.Timer.time
self.tt.LongTensor
line.rstrip.split
self._tokens.insert
torch.exp
x.permute.reshape
summ_len.float.ml_loss.div.mean
ex_weights.cuda.cuda
inputs.split
ml_loss.sum.div
len
self.relative_positions_embeddings_v
attr.lower
split.unique_function_tokens.update
self.normalize
eval_accuracies
self.network.parameters
align.view.eq
c2nl.modules.embeddings.Embeddings
self.tgt_pos_embeddings
self.TransformerEncoderLayer.super.__init__
self._single_reflen.append
src.strip.split
c2nl.decoders.rnn_decoder.RNNDecoder
c2nl.inputters.constants.UNK.target.eq.float
self.meteor_p.wait
summ_len.float
dim.head_count.batch_size.x.view.transpose
self.fscore
torch.ones_like.unsqueeze
enumerate
coverage.dim
dec
beam.prev_ks.beam.global_state.index_select.add
c2nl.objects.Summary
tqdm.tqdm.set_description
self.rnn
bx.blank.torch.Tensor.to
self.src_word_embeddings
i.summ_chars.size
future_mask.triu_.view.triu_
self.Highway.super.__init__
c2nl.utils.misc.generate_relative_positions_matrix
collections.Counter
numpy.zeros
decoder.init_decoder.repeat_beam_size_times
code_len.size
memory_bank.batch_size.torch.Tensor.type_as.long.fill_
attn_out.index_select
kwargs.byte.unsqueeze
code_type_rep.cuda.cuda
torch.optim.Adam
pos_enc.cuda.cuda
self.Embeddings.super.__init__
logging.warning
self.tt.FloatTensor
float
token.split
init_from_scratch.init_optimizer
input_dim.layer.bias.data.fill_
parser.add_argument_group
compute_bleu
self.word_lut.weight.data.copy_
dict
self.output
argparse.ArgumentParser.register
self.optimizer.step
self.attention
layer_scores.unsqueeze.output.transpose.torch.matmul.squeeze
q.size
self.network.cpu
c2nl.utils.copy_utils.align
maxcounts.get
str
c2nl.modules.multi_head_attn.MultiHeadedAttention
_insert
c2nl.models.seq2seq.Seq2seq
out.shape.lengths.sequence_mask.unsqueeze
c2nl.utils.misc.generate_relative_positions_matrix.to
new_test.self.retest.compute_score
encoder_final.unsqueeze.size
d.size
unbottle
enc
key.hypotheses.split
Decoder
i.code_chars.size
Code2NaturalLanguage
c2nl.inputters.vocabulary.Vocabulary.normalize
prediction.normalize_answer.split
torch.save
max_len.torch.arange.unsqueeze
range_vec.unsqueeze.expand
attn_copy.data.masked_fill_
enc_dec_attn.mean.dim
cov.clone.fill_
self.context_attn
e.view
self.transformer.count_parameters
conv
sys.path.append
init_from_scratch.parallelize
self.src_char_embeddings
self.copy_generator
self._check_args
self.linear_query
copy.copy.pop
logging.getLogger.setLevel
vars
self.criterion
beam.global_state.index_select
self.tgt_highway_net
inp.masked_fill.masked_fill
sent_states.data.copy_
vocab_sizes.extend
inputs.view
torch.is_tensor
torch.nn.DataParallel
scores.contiguous.view
numpy.random.shuffle
reqs.strip.split
c2nl.modules.copy_generator.CopyGenerator
self.pe
snake_case_tokenized.extend
hyp.append
cov.size
main.model.Code2NaturalLanguage.load_checkpoint
module
gts.keys
decoder.decode
str.maketrans
torch.cos
self.src_vocab.remove
code_len.repeat
self.linear_context
c2nl.eval.bleu.bleu_scorer.BleuScorer.compute_score
h_s.contiguous.view
c2nl.modules.global_attention.GlobalAttention
main
train
source_map.cuda.cuda
scores.self.softmax.to.squeeze
self.key
multiplier.cuda.unsqueeze
dict.size
self.tok2ind.keys
torch.nn.functional.softmax
r.split
x.transpose
wt.item
self.UnicodeCharsVocabulary.super.__init__
mask.float.squeeze
c2nl.modules.util_class.LayerNorm
init_from_scratch.cuda
alignments.append
exp_score.div.sum
self.linear_out
self.v
torch.zeros
self.tgt_word_embeddings
args.train_src_files.append
self.init_decoder
set
sorted
self.sigmoid
self.network.embedder.tgt_word_embeddings.fix_word_lut
hasattr
self.embedder.squeeze
feat.squeeze
text.lower
b.sort_finished
self.get_hyp
torch.cat.append
tokenize_with_snake_case
count.batch.x.view.transpose.repeat.transpose.contiguous.view
scores.self.softmax.to.chunk
tgt_seq.contiguous.view
i.code_type.size
torch.nn.Sequential
state.state.sequence_mask.unsqueeze
i.code_chars.size.i.code_char_rep.copy_
self.TransformerDecoder.super.__init__
eval_score
load_words
model.network.layer_wise_parameters
torch.matmul.reshape
sum
summ_char_rep.cuda.cuda
Encoder
concat_c.self.linear_out.view
min
self._run_forward_ml
inp.t.contiguous.view
self._initialize_bridge
network.state_dict
self.decoder.decode
open.write
num.decode.split.decode
os.path.abspath
all
x.transpose.contiguous
parser.add_argument_group.add_argument
int.copy_info.cpu.numpy.astype.tolist
self.all_scores.append
self.copy_attn
translations.append
torch.nn.functional.relu
train_exs.extend
self.encoder
scores.view.size
target.view.eq
c2nl.eval.bleu.corpus_bleu
torch.nn.functional.softmax.unsqueeze
refs.append
my_lcs
count_file_lines
h_s.size
any
self.copy_attn.parameters
collections.Counter.most_common
m.group
dim.wquh.view.self.v.view
int.copy_info.cpu.numpy.astype.tolist.cpu
torch.cat.squeeze
self._length_penalty
c2nl.inputters.utils.build_word_and_char_dict
tgt_chars.torch.Tensor.to
out.log.size
torch.mul
i.code_type.size.i.code_type_rep.copy_
re.finditer
c2nl.objects.Summary.prepend_token
self.ratio
logging.getLogger.addHandler
self.layer.parameters
target.lower
self.linear_in
text.split
self.network.train
lower
c2nl.modules.position_ffn.PositionwiseFeedForward
collections.Counter.values
memory_bank.batch_size.torch.Tensor.type_as.long
attention_scores.append
torch.arange
hypotheses.keys
add_train_args
threading.Lock
tgt_seq.cuda.cuda
logging.getLogger.warning
logging.getLogger.info
torch.stack.append
fill_arr.append
self.parameters
net_loss.mean.item
ml_loss.sum.mul
torch.nn.Dropout
init_from_scratch.update
copy_info.cpu.numpy.astype
self.add
PositionalEncoding.unsqueeze
filter_fn
code_mask_rep.repeat.repeat
num.format.rstrip
max
code.vectorize
Translation
count.batch.x.view.transpose.repeat.transpose.contiguous
torch.cuda.device_count
states.size
load_words.append
torch.manual_seed
camel_case_tokenized.extend
b.get_current_state
h_s.contiguous
bx.fill.torch.Tensor.to
num.format.rstrip.rstrip
data.append
cov.clone.fill_.cov.torch.min.log
align.view.size
self.tok2ind.get
f
Code2NaturalLanguage.init_optimizer
exp_score.div.div
beam.scores.clone
attn.squeeze
self.finished.sort
self.TransformerDecoderLayer.super.__init__
ml_loss.sum.view
self.Encoder.super.__init__
numpy.random.random
isinstance.item
ml_loss.sum.mean
self.transformer_c
self.compute_score
_fix_enc_hidden
w.item
self.global_scorer.update_score
i.code_words.size.i.code_word_rep.copy_
source.split
tgt.strip.split
self.scores.unsqueeze
hypothesis_str.replace.replace
idx.start.self.slice.untokenize
multiplier.unsqueeze.unsqueeze.expand
self.network.embedder.src_word_embeddings.fix_word_lut
self.network.mean
self.embedding
logging.FileHandler
self.bridge.parameters
self.encoder.count_parameters
inp.masked_fill.tolist
self.crefs.append
range.append
inp.data.eq
self.word_vec_size.vocabulary.len.torch.FloatTensor.zero_
normalize_answer
hidden.size
c2nl.inputters.constants.UNK.align.ne.float
dict.values
beam.scores.sub_
embedder
hasattr.copy_attn
multiplier.unsqueeze.unsqueeze
c2nl.eval.meteor.Meteor.compute_score
self.form_src_vocab
tmp.mul.mul
self.softmax.size
tgt.tgt_chars.torch.Tensor.to.unsqueeze.repeat
self.linear_copy
logging.Formatter
sent.size
main.model.Code2NaturalLanguage.load
model.network.count_parameters
self.transformer_d.count_parameters
cov.clone.fill_.cov.torch.max.sum
scores.view.view
torch.optim.SGD
torch.load
torch.cat.size
e.data.repeat
batch_size.tgt_words.expand.unsqueeze
self.meteor_p.stdout.readline.strip
inp.t.contiguous
groups.append
self.query
self.generator
self.fusion_gate.squeeze
math.log
prettytable.PrettyTable
self.global_scorer.score
argparse.ArgumentParser
self._activation
future_mask.triu_.view
remove_punc
src_vocabs.append
beam.attn.sum
c2nl.models.transformer.Transformer
shape.transpose
atexit.unregister
k.bleu_list.append
memory_bank.size.memory_bank.size.emb.size.memory_bank.size.torch.zeros.type_as
self.PositionwiseFeedForward.super.__init__
max_len.torch.arange.unsqueeze.float
self.optimizer.zero_grad
tgt_words.data.eq
count.batch.x.view.transpose.repeat.transpose
x.transpose.contiguous.view
argparse.Namespace
attn.size
length.range_vec.unsqueeze.expand.transpose
math.sqrt
c2nl.translator.beam.Beam
line.strip
self.decoder.load_state_dict
uh.expand.view
range
scores.masked_fill.masked_fill
Embedder
c2nl.eval.meteor.Meteor
self._from_beam
init_from_scratch.checkpoint
atexit.register
os.path.join
params.byte.unsqueeze
self.meteor_p.stdout.readline.dec.strip
tgt_words.tgt_chars.to.unsqueeze
c2nl.inputters.vocabulary.Vocabulary
self.fusion_gate
self.model.tgt_dict.word_to_char_ids
word_probs.size
self.TEXT.t.lower
self.layer_norm_2
c2nl.decoders.state.RNNDecoderState
precook
candidate.split
model.network.count_decoder_parameters
main.model.Code2NaturalLanguage
words.torch.Tensor.type_as.unsqueeze
v.lower
rnn_output.size
perm.x.permute.contiguous.permute
self.meteor_p.kill
c2nl.modules.highway.Highway
uuid.uuid4
self.decoder.count_parameters
self.DecoderBase.super.__init__
self.layer_norm
s.encode
lengths.numel
tgt.strip
encoder
blank.append
words.unsqueeze.expand
torch.sin
batch_size.torch.Tensor.type_as
self.__generate_sequence
batch_size.lengths.lengths.device.max_len.torch.arange.type_as.repeat.lt
representations.append
c2nl.eval.rouge.Rouge
sentence.split
self.dropout.split
shape
self.attn.append
human_format
numpy.array
random.choice
self.ind2tok.items
self.ind2tok.get
self.Elementwise.super.__init__
self.CharEmbedding.super.__init__
torch.nn.Parameter
sequence.translate.strip
source.c.torch.cat.view
subprocess.check_output
word_rep.size.torch.arange.type
parser.add_argument
lengths.unsqueeze
tgt_pad_mask.unsqueeze.size
process_examples
super
collections.defaultdict
init_from_scratch.predict
open.close
self.finished.append
self.transformer.init_state
c2nl.objects.Code
code_mask_rep.byte.unsqueeze
nltk.stem.PorterStemmer
words.torch.Tensor.type_as.append
self.cook_append
self.meteor_p.stdin.flush
TransformerDecoderLayer
round
open
var
z.transpose
word.encode
logging.StreamHandler.setFormatter
perm.x.permute.contiguous.size
self.transformer_d
self.value
self._from_beam.append
std_attentions.append
time.strftime
torch.no_grad
self.decoder.init_decoder
self.copier.count_parameters
self.copier.init_decoder_state
h_s.transpose
init_from_scratch.save
format
type
batch.x.view.transpose.repeat
align.data.masked_fill_
cook_test
c2nl.eval.rouge.Rouge.compute_score
iter
rnn_type.nn.getattr
torch.cuda.manual_seed
self.tgt_char_embeddings
shape.size
set_defaults
i.summ_words.size
torch.nn.utils.rnn.pack_padded_sequence
c2nl.utils.copy_utils.make_src_map
self.make_embedding.add_module
self.close
c2nl.utils.misc.sequence_mask
torch.max
i.code_mask.size.i.code_mask_rep.copy_
torch.load.size
c2nl.utils.misc.count_file_lines
torch.tril
align_unk.tmp.mul.mul
i.summ_chars.size.i.summ_char_rep.copy_
psutil.virtual_memory
c2nl.utils.copy_utils.collapse_copy_scores
self.attn
self.shutdown
self.embedder.gt
tuple
self.data.t.self.TEXT_WS.t.join.strip
lengths.size

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.