torch/torch7

What is the main problem here?

Naveed14 opened this issue · 0 comments

When I run my code, it runs for a moment and then gives me this error:

```
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at C:\w\1\s\windows\pytorch\aten\src\THNN/generic/ClassNLLCriterion.c:97
```
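This assertion means `nll_loss` (used internally by `CrossEntropyLoss`) received a target class index outside the valid range `[0, n_classes)`. Since the run below uses `polarities_dim: 3`, every label must be 0, 1, or 2; sentiment datasets are often coded as -1/0/1, which would trip exactly this check. A minimal sketch of verifying and remapping the targets (the label values here are assumptions, not taken from the actual dataset):

```python
import torch
import torch.nn.functional as F

n_classes = 3  # matches polarities_dim: 3 in the log below

# Hypothetical raw labels: -1/0/1 coding is common for Twitter sentiment data.
targets = torch.tensor([-1, 0, 1, 1, -1])

# A label of -1 violates cur_target >= 0, which triggers the assertion.
# Shifting by +1 maps -1/0/1 -> 0/1/2, the range NLLLoss expects.
remapped = targets + 1

# Sanity check before computing the loss: all indices must be in [0, n_classes).
assert remapped.min().item() >= 0 and remapped.max().item() < n_classes

logits = torch.randn(len(remapped), n_classes)
loss = F.cross_entropy(logits, remapped)  # no assertion failure with valid targets
print(remapped.tolist())
```

If the labels are already 0-based, the same range check will instead reveal a stray out-of-range value (for example, a parsing bug producing a label equal to `n_classes`).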

```
loading vocabulary file bert-base-uncased-vocab.txt
loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at C:\Users\redro.pytorch_pretrained_bert\9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
extracting archive file C:\Users\redro.pytorch_pretrained_bert\9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir C:\Users\redro\AppData\Local\Temp\tmpbysl_867
Model config {
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 2,
  "vocab_size": 30522
}

n_trainable_params: 109484547, n_nontrainable_params: 0

training arguments:

model_name: bert_spc
dataset: twitter
optimizer: <class 'torch.optim.adam.Adam'>
initializer: <function xavier_uniform_ at 0x0000013E80479E18>
learning_rate: 2e-05
dropout: 0.1
l2reg: 0.01
num_epoch: 10
batch_size: 64
log_step: 10
embed_dim: 300
hidden_dim: 300
bert_dim: 768
pretrained_bert_name: bert-base-uncased
max_seq_len: 80
polarities_dim: 3
hops: 3
device: cpu
seed: None
cross_val_fold: 10
model_class: <class 'models.bert_spc.BERT_SPC'>
dataset_file: {'train': './datasets/acl-14-short-data/train.raw', 'test': './datasets/acl-14-short-data/test.raw'}
inputs_cols: ['text_bert_indices', 'bert_segments_ids']
fold : 0

epoch: 0
```