RUCAIBox/TextBox

Fast-Bleu part

Closed this issue · 4 comments

hmqt commented

Hi,
I ran my dataset with textbox-0.2.1 and everything worked, but when the program reached the evaluation stage it failed without giving me any results, even though fast-bleu is correctly installed at its latest version.
This is my output; please help me:
(base) hana@hana:~/Documents/TextBox-0.2.1$ python3 run.py
[nltk_data] Downloading package punkt to /home/hana/nltk_data...
[nltk_data] Package punkt is already up-to-date!
23 Apr 23:30 INFO
General Hyper Parameters:
gpu_id=0
use_gpu=True
DDP=False
seed=2020
state=INFO
reproducibility=True
data_path=dataset/novels
checkpoint_dir=saved/
generated_text_dir=generated/

Training Hyper Parameters:
epochs=50
train_batch_size=16
learner=adam
learning_rate=0.001
eval_step=1
stopping_step=2
grad_clip=0.1
init_lr=0.001
warmup_steps=7
g_pretraining_epochs=80
d_pretraining_epochs=50
d_sample_num=10000
d_sample_training_epochs=3
adversarail_training_epochs=80
adversarail_d_epochs=5

Evaluation Hyper Parameters:
metrics=['bleu']
n_grams=[1, 2, 3, 4, 5]
eval_batch_size=32

Model Hyper Parameters:
generator_embedding_size=32
discriminator_embedding_size=64
hidden_size=32
dropout_rate=0.25
l2_reg_lambda=0.2
filter_sizes=[2, 3, 4]
filter_nums=[200, 200, 200]
Monte_Carlo_num=16

Dataset Hyper Parameters:
train_batch_size=16
learning_rate=0.001
eval_batch_size=32
vocab_size=2000
seq_len=120
task_type=unconditional
init_lr=0.001
warmup_steps=7
PLM_MODELS=BERT
pretrained_model_path=BERT
language=arabic
share_vocab=True
post_processing=paraphrase
bleu_type=multi-bleu
rouge_type=py-rouge
corpus_meteor=False
metrics_for_best_model=['bleu', 'rouge-1', 'rouge-2', 'rouge-l', 'meteor']
generation_kwargs={'num_beams': 1, 'do_sample': True, 'top_p': 0.9, 'temperature': 0.7}
accumulation_steps=12
prefix_prompt=Paraphrase

23 Apr 23:30 INFO Loading data from restored
23 Apr 23:30 INFO Vocab size: source 15000, target 15000
23 Apr 23:30 INFO train: 19442 cases, valid: 22672 cases, test: 22672 cases

23 Apr 23:30 INFO Build [unconditional] DataLoader for [train]
23 Apr 23:30 INFO batch_size = [16], shuffle = [True], drop_last = [True]

23 Apr 23:30 INFO Build [unconditional] DataLoader for [valid]
23 Apr 23:30 INFO batch_size = [16], shuffle = [True], drop_last = [True]

23 Apr 23:30 INFO Build [unconditional] DataLoader for [test]
23 Apr 23:30 INFO batch_size = [32], shuffle = [False], drop_last = [False]

23 Apr 23:30 INFO SeqGAN(
(generator): SeqGANGenerator(
(LSTM): LSTM(32, 32)
(word_embedding): Embedding(15000, 32, padding_idx=0)
(vocab_projection): Linear(in_features=32, out_features=15000, bias=True)
)
(discriminator): SeqGANDiscriminator(
(word_embedding): Embedding(15000, 64, padding_idx=0)
(dropout): Dropout(p=0.25, inplace=False)
(filters): ModuleList(
(0): Sequential(
(0): Conv2d(1, 200, kernel_size=(2, 64), stride=(1, 1))
(1): ReLU()
(2): MaxPool2d(kernel_size=(121, 1), stride=(121, 1), padding=0, dilation=1, ceil_mode=False)
)
(1): Sequential(
(0): Conv2d(1, 200, kernel_size=(3, 64), stride=(1, 1))
(1): ReLU()
(2): MaxPool2d(kernel_size=(120, 1), stride=(120, 1), padding=0, dilation=1, ceil_mode=False)
)
(2): Sequential(
(0): Conv2d(1, 200, kernel_size=(4, 64), stride=(1, 1))
(1): ReLU()
(2): MaxPool2d(kernel_size=(119, 1), stride=(119, 1), padding=0, dilation=1, ceil_mode=False)
)
)
(W_T): Linear(in_features=600, out_features=600, bias=True)
(W_H): Linear(in_features=600, out_features=600, bias=False)
(W_O): Linear(in_features=600, out_features=1, bias=True)
)
)
Trainable parameters: 2780449
23 Apr 23:30 INFO Note: NumExpr detected 16 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
23 Apr 23:30 INFO NumExpr defaulting to 8 threads.
23 Apr 23:30 INFO Using none schedule
23 Apr 23:30 INFO Start generator pretraining...
23 Apr 23:31 INFO epoch 0 generator pretraining [time: 32.93s, train loss: 6.3129]
23 Apr 23:31 INFO epoch 1 generator pretraining [time: 32.06s, train loss: 5.7754]
23 Apr 23:32 INFO epoch 2 generator pretraining [time: 33.90s, train loss: 5.6460]
23 Apr 23:32 INFO epoch 3 generator pretraining [time: 32.13s, train loss: 5.5229]
23 Apr 23:33 INFO epoch 4 generator pretraining [time: 32.17s, train loss: 5.4046]
23 Apr 23:33 INFO epoch 5 generator pretraining [time: 32.34s, train loss: 5.3003]
23 Apr 23:34 INFO epoch 6 generator pretraining [time: 31.86s, train loss: 5.2042]
23 Apr 23:34 INFO epoch 7 generator pretraining [time: 30.01s, train loss: 5.1136]
23 Apr 23:35 INFO epoch 8 generator pretraining [time: 30.31s, train loss: 5.0284]
23 Apr 23:35 INFO epoch 9 generator pretraining [time: 30.33s, train loss: 4.9484]
23 Apr 23:36 INFO epoch 10 generator pretraining [time: 33.51s, train loss: 4.8735]
23 Apr 23:36 INFO epoch 11 generator pretraining [time: 30.21s, train loss: 4.8025]
23 Apr 23:37 INFO epoch 12 generator pretraining [time: 30.47s, train loss: 4.7364]
23 Apr 23:37 INFO epoch 13 generator pretraining [time: 30.25s, train loss: 4.6738]
23 Apr 23:38 INFO epoch 14 generator pretraining [time: 30.19s, train loss: 4.6146]
23 Apr 23:38 INFO epoch 15 generator pretraining [time: 29.83s, train loss: 4.5588]
23 Apr 23:39 INFO epoch 16 generator pretraining [time: 31.50s, train loss: 4.5062]
23 Apr 23:39 INFO epoch 17 generator pretraining [time: 29.76s, train loss: 4.4564]
23 Apr 23:40 INFO epoch 18 generator pretraining [time: 30.25s, train loss: 4.4078]
23 Apr 23:40 INFO epoch 19 generator pretraining [time: 29.89s, train loss: 4.3629]
23 Apr 23:41 INFO epoch 20 generator pretraining [time: 30.13s, train loss: 4.3200]
23 Apr 23:41 INFO epoch 21 generator pretraining [time: 30.00s, train loss: 4.2790]
23 Apr 23:42 INFO epoch 22 generator pretraining [time: 30.06s, train loss: 4.2398]
23 Apr 23:42 INFO epoch 23 generator pretraining [time: 30.11s, train loss: 4.2027]
23 Apr 23:43 INFO epoch 24 generator pretraining [time: 30.28s, train loss: 4.1673]
23 Apr 23:43 INFO epoch 25 generator pretraining [time: 30.02s, train loss: 4.1336]
23 Apr 23:44 INFO epoch 26 generator pretraining [time: 30.44s, train loss: 4.1014]
23 Apr 23:44 INFO epoch 27 generator pretraining [time: 30.24s, train loss: 4.0697]
23 Apr 23:45 INFO epoch 28 generator pretraining [time: 30.09s, train loss: 4.0402]
23 Apr 23:45 INFO epoch 29 generator pretraining [time: 30.35s, train loss: 4.0123]
23 Apr 23:46 INFO epoch 30 generator pretraining [time: 32.85s, train loss: 3.9850]
23 Apr 23:46 INFO epoch 31 generator pretraining [time: 29.72s, train loss: 3.9586]
23 Apr 23:47 INFO epoch 32 generator pretraining [time: 30.45s, train loss: 3.9338]
23 Apr 23:47 INFO epoch 33 generator pretraining [time: 30.07s, train loss: 3.9099]
23 Apr 23:48 INFO epoch 34 generator pretraining [time: 30.09s, train loss: 3.8866]
23 Apr 23:48 INFO epoch 35 generator pretraining [time: 29.86s, train loss: 3.8643]
23 Apr 23:49 INFO epoch 36 generator pretraining [time: 30.21s, train loss: 3.8430]
23 Apr 23:49 INFO epoch 37 generator pretraining [time: 30.01s, train loss: 3.8221]
23 Apr 23:50 INFO epoch 38 generator pretraining [time: 32.70s, train loss: 3.8027]
23 Apr 23:51 INFO epoch 39 generator pretraining [time: 30.43s, train loss: 3.7827]
23 Apr 23:51 INFO epoch 40 generator pretraining [time: 31.53s, train loss: 3.7643]
23 Apr 23:52 INFO epoch 41 generator pretraining [time: 30.56s, train loss: 3.7468]
23 Apr 23:52 INFO epoch 42 generator pretraining [time: 29.92s, train loss: 3.7296]
23 Apr 23:53 INFO epoch 43 generator pretraining [time: 30.06s, train loss: 3.7125]
23 Apr 23:53 INFO epoch 44 generator pretraining [time: 29.96s, train loss: 3.6962]
23 Apr 23:54 INFO epoch 45 generator pretraining [time: 30.26s, train loss: 3.6807]
23 Apr 23:54 INFO epoch 46 generator pretraining [time: 29.90s, train loss: 3.6653]
23 Apr 23:55 INFO epoch 47 generator pretraining [time: 30.37s, train loss: 3.6506]
23 Apr 23:55 INFO epoch 48 generator pretraining [time: 30.20s, train loss: 3.6367]
23 Apr 23:56 INFO epoch 49 generator pretraining [time: 30.35s, train loss: 3.6227]
23 Apr 23:56 INFO epoch 50 generator pretraining [time: 32.48s, train loss: 3.6096]
23 Apr 23:57 INFO epoch 51 generator pretraining [time: 30.20s, train loss: 3.5961]
23 Apr 23:57 INFO epoch 52 generator pretraining [time: 29.87s, train loss: 3.5834]
23 Apr 23:58 INFO epoch 53 generator pretraining [time: 30.26s, train loss: 3.5710]
23 Apr 23:58 INFO epoch 54 generator pretraining [time: 30.08s, train loss: 3.5590]
23 Apr 23:59 INFO epoch 55 generator pretraining [time: 30.19s, train loss: 3.5473]
23 Apr 23:59 INFO epoch 56 generator pretraining [time: 30.20s, train loss: 3.5359]
24 Apr 00:00 INFO epoch 57 generator pretraining [time: 30.22s, train loss: 3.5245]
24 Apr 00:00 INFO epoch 58 generator pretraining [time: 29.89s, train loss: 3.5140]
24 Apr 00:01 INFO epoch 59 generator pretraining [time: 36.90s, train loss: 3.5033]
24 Apr 00:01 INFO epoch 60 generator pretraining [time: 29.74s, train loss: 3.4929]
24 Apr 00:02 INFO epoch 61 generator pretraining [time: 29.75s, train loss: 3.4833]
24 Apr 00:02 INFO epoch 62 generator pretraining [time: 33.26s, train loss: 3.4737]
24 Apr 00:03 INFO epoch 63 generator pretraining [time: 32.22s, train loss: 3.4638]
24 Apr 00:03 INFO epoch 64 generator pretraining [time: 30.12s, train loss: 3.4548]
24 Apr 00:04 INFO epoch 65 generator pretraining [time: 30.26s, train loss: 3.4458]
24 Apr 00:04 INFO epoch 66 generator pretraining [time: 30.07s, train loss: 3.4366]
24 Apr 00:05 INFO epoch 67 generator pretraining [time: 30.35s, train loss: 3.4283]
24 Apr 00:05 INFO epoch 68 generator pretraining [time: 30.14s, train loss: 3.4200]
24 Apr 00:06 INFO epoch 69 generator pretraining [time: 33.77s, train loss: 3.4114]
24 Apr 00:06 INFO epoch 70 generator pretraining [time: 30.04s, train loss: 3.4040]
24 Apr 00:07 INFO epoch 71 generator pretraining [time: 30.27s, train loss: 3.3961]
24 Apr 00:07 INFO epoch 72 generator pretraining [time: 29.95s, train loss: 3.3876]
24 Apr 00:08 INFO epoch 73 generator pretraining [time: 30.14s, train loss: 3.3804]
24 Apr 00:08 INFO epoch 74 generator pretraining [time: 30.03s, train loss: 3.3733]
24 Apr 00:09 INFO epoch 75 generator pretraining [time: 30.09s, train loss: 3.3660]
24 Apr 00:09 INFO epoch 76 generator pretraining [time: 30.01s, train loss: 3.3594]
24 Apr 00:10 INFO epoch 77 generator pretraining [time: 30.30s, train loss: 3.3521]
24 Apr 00:10 INFO epoch 78 generator pretraining [time: 30.12s, train loss: 3.3457]
24 Apr 00:11 INFO epoch 79 generator pretraining [time: 30.19s, train loss: 3.3387]
24 Apr 00:11 INFO End generator pretraining...
24 Apr 00:11 INFO Start discriminator pretraining...
24 Apr 00:19 INFO epoch 0 discriminator pretraining [time: 495.68s, train loss: 0.6866]
24 Apr 00:27 INFO epoch 1 discriminator pretraining [time: 496.04s, train loss: 0.6707]
24 Apr 00:36 INFO epoch 2 discriminator pretraining [time: 493.29s, train loss: 0.6301]
24 Apr 00:44 INFO epoch 3 discriminator pretraining [time: 494.74s, train loss: 0.5829]
24 Apr 00:52 INFO epoch 4 discriminator pretraining [time: 498.42s, train loss: 0.5495]
24 Apr 01:00 INFO epoch 5 discriminator pretraining [time: 493.95s, train loss: 0.5175]
24 Apr 01:09 INFO epoch 6 discriminator pretraining [time: 512.57s, train loss: 0.4932]
24 Apr 01:17 INFO epoch 7 discriminator pretraining [time: 498.38s, train loss: 0.4561]
24 Apr 01:26 INFO epoch 8 discriminator pretraining [time: 495.72s, train loss: 0.4363]
24 Apr 01:34 INFO epoch 9 discriminator pretraining [time: 500.41s, train loss: 0.4164]
24 Apr 01:42 INFO epoch 10 discriminator pretraining [time: 501.15s, train loss: 0.4002]
24 Apr 01:51 INFO epoch 11 discriminator pretraining [time: 505.71s, train loss: 0.3883]
24 Apr 01:59 INFO epoch 12 discriminator pretraining [time: 500.09s, train loss: 0.3690]
24 Apr 02:07 INFO epoch 13 discriminator pretraining [time: 505.64s, train loss: 0.3607]
24 Apr 02:16 INFO epoch 14 discriminator pretraining [time: 497.66s, train loss: 0.3499]
24 Apr 02:24 INFO epoch 15 discriminator pretraining [time: 503.58s, train loss: 0.3353]
24 Apr 02:33 INFO epoch 16 discriminator pretraining [time: 515.09s, train loss: 0.3285]
24 Apr 02:41 INFO epoch 17 discriminator pretraining [time: 500.26s, train loss: 0.3127]
24 Apr 02:49 INFO epoch 18 discriminator pretraining [time: 499.37s, train loss: 0.3115]
24 Apr 02:58 INFO epoch 19 discriminator pretraining [time: 497.48s, train loss: 0.3039]
24 Apr 03:06 INFO epoch 20 discriminator pretraining [time: 502.93s, train loss: 0.3009]
24 Apr 03:14 INFO epoch 21 discriminator pretraining [time: 499.27s, train loss: 0.2936]
24 Apr 03:23 INFO epoch 22 discriminator pretraining [time: 521.81s, train loss: 0.2840]
24 Apr 03:31 INFO epoch 23 discriminator pretraining [time: 498.76s, train loss: 0.2800]
24 Apr 03:40 INFO epoch 24 discriminator pretraining [time: 500.83s, train loss: 0.2803]
24 Apr 03:48 INFO epoch 25 discriminator pretraining [time: 502.41s, train loss: 0.2763]
24 Apr 03:56 INFO epoch 26 discriminator pretraining [time: 497.66s, train loss: 0.2654]
24 Apr 04:05 INFO epoch 27 discriminator pretraining [time: 505.04s, train loss: 0.2640]
24 Apr 04:13 INFO epoch 28 discriminator pretraining [time: 498.60s, train loss: 0.2636]
24 Apr 04:22 INFO epoch 29 discriminator pretraining [time: 514.90s, train loss: 0.2563]
24 Apr 04:30 INFO epoch 30 discriminator pretraining [time: 498.40s, train loss: 0.2584]
24 Apr 04:38 INFO epoch 31 discriminator pretraining [time: 502.70s, train loss: 0.2550]
24 Apr 04:47 INFO epoch 32 discriminator pretraining [time: 498.53s, train loss: 0.2501]
24 Apr 04:55 INFO epoch 33 discriminator pretraining [time: 499.98s, train loss: 0.2434]
24 Apr 05:03 INFO epoch 34 discriminator pretraining [time: 503.62s, train loss: 0.2459]
24 Apr 05:12 INFO epoch 35 discriminator pretraining [time: 500.41s, train loss: 0.2423]
24 Apr 05:20 INFO epoch 36 discriminator pretraining [time: 504.69s, train loss: 0.2367]
24 Apr 05:29 INFO epoch 37 discriminator pretraining [time: 501.51s, train loss: 0.2423]
24 Apr 05:37 INFO epoch 38 discriminator pretraining [time: 501.22s, train loss: 0.2367]
24 Apr 05:45 INFO epoch 39 discriminator pretraining [time: 499.42s, train loss: 0.2335]
24 Apr 05:54 INFO epoch 40 discriminator pretraining [time: 503.93s, train loss: 0.2347]
24 Apr 06:02 INFO epoch 41 discriminator pretraining [time: 504.25s, train loss: 0.2340]
24 Apr 06:10 INFO epoch 42 discriminator pretraining [time: 506.19s, train loss: 0.2310]
24 Apr 06:19 INFO epoch 43 discriminator pretraining [time: 509.51s, train loss: 0.2329]
24 Apr 06:27 INFO epoch 44 discriminator pretraining [time: 507.49s, train loss: 0.2297]
24 Apr 06:36 INFO epoch 45 discriminator pretraining [time: 502.08s, train loss: 0.2324]
24 Apr 06:44 INFO epoch 46 discriminator pretraining [time: 500.93s, train loss: 0.2229]
24 Apr 06:53 INFO epoch 47 discriminator pretraining [time: 505.86s, train loss: 0.2290]
24 Apr 07:01 INFO epoch 48 discriminator pretraining [time: 502.42s, train loss: 0.2263]
24 Apr 07:09 INFO epoch 49 discriminator pretraining [time: 506.07s, train loss: 0.2226]
24 Apr 07:09 INFO End discriminator pretraining...
24 Apr 07:09 INFO Start adversarial training...
24 Apr 08:00 INFO epoch 0 training [time: 3063.69s, train loss: 0.0089]
24 Apr 08:52 INFO epoch 1 training [time: 3073.89s, train loss: 0.3042]
24 Apr 09:43 INFO epoch 2 training [time: 3076.13s, train loss: 0.0619]
24 Apr 10:34 INFO epoch 3 training [time: 3072.97s, train loss: 0.1491]
24 Apr 11:25 INFO epoch 4 training [time: 3075.57s, train loss: 0.0521]
24 Apr 12:17 INFO epoch 5 training [time: 3085.51s, train loss: 0.1268]
24 Apr 13:08 INFO epoch 6 training [time: 3079.47s, train loss: 0.0609]
24 Apr 13:59 INFO epoch 7 training [time: 3064.64s, train loss: 0.2090]
24 Apr 14:50 INFO epoch 8 training [time: 3069.69s, train loss: 0.1598]
24 Apr 15:42 INFO epoch 9 training [time: 3073.54s, train loss: 0.2071]
24 Apr 16:33 INFO epoch 10 training [time: 3072.00s, train loss: 0.0371]
24 Apr 17:24 INFO epoch 11 training [time: 3072.05s, train loss: 0.4547]
24 Apr 18:15 INFO epoch 12 training [time: 3072.16s, train loss: 0.1465]
24 Apr 19:06 INFO epoch 13 training [time: 3066.54s, train loss: 0.0066]
24 Apr 19:58 INFO epoch 14 training [time: 3071.99s, train loss: 0.0041]
24 Apr 20:49 INFO epoch 15 training [time: 3077.03s, train loss: 0.0046]
24 Apr 21:40 INFO epoch 16 training [time: 3095.21s, train loss: 0.1002]
24 Apr 22:34 INFO epoch 17 training [time: 3205.07s, train loss: 0.0871]
24 Apr 23:25 INFO epoch 18 training [time: 3080.23s, train loss: 0.0078]
25 Apr 00:17 INFO epoch 19 training [time: 3080.21s, train loss: 0.6846]
25 Apr 01:08 INFO epoch 20 training [time: 3082.79s, train loss: 0.0027]
25 Apr 01:59 INFO epoch 21 training [time: 3077.08s, train loss: 0.3228]
25 Apr 02:52 INFO epoch 22 training [time: 3153.75s, train loss: 0.0066]
25 Apr 03:46 INFO epoch 23 training [time: 3262.84s, train loss: 0.0712]
25 Apr 04:39 INFO epoch 24 training [time: 3145.26s, train loss: 0.1236]
25 Apr 05:30 INFO epoch 25 training [time: 3091.30s, train loss: 0.9160]
25 Apr 06:22 INFO epoch 26 training [time: 3095.30s, train loss: 0.0938]
25 Apr 07:13 INFO epoch 27 training [time: 3085.12s, train loss: 0.1692]
25 Apr 08:05 INFO epoch 28 training [time: 3088.20s, train loss: 0.0587]
25 Apr 08:56 INFO epoch 29 training [time: 3091.89s, train loss: 0.0629]
25 Apr 09:48 INFO epoch 30 training [time: 3091.01s, train loss: 0.0728]
25 Apr 10:39 INFO epoch 31 training [time: 3101.84s, train loss: 0.0694]
25 Apr 11:31 INFO epoch 32 training [time: 3111.74s, train loss: 0.0689]
25 Apr 12:23 INFO epoch 33 training [time: 3101.66s, train loss: 0.0073]
25 Apr 13:15 INFO epoch 34 training [time: 3101.68s, train loss: 0.2278]
25 Apr 14:06 INFO epoch 35 training [time: 3092.17s, train loss: 0.0076]
25 Apr 14:58 INFO epoch 36 training [time: 3092.15s, train loss: 0.0000]
25 Apr 15:49 INFO epoch 37 training [time: 3100.31s, train loss: 0.0003]
25 Apr 16:41 INFO epoch 38 training [time: 3103.53s, train loss: 0.0079]
25 Apr 17:33 INFO epoch 39 training [time: 3096.45s, train loss: 0.0758]
25 Apr 18:24 INFO epoch 40 training [time: 3098.27s, train loss: 0.1573]
25 Apr 19:16 INFO epoch 41 training [time: 3095.96s, train loss: 0.0077]
25 Apr 20:07 INFO epoch 42 training [time: 3092.62s, train loss: 0.0821]
25 Apr 20:59 INFO epoch 43 training [time: 3093.58s, train loss: 0.0031]
25 Apr 21:51 INFO epoch 44 training [time: 3145.17s, train loss: 0.0711]
25 Apr 22:44 INFO epoch 45 training [time: 3131.35s, train loss: 0.0630]
25 Apr 23:36 INFO epoch 46 training [time: 3122.37s, train loss: 0.0564]
26 Apr 00:28 INFO epoch 47 training [time: 3117.09s, train loss: 0.0454]
26 Apr 01:20 INFO epoch 48 training [time: 3117.25s, train loss: 0.0077]
26 Apr 02:11 INFO epoch 49 training [time: 3111.24s, train loss: 0.0869]
26 Apr 03:03 INFO epoch 50 training [time: 3125.30s, train loss: 0.0945]
26 Apr 03:56 INFO epoch 51 training [time: 3146.26s, train loss: 0.1068]
26 Apr 04:49 INFO epoch 52 training [time: 3178.39s, train loss: 0.0111]
26 Apr 05:41 INFO epoch 53 training [time: 3120.06s, train loss: 0.3165]
26 Apr 06:33 INFO epoch 54 training [time: 3154.74s, train loss: 0.0087]
26 Apr 07:26 INFO epoch 55 training [time: 3124.74s, train loss: 0.1176]
26 Apr 08:18 INFO epoch 56 training [time: 3120.42s, train loss: 0.5267]
26 Apr 09:10 INFO epoch 57 training [time: 3127.11s, train loss: 0.6991]
26 Apr 10:02 INFO epoch 58 training [time: 3122.78s, train loss: 0.3278]
26 Apr 10:54 INFO epoch 59 training [time: 3124.67s, train loss: 0.0189]
26 Apr 11:46 INFO epoch 60 training [time: 3119.73s, train loss: 0.3379]
26 Apr 12:38 INFO epoch 61 training [time: 3136.38s, train loss: 0.1393]
26 Apr 13:30 INFO epoch 62 training [time: 3126.28s, train loss: 0.9907]
26 Apr 14:22 INFO epoch 63 training [time: 3125.17s, train loss: 0.0340]
26 Apr 15:14 INFO epoch 64 training [time: 3124.72s, train loss: 0.7393]
26 Apr 16:06 INFO epoch 65 training [time: 3123.74s, train loss: 0.4404]
26 Apr 16:59 INFO epoch 66 training [time: 3130.54s, train loss: 0.3823]
26 Apr 17:51 INFO epoch 67 training [time: 3131.72s, train loss: 0.1315]
26 Apr 18:43 INFO epoch 68 training [time: 3127.59s, train loss: 0.3624]
26 Apr 19:35 INFO epoch 69 training [time: 3129.02s, train loss: 0.1061]
26 Apr 20:27 INFO epoch 70 training [time: 3130.44s, train loss: 0.8873]
26 Apr 21:19 INFO epoch 71 training [time: 3127.70s, train loss: 0.3913]
26 Apr 22:11 INFO epoch 72 training [time: 3128.90s, train loss: 0.0003]
26 Apr 23:04 INFO epoch 73 training [time: 3166.15s, train loss: 1.3058]
26 Apr 23:56 INFO epoch 74 training [time: 3126.71s, train loss: 0.4074]
27 Apr 00:49 INFO epoch 75 training [time: 3134.46s, train loss: 0.5775]
27 Apr 01:41 INFO epoch 76 training [time: 3139.76s, train loss: 0.0001]
27 Apr 02:33 INFO epoch 77 training [time: 3145.85s, train loss: 0.0979]
27 Apr 03:26 INFO epoch 78 training [time: 3129.52s, train loss: 0.0303]
27 Apr 04:19 INFO epoch 79 training [time: 3221.91s, train loss: 0.5272]
27 Apr 04:19 INFO End adversarial pretraining...
27 Apr 04:19 INFO best valid loss: -1, best valid ppl: None
27 Apr 04:19 INFO Loading model structure and parameters from saved/SeqGAN-novels-Apr-23-2023_23-30-28.pth
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 709/709 [01:34<00:00, 7.47it/s]
Traceback (most recent call last):
  File "/home/hana/Documents/TextBox-0.2.1/run.py", line 4, in <module>
    run_textbox(model='SeqGAN', dataset='novels', config_file_list=None, config_dict={'dataset': 'novels','dataset_path': './dataset'})
  File "/home/hana/Documents/TextBox-0.2.1/textbox/quick_start/quick_start.py", line 90, in run_textbox
    test_result = trainer.evaluate(test_data, load_best_model=saved)
  File "/home/hana/miniconda3/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/hana/Documents/TextBox-0.2.1/textbox/trainer/trainer.py", line 472, in evaluate
    result = self.evaluator.evaluate(generate_corpus, reference_corpus)
  File "/home/hana/Documents/TextBox-0.2.1/textbox/evaluator/base_evaluator.py", line 87, in evaluate
    metric_result = evaluator.evaluate(generate_corpus=generate_corpus, reference_corpus=reference_corpus)
  File "/home/hana/Documents/TextBox-0.2.1/textbox/evaluator/abstract_evaluator.py", line 43, in evaluate
    info_dict = self._calc_metrics_info(generate_corpus=generate_corpus, reference_corpus=reference_corpus)
  File "/home/hana/Documents/TextBox-0.2.1/textbox/evaluator/bleu_evaluator.py", line 80, in _calc_metrics_info
    results = self._calc_fast_bleu(generate_corpus=generate_corpus, reference_corpus=reference_corpus)
  File "/home/hana/Documents/TextBox-0.2.1/textbox/evaluator/bleu_evaluator.py", line 58, in _calc_fast_bleu
    bleu = BLEU(reference_corpus, self.weights)
  File "/home/hana/miniconda3/lib/python3.10/site-packages/fast_bleu/python_wrapper.py", line 85, in __init__
    self.__instance = self.__get_instance(
RuntimeError: ffi_prep_cif_var failed

hmqt commented

Please give me the command to evaluate my already-generated texts on a different PC with textbox-0.2.1. I got fast-bleu working on another machine, and I don't want to retrain my model on my data; training takes a lot of time.

We do not support that out of the box right now, but you can call the evaluation code directly to perform the same operation.
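For example, here is a minimal sketch of that idea: it recomputes fast-BLEU directly with the fast-bleu package on texts you have already generated, so nothing needs to be retrained. The file names and the one-tokenized-sentence-per-line layout are assumptions about how you saved your corpora; the weights dict mirrors n_grams=[1, 2, 3, 4, 5] from the config above, and BLEU(reference_corpus, weights) is the same call TextBox makes in bleu_evaluator.py.

```python
# Hedged sketch: recompute fast-BLEU on previously generated texts.
# "generated.txt" and "reference.txt" are hypothetical file names holding
# one whitespace-tokenized sentence per line.
from fast_bleu import BLEU

def load_corpus(path):
    with open(path, encoding='utf-8') as f:
        return [line.split() for line in f if line.strip()]

generate_corpus = load_corpus('generated.txt')
reference_corpus = load_corpus('reference.txt')

# Uniform n-gram weights for n = 1..5, mirroring n_grams=[1, 2, 3, 4, 5].
weights = {f'bleu-{n}': tuple(1.0 / n for _ in range(n)) for n in range(1, 6)}

bleu = BLEU(reference_corpus, weights)    # the same call that fails above
scores = bleu.get_score(generate_corpus)  # {key: per-sentence scores}
for key, values in scores.items():
    print(key, sum(values) / len(values))
```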

I also ran into this problem.
Traceback (most recent call last):
  File "/home/chem/Desktop/textbox/TextBox/run_textbox.py", line 19, in <module>
    run_textbox(
  File "/home/chem/Desktop/textbox/TextBox/textbox/quick_start/quick_start.py", line 67, in run_textbox
    test_result = trainer.evaluate(test_data, load_best_model=saved)
  File "/home/chem/anaconda3/envs/RealTorch2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/chem/Desktop/textbox/TextBox/textbox/trainer/trainer.py", line 367, in evaluate
    result = self.evaluator.evaluate(generate_corpus, reference_corpus)
  File "/home/chem/Desktop/textbox/TextBox/textbox/evaluator/ngram_evaluator.py", line 47, in evaluate
    result_dict = self._calculate_metrics(generate_corpus=generate_corpus, reference_corpus=reference_corpus)
  File "/home/chem/Desktop/textbox/TextBox/textbox/evaluator/ngram_evaluator.py", line 113, in _calculate_metrics
    result_list = self._metrics_info(generate_corpus, reference_corpus, metric)
  File "/home/chem/Desktop/textbox/TextBox/textbox/evaluator/ngram_evaluator.py", line 96, in _metrics_info
    result = metric_fuc(generate_corpus=generate_corpus, reference_corpus=reference_corpus, n_grams=self.n_grams)
  File "/home/chem/Desktop/textbox/TextBox/textbox/evaluator/metrics.py", line 42, in bleu
    bleu = BLEU(reference_corpus, weights)
  File "/home/chem/anaconda3/envs/RealTorch2/lib/python3.10/site-packages/fast_bleu/python_wrapper.py", line 85, in __init__
    self.__instance = self.__get_instance(
RuntimeError: ffi_prep_cif_var failed
It threw the error above.
When I run in the server's base environment, fast_bleu raises no error, but in my own torch environment (named RealTorch2, created so I can use the GPU) I get the error above. I tried copying fast_bleu from base directly into the custom environment's site-packages, but the problem persists. Have you encountered this issue? I asked ChatGPT, and it listed the following possibilities:

The "RuntimeError: ffi_prep_cif_var failed" error is usually related to loading and calling C extension modules or external libraries, and is typically caused by differing environment variables or dependencies. In your case, you mentioned that the error appears when running the same code in different environments.

This error arises when the FFI (Foreign Function Interface) fails while preparing a function call specification (CIF). Some possible causes and fixes:

Environment dependency issues: your custom environment "RealTorch2" may be missing dependencies or be misconfigured. Make sure the libraries and dependencies installed there match the versions used in the "base" environment. Tools such as Conda can help create environments and manage dependencies.

Library path issues: the custom environment may not have its library paths configured correctly to locate the required shared libraries (.so files). Check your environment variables and make sure they include the correct library paths.

Bitness compatibility: if the "base" environment uses a Python build of a different bitness (e.g., 32-bit vs. 64-bit), mixing bitness in the custom environment can cause problems. Make sure both environments use Python of the same bitness.

C extension modules: if your code depends on C extension modules, they may need to be recompiled or reinstalled in the custom environment. Make sure these extensions are compatible with it.

Different Conda environments: if "base" and "RealTorch2" were created as separate Conda environments, they may differ in environment variables or dependency versions. Make sure the two environments are configured consistently.

Python version: the custom environment may use a different Python version. Make sure both environments use the same Python version.
Do you have any resolution that might help?
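A quick way to isolate this, shown below, is to run fast-bleu's own minimal example (adapted from its README) in both environments; if this tiny script already raises "RuntimeError: ffi_prep_cif_var failed" in RealTorch2 but not in base, the failure lives entirely in fast-bleu's compiled extension rather than in TextBox.

```python
# Minimal fast-bleu smoke test, adapted from the fast-bleu README.
# Run it in both conda environments to see where the failure lives.
from fast_bleu import BLEU

ref1 = ['the', 'cat', 'sat', 'on', 'the', 'mat']
ref2 = ['there', 'is', 'a', 'cat', 'on', 'the', 'mat']
hyp = ['the', 'cat', 'is', 'on', 'the', 'mat']

weights = {'bigram': (1 / 2., 1 / 2.), 'trigram': (1 / 3., 1 / 3., 1 / 3.)}
bleu = BLEU([ref1, ref2], weights)  # the constructor call that fails above
print(bleu.get_score([hyp]))
```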

fast-bleu and our framework do not affect each other. I suggest going to the fast-bleu repository (https://github.com/Danial-Alh/fast-bleu) to see how to resolve this, for example by switching to a different version.
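As a small companion to that suggestion, here is a hedged diagnostic sketch that prints which fast-bleu build and which Python each environment actually loads; the distribution name "fast-bleu" passed to importlib.metadata is an assumption to verify against your install.

```python
# Hedged diagnostic: compare what each conda environment really loads.
# Differing module paths, versions, or Python builds between "base" and
# "RealTorch2" would point to an environment-specific extension problem.
import importlib.metadata
import platform
import sys

import fast_bleu

print("python      :", sys.version.replace("\n", " "))
print("platform    :", platform.platform())
print("module path :", fast_bleu.__file__)
print("version     :", importlib.metadata.version("fast-bleu"))  # assumed dist name
```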