OpenGVLab/EfficientQAT

Reproduce Llama2-7b

laomao0 opened this issue · 5 comments

I reproduced the Block-AP results for Llama-2-7b-w2g64 and got wikitext2 PPL 7.76 and c4 PPL 9.50 (logs below).
Testing with the author-released weights [1] gives 7.65 and 9.36. Is there anything to watch out for when training the model,
or is the PPL shift just caused by randomness?

[1] https://huggingface.co/ChenMnZ/Llama-2-7b-BlockAP-w2g64/tree/main
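For context on how the 7.76 vs. 7.65 comparison is computed: wikitext2/c4 perplexity for LLMs is typically evaluated by concatenating the test split, slicing it into fixed 2048-token chunks (matching ppl_seqlen=2048 in the log below), and exponentiating the average token-level cross-entropy. The following is a minimal wikitext2 sketch of that procedure, assuming a model and tokenizer loadable through transformers; it is illustrative, not the repo's own evaluation code, and a packed 2-bit checkpoint would need the repo's loader rather than AutoModelForCausalLM.

```python
# Minimal wikitext2 perplexity sketch (illustrative only; not the repo's eval code).
# Assumes a causal LM and tokenizer that can be loaded through transformers.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def wikitext2_ppl(model, tokenizer, seqlen=2048, device="cuda"):
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids
    n_chunks = ids.numel() // seqlen
    nlls = []
    for i in range(n_chunks):
        chunk = ids[:, i * seqlen:(i + 1) * seqlen].to(device)
        # labels == input_ids -> HF returns the mean next-token cross-entropy for the chunk
        loss = model(chunk, labels=chunk).loss
        nlls.append(loss.float() * seqlen)
    return torch.exp(torch.stack(nlls).sum() / (n_chunks * seqlen)).item()

if __name__ == "__main__":
    name = "meta-llama/Llama-2-7b-hf"  # placeholder; a packed 2-bit checkpoint needs the repo's loader
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16).cuda().eval()
    print("wikitext2 ppl:", wikitext2_ppl(model, tok))
```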

Logs:
[2024-08-14 16:31:39 root] (main_block_ap.py 118): INFO Namespace(model='Llama-2-7b-hf/', cache_dir='./cache', output_dir='./output/block_ap_log/Llama-2-7b-w2g64', save_quant_dir='./output/block_ap_models/Llama-2-7b-w2g64', real_quant=True, resume_quant=None, calib_dataset='redpajama', train_size=4096, val_size=64, training_seqlen=2048, batch_size=2, epochs=2, num_workers=2, prefetch_factor=None, ppl_seqlen=2048, seed=2, eval_ppl=True, eval_tasks='piqa', eval_batch_size=16, wbits=2, group_size=64, quant_lr=0.0001, weight_lr=2e-05, min_lr_factor=20, clip_grad=0.3, wd=0, net='Llama-2', max_memory='70GiB', early_stop=0, off_load_to_disk=False)
[2024-08-14 16:31:40 root] (main_block_ap.py 137): INFO === start quantization ===
[2024-08-14 16:31:41 root] (main_block_ap.py 144): INFO load trainloader from ./cache/dataloader_Llama-2_redpajama_4096_64_2048_train.cache
[2024-08-14 16:31:41 root] (main_block_ap.py 146): INFO load valloader from ./cache/dataloader_Llama-2_redpajama_4096_64_2048_val.cache
[2024-08-14 16:31:41 root] (block_ap.py 40): INFO Starting ...
[2024-08-14 16:32:33 root] (block_ap.py 168): INFO === Start quantize blocks 0===
[2024-08-14 16:38:40 root] (block_ap.py 282): INFO blocks 0 epoch 0 recon_loss:8.594008249929175e-06 val_loss:7.580841611343203e-06 quant_lr:5.246359588146619e-05 norm:0.00012392 max memory_allocated 8455.283203125 time 290.969509601593
[2024-08-14 16:43:27 root] (block_ap.py 282): INFO blocks 0 epoch 1 recon_loss:8.12751295597991e-06 val_loss:7.262070539582055e-06 quant_lr:5e-06 norm:0.00005905 max memory_allocated 8517.970703125 time 286.4055132865906
[2024-08-14 16:45:05 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 16:45:06 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 16:45:07 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 16:45:08 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 16:45:11 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 16:45:14 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 16:45:16 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 16:45:18 root] (block_ap.py 168): INFO === Start quantize blocks 1===
[2024-08-14 16:52:29 root] (block_ap.py 282): INFO blocks 1 epoch 0 recon_loss:0.0004997474025003612 val_loss:0.000351719674654305 quant_lr:5.246359588146619e-05 norm:nan max memory_allocated 8517.970703125 time 297.591655254364
[2024-08-14 16:57:07 root] (block_ap.py 282): INFO blocks 1 epoch 1 recon_loss:0.00018435771926306188 val_loss:0.00020597601542249322 quant_lr:5e-06 norm:0.06656547 max memory_allocated 8517.970703125 time 278.26057744026184
[2024-08-14 16:59:18 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 16:59:50 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 17:00:13 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 17:00:45 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 17:01:02 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 17:01:05 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 17:01:07 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 17:01:07 root] (block_ap.py 168): INFO === Start quantize blocks 2===
[2024-08-14 17:07:58 root] (block_ap.py 282): INFO blocks 2 epoch 0 recon_loss:0.0003257098142057657 val_loss:0.0003710766904987395 quant_lr:5.246359588146619e-05 norm:0.00037167 max memory_allocated 8517.970703125 time 286.33166909217834
[2024-08-14 17:12:42 root] (block_ap.py 282): INFO blocks 2 epoch 1 recon_loss:0.00031177984783425927 val_loss:0.00035950145684182644 quant_lr:5e-06 norm:0.00031014 max memory_allocated 8517.970703125 time 283.7495985031128
[2024-08-14 17:14:11 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 17:14:12 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 17:14:14 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 17:14:15 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 17:16:01 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 17:17:12 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 17:18:00 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 17:18:01 root] (block_ap.py 168): INFO === Start quantize blocks 3===
[2024-08-14 17:24:09 root] (block_ap.py 282): INFO blocks 3 epoch 0 recon_loss:0.0006854579551145434 val_loss:0.0007203494315035641 quant_lr:5.246359588146619e-05 norm:0.00050366 max memory_allocated 8517.970703125 time 283.4612078666687
[2024-08-14 17:28:51 root] (block_ap.py 282): INFO blocks 3 epoch 1 recon_loss:0.0006577327731065452 val_loss:0.0006985294166952372 quant_lr:5e-06 norm:0.00036405 max memory_allocated 8518.318359375 time 281.97388911247253
[2024-08-14 17:30:24 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 17:30:24 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 17:30:25 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 17:30:26 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 17:32:05 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 17:35:10 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 17:38:02 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 17:38:02 root] (block_ap.py 168): INFO === Start quantize blocks 4===
[2024-08-14 17:44:26 root] (block_ap.py 282): INFO blocks 4 epoch 0 recon_loss:0.0013201860710978508 val_loss:0.0013336996780708432 quant_lr:5.246359588146619e-05 norm:0.00093194 max memory_allocated 8518.318359375 time 295.5432846546173
[2024-08-14 17:49:13 root] (block_ap.py 282): INFO blocks 4 epoch 1 recon_loss:0.0012707230634987354 val_loss:0.001295733731240034 quant_lr:5e-06 norm:0.00075984 max memory_allocated 8518.505859375 time 286.6534764766693
[2024-08-14 17:51:29 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 17:51:46 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 17:51:55 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 17:51:56 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 17:51:59 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 17:52:01 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 17:52:04 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 17:52:04 root] (block_ap.py 168): INFO === Start quantize blocks 5===
[2024-08-14 17:58:21 root] (block_ap.py 282): INFO blocks 5 epoch 0 recon_loss:0.0021819218527525663 val_loss:0.0022145770490169525 quant_lr:5.246359588146619e-05 norm:0.00123494 max memory_allocated 8518.505859375 time 285.17820978164673
[2024-08-14 18:03:00 root] (block_ap.py 282): INFO blocks 5 epoch 1 recon_loss:0.0021043650340288877 val_loss:0.002152420347556472 quant_lr:5e-06 norm:0.00095953 max memory_allocated 8518.505859375 time 278.853679895401
[2024-08-14 18:04:28 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 18:04:29 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 18:04:30 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 18:04:31 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 18:04:33 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 18:04:35 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 18:04:38 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 18:04:38 root] (block_ap.py 168): INFO === Start quantize blocks 6===
[2024-08-14 18:10:46 root] (block_ap.py 282): INFO blocks 6 epoch 0 recon_loss:0.0033518734853714705 val_loss:0.0034217406064271927 quant_lr:5.246359588146619e-05 norm:0.00199120 max memory_allocated 8518.505859375 time 277.6243155002594
[2024-08-14 18:15:22 root] (block_ap.py 282): INFO blocks 6 epoch 1 recon_loss:0.003231955925002694 val_loss:0.0033283629454672337 quant_lr:5e-06 norm:0.00152712 max memory_allocated 8518.505859375 time 276.12470412254333
[2024-08-14 18:16:48 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 18:16:49 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 18:16:50 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 18:16:51 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 18:16:54 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 18:16:56 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 18:16:58 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 18:16:58 root] (block_ap.py 168): INFO === Start quantize blocks 7===
[2024-08-14 18:23:02 root] (block_ap.py 282): INFO blocks 7 epoch 0 recon_loss:0.004820345435291529 val_loss:0.004918545018881559 quant_lr:5.246359588146619e-05 norm:0.00240560 max memory_allocated 8518.505859375 time 278.41521096229553
[2024-08-14 18:27:38 root] (block_ap.py 282): INFO blocks 7 epoch 1 recon_loss:0.004653809126466513 val_loss:0.004793040454387665 quant_lr:5e-06 norm:0.00181658 max memory_allocated 8518.505859375 time 276.16285276412964
[2024-08-14 18:29:06 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 18:29:07 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 18:29:08 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 18:29:09 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 18:29:12 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 18:29:14 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 18:29:17 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 18:29:17 root] (block_ap.py 168): INFO === Start quantize blocks 8===
[2024-08-14 18:35:26 root] (block_ap.py 282): INFO blocks 8 epoch 0 recon_loss:0.006675077602267265 val_loss:0.006832738872617483 quant_lr:5.246359588146619e-05 norm:0.00309778 max memory_allocated 8518.505859375 time 278.64258790016174
[2024-08-14 18:40:06 root] (block_ap.py 282): INFO blocks 8 epoch 1 recon_loss:0.006455935072153807 val_loss:0.0066694761626422405 quant_lr:5e-06 norm:0.00235353 max memory_allocated 8518.630859375 time 280.33279943466187
[2024-08-14 18:41:33 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 18:41:35 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 18:41:36 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 18:41:37 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 18:41:39 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 18:41:41 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 18:41:44 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 18:41:44 root] (block_ap.py 168): INFO === Start quantize blocks 9===
[2024-08-14 18:47:57 root] (block_ap.py 282): INFO blocks 9 epoch 0 recon_loss:0.00886989664286375 val_loss:0.009123056195676327 quant_lr:5.246359588146619e-05 norm:0.00360573 max memory_allocated 8518.630859375 time 280.0216248035431
[2024-08-14 18:52:36 root] (block_ap.py 282): INFO blocks 9 epoch 1 recon_loss:0.00860089622437954 val_loss:0.00892514456063509 quant_lr:5e-06 norm:0.00288742 max memory_allocated 8519.658203125 time 278.7797086238861
[2024-08-14 18:54:03 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 18:54:04 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 18:54:05 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 18:54:06 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 18:54:08 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 18:54:10 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 18:54:12 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 18:54:12 root] (block_ap.py 168): INFO === Start quantize blocks 10===
[2024-08-14 19:00:27 root] (block_ap.py 282): INFO blocks 10 epoch 0 recon_loss:0.011428315192461014 val_loss:0.011835003271698952 quant_lr:5.246359588146619e-05 norm:0.00466039 max memory_allocated 8519.658203125 time 285.22361612319946
[2024-08-14 19:05:09 root] (block_ap.py 282): INFO blocks 10 epoch 1 recon_loss:0.011084350757300854 val_loss:0.011578837409615517 quant_lr:5e-06 norm:0.00364604 max memory_allocated 8519.658203125 time 282.1856586933136
[2024-08-14 19:06:38 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 19:06:39 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 19:06:39 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 19:06:40 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 19:06:43 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 19:06:45 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 19:06:47 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 19:06:47 root] (block_ap.py 168): INFO === Start quantize blocks 11===
[2024-08-14 19:13:05 root] (block_ap.py 282): INFO blocks 11 epoch 0 recon_loss:0.013866116292774677 val_loss:0.014432313852012157 quant_lr:5.246359588146619e-05 norm:0.00563362 max memory_allocated 8519.658203125 time 281.92831587791443
[2024-08-14 19:17:49 root] (block_ap.py 282): INFO blocks 11 epoch 1 recon_loss:0.013461158610880375 val_loss:0.014141718856990337 quant_lr:5e-06 norm:0.00433671 max memory_allocated 8519.658203125 time 284.3981432914734
[2024-08-14 19:19:16 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 19:19:17 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 19:19:18 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 19:19:18 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 19:19:21 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 19:19:23 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 19:19:25 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 19:19:27 root] (block_ap.py 168): INFO === Start quantize blocks 12===
[2024-08-14 19:25:34 root] (block_ap.py 282): INFO blocks 12 epoch 0 recon_loss:0.016755955293774605 val_loss:0.017479244619607925 quant_lr:5.246359588146619e-05 norm:0.00553546 max memory_allocated 8519.658203125 time 279.1489243507385
[2024-08-14 19:30:17 root] (block_ap.py 282): INFO blocks 12 epoch 1 recon_loss:0.016300391405820847 val_loss:0.017152251675724983 quant_lr:5e-06 norm:0.00442587 max memory_allocated 8519.658203125 time 283.40688371658325
[2024-08-14 19:31:47 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 19:31:48 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 19:31:49 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 19:31:50 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 19:31:52 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 19:31:55 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 19:31:57 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 19:31:57 root] (block_ap.py 168): INFO === Start quantize blocks 13===
[2024-08-14 19:38:10 root] (block_ap.py 282): INFO blocks 13 epoch 0 recon_loss:0.020204272121191025 val_loss:0.021183490753173828 quant_lr:5.246359588146619e-05 norm:0.00652536 max memory_allocated 8519.658203125 time 285.3700318336487
[2024-08-14 19:42:55 root] (block_ap.py 282): INFO blocks 13 epoch 1 recon_loss:0.0196205023676157 val_loss:0.020783085376024246 quant_lr:5e-06 norm:0.00520806 max memory_allocated 8519.658203125 time 284.82681918144226
[2024-08-14 19:44:24 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 19:44:25 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 19:44:26 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 19:44:27 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 19:44:30 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 19:44:32 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 19:44:34 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 19:44:34 root] (block_ap.py 168): INFO === Start quantize blocks 14===
[2024-08-14 19:50:42 root] (block_ap.py 282): INFO blocks 14 epoch 0 recon_loss:0.024218318983912468 val_loss:0.025555459782481194 quant_lr:5.246359588146619e-05 norm:0.00686987 max memory_allocated 8519.658203125 time 278.9575672149658
[2024-08-14 19:55:24 root] (block_ap.py 282): INFO blocks 14 epoch 1 recon_loss:0.02356504462659359 val_loss:0.025096077471971512 quant_lr:5e-06 norm:0.00564267 max memory_allocated 8519.658203125 time 281.90392565727234
[2024-08-14 19:56:51 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 19:56:52 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 19:56:53 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 19:56:54 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 19:56:57 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 19:56:59 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 19:57:01 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 19:57:01 root] (block_ap.py 168): INFO === Start quantize blocks 15===
[2024-08-14 20:03:11 root] (block_ap.py 282): INFO blocks 15 epoch 0 recon_loss:0.029241742566227913 val_loss:0.03108583576977253 quant_lr:5.246359588146619e-05 norm:0.00760930 max memory_allocated 8519.658203125 time 284.7247955799103
[2024-08-14 20:07:54 root] (block_ap.py 282): INFO blocks 15 epoch 1 recon_loss:0.028492607176303864 val_loss:0.03053319826722145 quant_lr:5e-06 norm:0.00631329 max memory_allocated 8519.658203125 time 282.15850925445557
[2024-08-14 20:09:41 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 20:09:42 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 20:09:43 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 20:09:44 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 20:09:46 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 20:09:48 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 20:09:50 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 20:09:50 root] (block_ap.py 168): INFO === Start quantize blocks 16===
[2024-08-14 20:16:03 root] (block_ap.py 282): INFO blocks 16 epoch 0 recon_loss:0.03898978978395462 val_loss:0.041594915091991425 quant_lr:5.246359588146619e-05 norm:0.01111209 max memory_allocated 8519.658203125 time 283.0070013999939
[2024-08-14 20:20:44 root] (block_ap.py 282): INFO blocks 16 epoch 1 recon_loss:0.0378853864967823 val_loss:0.04081832617521286 quant_lr:5e-06 norm:0.00931864 max memory_allocated 8519.658203125 time 280.76309466362
[2024-08-14 20:22:07 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 20:22:08 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 20:22:09 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 20:22:10 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 20:22:12 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 20:22:14 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 20:22:16 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 20:22:16 root] (block_ap.py 168): INFO === Start quantize blocks 17===
[2024-08-14 20:28:19 root] (block_ap.py 282): INFO blocks 17 epoch 0 recon_loss:0.04711794853210449 val_loss:0.0506027415394783 quant_lr:5.246359588146619e-05 norm:0.01079455 max memory_allocated 8519.658203125 time 284.0053617954254
[2024-08-14 20:32:58 root] (block_ap.py 282): INFO blocks 17 epoch 1 recon_loss:0.045941613614559174 val_loss:0.04977923259139061 quant_lr:5e-06 norm:0.00914490 max memory_allocated 8519.658203125 time 278.86100482940674
[2024-08-14 20:34:24 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 20:34:25 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 20:34:25 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 20:34:26 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 20:34:29 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 20:34:32 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 20:34:34 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 20:34:34 root] (block_ap.py 168): INFO === Start quantize blocks 18===
[2024-08-14 20:40:45 root] (block_ap.py 282): INFO blocks 18 epoch 0 recon_loss:0.06027898192405701 val_loss:0.06512356549501419 quant_lr:5.246359588146619e-05 norm:0.01245181 max memory_allocated 8519.658203125 time 284.8619499206543
[2024-08-14 20:45:32 root] (block_ap.py 282): INFO blocks 18 epoch 1 recon_loss:0.05885666236281395 val_loss:0.06417141109704971 quant_lr:5e-06 norm:0.01045257 max memory_allocated 8519.658203125 time 287.0018377304077
[2024-08-14 20:47:00 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 20:47:01 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 20:47:02 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 20:47:02 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 20:47:05 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 20:47:08 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 20:47:10 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 20:47:10 root] (block_ap.py 168): INFO === Start quantize blocks 19===
[2024-08-14 20:53:30 root] (block_ap.py 282): INFO blocks 19 epoch 0 recon_loss:0.07558350265026093 val_loss:0.08216790854930878 quant_lr:5.246359588146619e-05 norm:0.01257595 max memory_allocated 8519.658203125 time 289.10349678993225
[2024-08-14 20:58:17 root] (block_ap.py 282): INFO blocks 19 epoch 1 recon_loss:0.07396982610225677 val_loss:0.0810832604765892 quant_lr:5e-06 norm:0.01061844 max memory_allocated 8519.658203125 time 287.0697674751282
[2024-08-14 20:59:46 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 20:59:47 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 20:59:48 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 20:59:49 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 20:59:52 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 20:59:54 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 20:59:57 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 20:59:57 root] (block_ap.py 168): INFO === Start quantize blocks 20===
[2024-08-14 21:06:23 root] (block_ap.py 282): INFO blocks 20 epoch 0 recon_loss:0.09740427881479263 val_loss:0.10711297392845154 quant_lr:5.246359588146619e-05 norm:0.01927342 max memory_allocated 8519.658203125 time 285.2277088165283
[2024-08-14 21:11:06 root] (block_ap.py 282): INFO blocks 20 epoch 1 recon_loss:0.09526768326759338 val_loss:0.10575759410858154 quant_lr:5e-06 norm:0.01600632 max memory_allocated 8519.658203125 time 282.47752952575684
[2024-08-14 21:13:18 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 21:13:19 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 21:13:19 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 21:13:20 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 21:13:23 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 21:13:25 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 21:13:28 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 21:13:28 root] (block_ap.py 168): INFO === Start quantize blocks 21===
[2024-08-14 21:19:39 root] (block_ap.py 282): INFO blocks 21 epoch 0 recon_loss:0.11951509863138199 val_loss:0.1327338069677353 quant_lr:5.246359588146619e-05 norm:0.01767462 max memory_allocated 8519.658203125 time 285.6971871852875
[2024-08-14 21:24:21 root] (block_ap.py 282): INFO blocks 21 epoch 1 recon_loss:0.11720212548971176 val_loss:0.1313733160495758 quant_lr:5e-06 norm:0.01514120 max memory_allocated 8519.658203125 time 282.26369166374207
[2024-08-14 21:25:48 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 21:25:49 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 21:25:50 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 21:25:51 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 21:25:54 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 21:25:56 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 21:25:58 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 21:25:58 root] (block_ap.py 168): INFO === Start quantize blocks 22===
[2024-08-14 21:32:53 root] (block_ap.py 282): INFO blocks 22 epoch 0 recon_loss:0.14931800961494446 val_loss:0.1669609248638153 quant_lr:5.246359588146619e-05 norm:0.02538409 max memory_allocated 8519.658203125 time 281.6525242328644
[2024-08-14 21:37:35 root] (block_ap.py 282): INFO blocks 22 epoch 1 recon_loss:0.1465100646018982 val_loss:0.16539032757282257 quant_lr:5e-06 norm:0.02159324 max memory_allocated 8519.658203125 time 281.9111089706421
[2024-08-14 21:39:03 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 21:39:04 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 21:39:05 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 21:39:06 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 21:39:08 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 21:39:10 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 21:39:13 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 21:39:13 root] (block_ap.py 168): INFO === Start quantize blocks 23===
[2024-08-14 21:45:34 root] (block_ap.py 282): INFO blocks 23 epoch 0 recon_loss:0.1784018725156784 val_loss:0.2008756846189499 quant_lr:5.246359588146619e-05 norm:0.02084934 max memory_allocated 8519.658203125 time 276.7708706855774
[2024-08-14 21:50:26 root] (block_ap.py 282): INFO blocks 23 epoch 1 recon_loss:0.175482377409935 val_loss:0.19920319318771362 quant_lr:5e-06 norm:0.01792830 max memory_allocated 8519.658203125 time 292.64991760253906
[2024-08-14 21:51:53 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 21:51:54 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 21:51:55 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 21:51:56 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 21:51:59 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 21:52:01 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 21:52:03 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 21:52:03 root] (block_ap.py 168): INFO === Start quantize blocks 24===
[2024-08-14 21:58:27 root] (block_ap.py 282): INFO blocks 24 epoch 0 recon_loss:0.21409031748771667 val_loss:0.2420840859413147 quant_lr:5.246359588146619e-05 norm:0.02759734 max memory_allocated 8519.658203125 time 289.225692987442
[2024-08-14 22:03:13 root] (block_ap.py 282): INFO blocks 24 epoch 1 recon_loss:0.21059788763523102 val_loss:0.24024002254009247 quant_lr:5e-06 norm:0.02373131 max memory_allocated 8519.658203125 time 285.1870496273041
[2024-08-14 22:05:25 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 22:05:26 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 22:05:27 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 22:05:28 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 22:05:30 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 22:05:33 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 22:05:35 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 22:05:35 root] (block_ap.py 168): INFO === Start quantize blocks 25===
[2024-08-14 22:11:48 root] (block_ap.py 282): INFO blocks 25 epoch 0 recon_loss:0.24982158839702606 val_loss:0.2835429012775421 quant_lr:5.246359588146619e-05 norm:0.02741695 max memory_allocated 8519.658203125 time 287.72892332077026
[2024-08-14 22:16:28 root] (block_ap.py 282): INFO blocks 25 epoch 1 recon_loss:0.24625223875045776 val_loss:0.2816184163093567 quant_lr:5e-06 norm:0.02328901 max memory_allocated 8519.658203125 time 280.09438252449036
[2024-08-14 22:17:56 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 22:17:57 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 22:17:58 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 22:17:59 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 22:18:01 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 22:18:04 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 22:18:06 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 22:18:06 root] (block_ap.py 168): INFO === Start quantize blocks 26===
[2024-08-14 22:24:20 root] (block_ap.py 282): INFO blocks 26 epoch 0 recon_loss:0.2987477481365204 val_loss:0.33984383940696716 quant_lr:5.246359588146619e-05 norm:0.03653025 max memory_allocated 8519.658203125 time 280.57843804359436
[2024-08-14 22:28:56 root] (block_ap.py 282): INFO blocks 26 epoch 1 recon_loss:0.294434517621994 val_loss:0.33769410848617554 quant_lr:5e-06 norm:0.03079848 max memory_allocated 8519.658203125 time 276.3008465766907
[2024-08-14 22:30:24 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 22:30:25 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 22:30:26 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 22:30:27 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 22:30:30 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 22:30:32 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 22:30:34 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 22:30:35 root] (block_ap.py 168): INFO === Start quantize blocks 27===
[2024-08-14 22:37:00 root] (block_ap.py 282): INFO blocks 27 epoch 0 recon_loss:0.34948235750198364 val_loss:0.3982967436313629 quant_lr:5.246359588146619e-05 norm:0.03138978 max memory_allocated 8519.658203125 time 293.03565287590027
[2024-08-14 22:41:54 root] (block_ap.py 282): INFO blocks 27 epoch 1 recon_loss:0.3449081480503082 val_loss:0.3958095908164978 quant_lr:5e-06 norm:0.02752341 max memory_allocated 8519.658203125 time 294.02703952789307
[2024-08-14 22:43:18 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 22:43:19 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 22:43:20 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 22:43:21 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 22:43:23 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 22:43:25 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 22:43:27 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 22:43:27 root] (block_ap.py 168): INFO === Start quantize blocks 28===
[2024-08-14 22:49:32 root] (block_ap.py 282): INFO blocks 28 epoch 0 recon_loss:0.4159965515136719 val_loss:0.4738450050354004 quant_lr:5.246359588146619e-05 norm:0.04556414 max memory_allocated 8519.658203125 time 269.2068998813629
[2024-08-14 22:53:58 root] (block_ap.py 282): INFO blocks 28 epoch 1 recon_loss:0.4103543162345886 val_loss:0.4709373414516449 quant_lr:5e-06 norm:0.03992183 max memory_allocated 8519.658203125 time 265.77947306632996
[2024-08-14 22:55:22 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 22:55:23 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 22:55:24 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 22:55:25 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 22:55:28 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 22:55:30 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 22:55:32 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 22:55:33 root] (block_ap.py 168): INFO === Start quantize blocks 29===
[2024-08-14 23:01:26 root] (block_ap.py 282): INFO blocks 29 epoch 0 recon_loss:0.4943501055240631 val_loss:0.5624938011169434 quant_lr:5.246359588146619e-05 norm:0.05130781 max memory_allocated 8519.658203125 time 266.5509307384491
[2024-08-14 23:06:05 root] (block_ap.py 282): INFO blocks 29 epoch 1 recon_loss:0.4875487685203552 val_loss:0.5588646531105042 quant_lr:5e-06 norm:0.04454644 max memory_allocated 8519.658203125 time 278.9385681152344
[2024-08-14 23:07:32 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 23:07:33 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 23:07:34 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 23:07:35 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 23:07:37 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 23:07:39 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 23:07:41 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 23:07:42 root] (block_ap.py 168): INFO === Start quantize blocks 30===
[2024-08-14 23:13:52 root] (block_ap.py 282): INFO blocks 30 epoch 0 recon_loss:0.6065151691436768 val_loss:0.6869809031486511 quant_lr:5.246359588146619e-05 norm:0.09233399 max memory_allocated 8519.658203125 time 279.1169159412384
[2024-08-14 23:18:26 root] (block_ap.py 282): INFO blocks 30 epoch 1 recon_loss:0.5979331731796265 val_loss:0.6819834113121033 quant_lr:5e-06 norm:0.07441913 max memory_allocated 8519.658203125 time 274.71182131767273
[2024-08-14 23:19:51 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 23:19:52 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 23:19:53 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 23:19:54 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 23:19:57 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 23:19:59 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 23:20:02 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 23:20:02 root] (block_ap.py 168): INFO === Start quantize blocks 31===
[2024-08-14 23:26:27 root] (block_ap.py 282): INFO blocks 31 epoch 0 recon_loss:0.9959386587142944 val_loss:1.117325782775879 quant_lr:5.246359588146619e-05 norm:0.47569627 max memory_allocated 8519.658203125 time 291.72733783721924
[2024-08-14 23:31:17 root] (block_ap.py 282): INFO blocks 31 epoch 1 recon_loss:0.9752663969993591 val_loss:1.1027153730392456 quant_lr:5e-06 norm:nan max memory_allocated 8519.658203125 time 290.31018567085266
[2024-08-14 23:32:39 root] (block_ap.py 318): INFO pack quantized self_attn.q_proj finished
[2024-08-14 23:32:40 root] (block_ap.py 318): INFO pack quantized self_attn.k_proj finished
[2024-08-14 23:32:41 root] (block_ap.py 318): INFO pack quantized self_attn.v_proj finished
[2024-08-14 23:32:42 root] (block_ap.py 318): INFO pack quantized self_attn.o_proj finished
[2024-08-14 23:32:44 root] (block_ap.py 318): INFO pack quantized mlp.gate_proj finished
[2024-08-14 23:32:46 root] (block_ap.py 318): INFO pack quantized mlp.up_proj finished
[2024-08-14 23:32:49 root] (block_ap.py 318): INFO pack quantized mlp.down_proj finished
[2024-08-14 23:32:53 root] (main_block_ap.py 165): INFO 25272.88169503212
[2024-08-14 23:32:55 root] (main_block_ap.py 168): INFO start saving model
[2024-08-14 23:32:58 root] (main_block_ap.py 171): INFO save model success
[2024-08-14 23:35:50 root] (main_block_ap.py 40): INFO wikitext2 perplexity: 7.76
[2024-08-14 23:35:50 root] (main_block_ap.py 40): INFO c4 perplexity: 9.50
[2024-08-14 23:35:51 lm-eval] (huggingface.py 96): WARNING pretrained model kwarg is not of type str. Many other model arguments may be ignored. Please do not launch via accelerate or use parallelize=True if passing an existing model this way.
[2024-08-14 23:35:51 lm-eval] (huggingface.py 276): WARNING Passed an already-initialized model through pretrained, assuming single-process call to evaluate() or custom distributed integration
[2024-08-14 23:35:52 lm-eval] (init.py 491): INFO group and group_alias keys in tasks' configs will no longer be used in the next release of lm-eval. tag will be used to allow to call a collection of tasks just like group. group will be removed in order to not cause confusion with the new ConfigurableGroup which will be the offical way to create groups with addition of group-wide configuations.
[2024-08-14 23:35:57 lm-eval] (evaluator.py 158): INFO Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
[2024-08-14 23:35:57 lm-eval] (evaluator.py 209): INFO Using pre-initialized model
[2024-08-14 23:35:57 lm-eval] (evaluator.py 262): WARNING Overwriting default num_fewshot of piqa from None to 0
[2024-08-14 23:35:57 lm-eval] (evaluator.py 274): INFO Setting fewshot random generator seed to 1234
[2024-08-14 23:35:57 lm-eval] (task.py 423): INFO Building contexts for piqa on rank 0...
[2024-08-14 23:36:04 lm-eval] (evaluator.py 457): INFO Running loglikelihood requests
[2024-08-14 23:36:37 lm-eval] (huggingface.py 1341): WARNING Failed to get model SHA for LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaAttention(
          (q_proj): QuantLinear()
          (k_proj): QuantLinear()
          (v_proj): QuantLinear()
          (o_proj): QuantLinear()
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): QuantLinear()
          (up_proj): QuantLinear()
          (down_proj): QuantLinear()
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
) at revision main. Error: Repo id must be a string, not <class 'transformers.models.llama.modeling_llama.LlamaForCausalLM'>: 'LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaAttention(
          (q_proj): QuantLinear()
          (k_proj): QuantLinear()
          (v_proj): QuantLinear()
          (o_proj): QuantLinear()
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): QuantLinear()
          (up_proj): QuantLinear()
          (down_proj): QuantLinear()
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)'.
[2024-08-14 23:36:45 root] (main_block_ap.py 55): INFO |Tasks|Version|Filter|n-shot| Metric | |Value | |Stderr|
|-----|------:|------|-----:|--------|---|-----:|---|-----:|
|piqa | 1|none | 0|acc |↑ |0.7345|± |0.0103|
| | |none | 0|acc_norm|↑ |0.7443|± |0.0102|

[2024-08-14 23:36:45 root] (main_block_ap.py 59): INFO Average Acc: 73.45%

I found that all released models are trained for 2 epochs, except that the w2g64 Llama-2-7b is trained for 3 epochs.

So, the 0.1 PPL difference may be caused by the different number of training epochs.

Additionally, such a negligible PPL difference at 2-bit can be compensated for by the following E2E-QP process.

Thanks for your quick reply.

By the way, Table 8 lists the Block-AP training time of Llama-2-7b as 3.3h. What training setting does that correspond to? From the log of my reproduced w2g64 Llama-2-7b run with 2 epochs, it takes about 7h to quantize the 32 blocks.

Thanks.

@laomao0
The lower training speed results from the PyTorch dataloader.

I tried to rewrite the data processing code to use the PyTorch dataloader in order to make it clearer.

However, that change leads to a memory bottleneck and lower GPU utilization (visible through nvidia-smi).

I'm still trying to fix this.
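For illustration of the trade-off being described, here is a minimal sketch of serving pre-tokenized calibration chunks through a PyTorch DataLoader, assuming the calibration set is already a tensor of token IDs. The function and variable names are hypothetical, not the repo's actual pipeline; the point is that pin_memory, persistent_workers, and prefetch_factor are the usual knobs when the DataLoader becomes the bottleneck.

```python
# Illustrative sketch of feeding pre-tokenized calibration data through a
# PyTorch DataLoader; names are hypothetical, not the repo's actual pipeline.
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_calib_loader(token_ids: torch.Tensor, batch_size: int = 2,
                      num_workers: int = 2) -> DataLoader:
    """token_ids: LongTensor of shape (train_size, training_seqlen), e.g. (4096, 2048)."""
    dataset = TensorDataset(token_ids)
    kwargs = dict(batch_size=batch_size, shuffle=True, pin_memory=True)
    if num_workers > 0:
        # Worker processes prefetch batches; persistent workers avoid the
        # per-epoch respawn cost that otherwise slows block-wise training.
        kwargs.update(num_workers=num_workers, persistent_workers=True, prefetch_factor=2)
    return DataLoader(dataset, **kwargs)

if __name__ == "__main__":
    calib = torch.randint(0, 32000, (4096, 2048))  # stand-in for the cached RedPajama samples
    device = "cuda" if torch.cuda.is_available() else "cpu"
    loader = make_calib_loader(calib)
    for (batch,) in loader:
        # pin_memory makes this host-to-device copy asynchronous (non_blocking)
        batch = batch.to(device, non_blocking=True)
        break
```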

@laomao0
I have fixed the data processing bottleneck. You can pull the latest code for faster training speed.