AkariAsai/self-rag

Some problems with run_long_form_static.py

pzwstudy opened this issue · 1 comment

  1. When I finished executing ASQA.sh, there was no result in my ASQA.out file:
    WARNING 05-09 14:07:49 config.py:467] Casting torch.bfloat16 to torch.float16.
    INFO 05-09 14:07:49 llm_engine.py:73] Initializing an LLM engine with config: model='/mnt/data/home/usera6k04/project/self-rag/llama2-7b', tokenizer='/mnt/data/home/usera6k04/project/self-rag/llama2-7b', tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=4096, download_dir='.cache', load_format=auto, tensor_parallel_size=1, quantization=None, enforce_eager=False, seed=0)
    INFO 05-09 14:08:20 llm_engine.py:223] # GPU blocks: 302, # CPU blocks: 512
    INFO 05-09 14:08:22 model_runner.py:394] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
    INFO 05-09 14:08:26 model_runner.py:437] Graph capturing finished in 4 secs.
  2. When I execute FactScore.sh, I get an error in my FactScore.error.out:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 1.80it/s]
Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 1.14it/s]

Traceback (most recent call last):
  File "run_long_form_static.py", line 441, in <module>
    main()
  File "run_long_form_static.py", line 380, in main
    "cat": item["cat"], "intermediate": intermediate["original_splitted_sentences"][0]})
KeyError: 'original_splitted_sentences'
```
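
For context on what the traceback shows: the lookup at line 380 raises because the dictionary stored in `intermediate` has no `original_splitted_sentences` entry for that example. Below is a minimal, self-contained sketch of a defensive guard that avoids the crash while debugging; `item` and `intermediate` mirror the names in the traceback, the sample values are invented for illustration only, and this is not the repository's official fix.

```python
# Hedged sketch of guarding the lookup that fails at run_long_form_static.py:380.
# The sample values below are invented purely to demonstrate the guard.
item = {"cat": ["example-category"]}
intermediate = {}  # simulates a run where 'original_splitted_sentences' was never populated

# Fall back to an empty sentence list when the key is absent or empty.
splitted = intermediate.get("original_splitted_sentences") or [[]]

record = {"cat": item["cat"], "intermediate": splitted[0]}
print(record)  # -> {'cat': ['example-category'], 'intermediate': []}
```

This only prevents the KeyError; it does not explain why the key is missing (presumably the generation step that populates `original_splitted_sentences` was never reached for that example).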

May I ask how to solve these issues?