defog-ai/sql-eval

Beam search is not supported for some models

Closed this issue · 2 comments

How to reproduce

The documentation for the microsoft/phi-2 model states:

> Remark: In the generation function, our model currently does not support beam search (num_beams > 1).

In other words, phi-2 does not support beam search.

 python -W ignore main.py \
  -q data/questions_gen.csv \
  -o "results/results.csv" \
  -g hf \
  -f "prompts/prompt.md" \
  -m microsoft/phi-2

Behaviour:

Running the command above raises the following error:

Traceback (most recent call last):
  File "/Users/xx/sql-eval/main.py", line 54, in <module>
    run_hf_eval(args)
  File "/Users/xx/sql-eval/eval/hf_runner.py", line 137, in run_hf_eval
    pipe(
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/pipelines/text_generation.py", line 208, in __call__
    return super().__call__(text_inputs, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1140, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1147, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1046, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/pipelines/text_generation.py", line 271, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 1797, in generate
    return self.beam_search(
           ^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 3255, in beam_search
    model_kwargs["past_key_values"] = self._temporary_reorder_cache(
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xx/sql-eval/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 2979, in _temporary_reorder_cache
    past_key_values.reorder_cache(beam_idx)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'InferenceParams' object has no attribute 'reorder_cache'

There might be several ways to fix it:

  • Add a new argument to main.py, such as --disable-beam-search, so the user can turn beam search off explicitly.
  • Inside hf_runner.py, run a small test generation in a try/except before processing the prompts, to check whether the model supports beam search. Sample code as below:
    try:
        # Probe with a short generation using num_beams > 1
        pipe("prepare to start the engine", max_new_tokens=300, do_sample=False, num_beams=2)
    except AttributeError:
        print("Model does not support num_beams > 1. Using num_beams=1")
        # Set a flag here to force num_beams to 1 for all subsequent prompts
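The probe idea above can be factored into a small helper. This is a minimal sketch, not code from the repository: the function name `resolve_num_beams` is hypothetical, and it assumes the pipeline raises `AttributeError` (as in the traceback above) when the model's cache implementation cannot be reordered for beam search.

```python
def resolve_num_beams(pipe, requested_beams: int) -> int:
    """Return requested_beams if the model supports beam search,
    otherwise fall back to 1 (greedy decoding)."""
    if requested_beams <= 1:
        return requested_beams
    try:
        # Cheap probe: generate a single token with num_beams > 1.
        # Models whose custom cache lacks reorder_cache (e.g. phi-2's
        # InferenceParams) raise AttributeError here.
        pipe("probe", max_new_tokens=1, do_sample=False,
             num_beams=requested_beams)
        return requested_beams
    except AttributeError:
        print("Model does not support num_beams > 1. Using num_beams=1")
        return 1
```

The runner could then call this once at startup and pass the resolved value into every subsequent `pipe(...)` call, so the probe cost is paid only once per evaluation run.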

Thanks for opening the issue – that's a great point and a useful suggestion.

Feel free to make a PR with the try/except approach if you'd like! If not, we will add that today or tomorrow.