google-deepmind/opro

AttributeError: 'NoneType' object has no attribute 'lower'

python optimize_instructions.py --optimizer="gpt-3.5-turbo" --scorer="text-bison" --instruction_pos="Q_end" --dataset="gsm8k" --task="train" --palm_api_key="..." --openai_api_key="..."

I ran the command above and hit the error below on the gsm8k dataset. How should I fix it?

File "/root/autodl-tmp/LLMasop/opro/evaluation/eval_utils.py", line 802, in evaluate_single_instruction
choices = list(
File "/root/autodl-tmp/LLMasop/opro/evaluation/eval_utils.py", line 804, in
lambda x, y: _parse_prediction(
File "/root/autodl-tmp/LLMasop/opro/evaluation/eval_utils.py", line 794, in _parse_prediction
return metrics.get_normalized_prediction(
File "/root/autodl-tmp/LLMasop/opro/evaluation/metrics.py", line 210, in get_normalized_prediction
prediction_parsed = prediction.lower().strip()
AttributeError: 'NoneType' object has no attribute 'lower'
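
For context: the traceback shows that metrics.get_normalized_prediction calls .lower() on the model's raw answer, so a None answer from the serving API crashes it. Below is a minimal sketch of the failure and one possible guard; safe_normalize is a hypothetical helper, not part of the repo.

```python
# Sketch of a guard against a None answer from the serving API.
# safe_normalize is hypothetical; the repo's get_normalized_prediction
# assumes `prediction` is always a string.
def safe_normalize(prediction):
    if prediction is None:
        prediction = ""  # treat a missing answer as empty (scored as wrong)
    return prediction.lower().strip()

print(safe_normalize(None))    # "" instead of AttributeError
print(safe_normalize(" 42 "))  # "42"
```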

Hi @luochenxin, this seems to suggest that the raw_answers_to_parse object you get at

raw_answers_to_parse = (
has None elements (it should be a list of strings). Could you print out this variable to check its value?
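
One quick way to check, as a standalone sketch (the toy list stands in for the real raw_answers_to_parse):

```python
# Report the index and type of every non-string entry.
raw_answers_to_parse = ["The answer is 40.", None, "The answer is 7."]  # toy data
for i, ans in enumerate(raw_answers_to_parse):
    if not isinstance(ans, str):
        print(f"index {i}: {type(ans).__name__} -> {ans!r}")
# -> index 1: NoneType -> None
```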

Thank you for your suggestion. I printed its value and found that one of the elements in the list is None. I'm not sure why this happens; how can I fix it?

This sounds weird, especially when only one of the elements is None and the others are normal. To track down the error, could you also print a few more variables, like raw_answers_second_round at https://github.com/google-deepmind/opro/blob/e81b2f573ce4e15755c70c2535279d6fb940b4b7/opro/evaluation/eval_utils.py#L772C57-L772C81 and the raw_prompts_flattened that is sent to _prompt_a_list_in_parallel(), and ideally more variables before them? Basically, it would be useful to print the intermediate variables to see whether each step of the prompting pipeline works as expected.
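
For example, something like the following sketch, pasted right after the corresponding assignments in eval_utils.py (the toy values below only make the snippet runnable on its own):

```python
# Dump the pipeline's intermediate state to spot where None first appears.
raw_prompts_flattened = ["Q1 ...", "Q2 ..."]            # toy stand-in
raw_answers_second_round = ["The answer is 40.", None]  # toy stand-in

print("num prompts:", len(raw_prompts_flattened))
print("first prompt:", raw_prompts_flattened[0])
print("None indices in raw_answers_second_round:",
      [i for i, a in enumerate(raw_answers_second_round) if a is None])
```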

Following your suggestion, I printed raw_prompts_flattened and found no problem with it. But when I print the value of raw_answers at opro/evaluation/eval_utils.py, line 708, the element at index 30 is None every time. Its input is: "The gummy bear factory manufactures 300 gummy bears a minute. Each packet of gummy bears has 50 gummy bears inside. How long would it take for the factory to manufacture enough gummy bears to fill 240 packets, in minutes?\nLet's solve the problem."
I don't know what's causing this problem.
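
If the serving model simply returns nothing for that one prompt, one possible stopgap (not an official fix) is to replace None answers with an empty string before parsing, so the example is scored as wrong instead of crashing the run:

```python
# Stopgap: coerce missing answers to "" before they reach
# metrics.get_normalized_prediction. Toy data for illustration.
raw_answers = ["The answer is 40.", None, "The answer is 1200."]
raw_answers = ["" if a is None else a for a in raw_answers]
print(raw_answers)  # ['The answer is 40.', '', 'The answer is 1200.']
```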

Hi, have you solved this problem? I've run into it as well.

I think it may be a problem with Gemini. If I switch to gpt-3.5, the problem goes away.
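
If the root cause is the serving API occasionally returning an empty or blocked response for a particular prompt (e.g., a safety filter), a retry-with-fallback wrapper is another workaround. A sketch, assuming a call_model(prompt) function that may return None (both names are hypothetical):

```python
import time

def call_with_retries(call_model, prompt, max_retries=3, delay_sec=2):
    """Retry a flaky model call; fall back to "" so scoring doesn't crash."""
    for _ in range(max_retries):
        answer = call_model(prompt)
        if answer is not None:
            return answer
        time.sleep(delay_sec)  # brief backoff before retrying
    return ""  # give up: score this example as unanswered
```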