FudanSELab/ClassEval

Pass@1 greedy results are changing whenever I re-evaluate

Opened this issue · 1 comment

I'm using this command to evaluate Pass@1:

$ python evaluation.py --source_file_name GPT-4-Turbo_class_H_greedy --eval_data ClassEval_data --greedy 1
{
'class_partial_success': 0.58,
'class_success': 0.37,
'fun_partial_success': 0.8047808764940239,
'fun_success': 0.6613545816733067
}

After rerunning the same command:

{
'class_partial_success': 0.58,
'class_success': 0.36,
'fun_partial_success': 0.8047808764940239,
'fun_success': 0.6593625498007968
}
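
For what it's worth, the deltas between the two runs appear to correspond to exactly one class and exactly one function changing outcome. A minimal sketch of that arithmetic (the denominators 100 and 502 are my assumption, inferred from the reported fractions, not taken from evaluation.py):

# Sanity check on the deltas between the two runs (plain arithmetic only).
# Assumption: 100 classes and 502 evaluated functions as denominators,
# which is what the reported success rates appear to imply.
run1 = {'class_success': 0.37, 'fun_success': 0.6613545816733067}
run2 = {'class_success': 0.36, 'fun_success': 0.6593625498007968}

n_classes, n_funcs = 100, 502  # assumed totals

print(round((run1['class_success'] - run2['class_success']) * n_classes))  # -> 1 class
print(round((run1['fun_success'] - run2['fun_success']) * n_funcs))        # -> 1 function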

The issue you observed might be due to two main factors: the recent update to GPT-4 and the ongoing updates to our benchmark, which could result in discrepancies between the current test cases and those used in previous evaluations.