
I²EBench

Paper | Dataset Download

I²EBench: A Comprehensive Benchmark for Instruction-based Image Editing

Yiwei Ma, Jiayi Ji, Ke Ye, Weihuang Lin, Zhibin Wang, Yonghan Zheng, Qiang Zhou, Xiaoshuai Sun, Rongrong Ji

🌟Updates

  • [2024.9.25] Accepted by NeurIPS 2024 😋
  • [2024.12.4] Released the multi-round editing evaluation 🚀

πŸ”Overview

[Figure: overview of I²EBench]

Overview of I²EBench, an automated system for evaluating the quality of editing results produced by instruction-based image editing (IIE) models. We collected more than 2,000 images from public datasets and annotated them with corresponding original editing instructions. To diversify the instructions, we used ChatGPT to generate varied versions of each one. With the collected images and the original/diverse editing instructions, we ran existing IIE models to generate edited images. We then developed an evaluation methodology that automatically assesses how well the edited images adhere to the provided instructions across different dimensions. We also conducted a human evaluation to obtain human preferences over the editing results of the different IIE models. Finally, we analyzed the correlation between the automated and human evaluations, confirming that I²EBench aligns with human perception.
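
The correlation analysis in the last step can be reproduced in spirit with a few lines of code. The sketch below assumes per-model automated scores and human preference scores have already been collected as aligned lists; all numbers are illustrative placeholders, not benchmark results.

# Hedged sketch: check how well automated I2EBench scores track human
# preferences across models. The two lists are aligned per model and the
# values below are placeholders only.
from scipy.stats import spearmanr

automated = [0.62, 0.71, 0.68, 0.55]  # placeholder automated scores, one per model
human     = [0.58, 0.74, 0.70, 0.51]  # placeholder human preference scores, same order

rho, p_value = spearmanr(automated, human)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")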

📊Evaluation Results

[Figure: score_radar]

  • Comparison of radar charts of I²EBench scores across different dimensions using (a) original instructions and (b) diverse instructions.

[Figure: category_score]

  • Comparison of radar charts of I²EBench scores across different categories using (a) original instructions and (b) diverse instructions. The scores of all dimensions are normalized and averaged (see the sketch below).
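
The aggregation mentioned in the caption can be sketched as follows. This assumes min-max normalization of each dimension's scores across models followed by a mean over the dimensions in a category; the exact normalization and the dimension-to-category mapping used by the benchmark may differ, and all numbers are placeholders.

# Hedged sketch: min-max normalize each dimension's scores across models,
# then average the normalized scores over the dimensions of each category.
# Raw scores and the category mapping below are illustrative placeholders.
import numpy as np

raw_scores = {                            # dimension -> scores of N models
    "Deblurring":  [0.61, 0.74, 0.69],
    "HazeRemoval": [0.55, 0.70, 0.66],
    "Counting":    [0.30, 0.42, 0.38],
}
categories = {
    "LowLevel":  ["Deblurring", "HazeRemoval"],
    "HighLevel": ["Counting"],
}

def min_max(values):
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min()
    return (values - values.min()) / span if span > 0 else np.zeros_like(values)

normalized = {dim: min_max(vals) for dim, vals in raw_scores.items()}
for category, dims in categories.items():
    per_model = np.mean([normalized[d] for d in dims], axis=0)
    print(category, np.round(per_model, 3))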

πŸ“Folder Detail

.
└── EditBench
    ├── EditData                 ### dataset organized by editing dimension
    │   ├── BGReplacement
    │   ├── ColorAlteration
    │   ├── Counting
    │   ├── Deblurring
    │   ├── DirectionPerception
    │   ├── HazeRemoval
    │   ├── Lowlight
    │   ├── NoiseRemoval
    │   ├── ObjectRemoval
    │   ├── RainRemoval
    │   ├── RegionAccuracy
    │   ├── Replacement
    │   ├── ShadowRemoval
    │   ├── SnowRemoval
    │   ├── StyleAlteration
    │   └── WatermarkRemoval
    ├── EditEval                 ### 1. with `diverse` editing instructions
                                 ### 2. evaluation results of the 8 editing models (any2pix,
                                 ###    hive, hqedit, iedit, instruct-diffusion, instructpix2pix,
                                 ###    magicbrush, mgie) in every dimension
    │   ├── BGReplacement        # includes evaluation results of the 8 editing models
    │   ├── ColorAlteration      # includes evaluation results of the 8 editing models
    │   ├── Counting             # ...
    │   ├── Deblurring
    │   ├── DirectionPerception
    │   ├── HazeRemoval
    │   ├── Lowlight
    │   ├── NoiseRemoval
    │   ├── ObjectRemoval
    │   ├── RainRemoval
    │   ├── RegionAccuracy
    │   ├── Replacement
    │   ├── ShadowRemoval
    │   ├── SnowRemoval
    │   ├── StyleAlteration
    │   └── WatermarkRemoval
    ├── EditEval_ori             ### 1. with `original` editing instructions
                                 ### 2. evaluation results of the 8 editing models (any2pix,
                                 ###    hive, hqedit, iedit, instruct-diffusion, instructpix2pix,
                                 ###    magicbrush, mgie) in every dimension
    │   ├── BGReplacement        # includes evaluation results of the 8 editing models
    │   ├── ColorAlteration      # includes evaluation results of the 8 editing models
    │   ├── Counting             # ...
    │   ├── Deblurring
    │   ├── DirectionPerception
    │   ├── HazeRemoval
    │   ├── Lowlight
    │   ├── NoiseRemoval
    │   ├── ObjectRemoval
    │   ├── RainRemoval
    │   ├── RegionAccuracy
    │   ├── Replacement
    │   ├── ShadowRemoval
    │   ├── SnowRemoval
    │   ├── StyleAlteration
    │   └── WatermarkRemoval
    ├── EditRank                 ### 1. with `diverse` editing instructions
                                 ### 2. rank results of the 8 editing models (any2pix,
                                 ###    hive, hqedit, iedit, instruct-diffusion, instructpix2pix,
                                 ###    magicbrush, mgie) in every dimension, derived from the evaluation results
    │   ├── BGReplacement.json   # rank results of the 8 editing models
    │   ├── ColorAlteration.json # rank results of the 8 editing models
    │   ├── Counting.json        # ...
    │   ├── Deblurring.json
    │   ├── DirectionPerception.json
    │   ├── HazeRemoval.json
    │   ├── Lowlight.json
    │   ├── NoiseRemoval.json
    │   ├── ObjectRemoval.json
    │   ├── RainRemoval.json
    │   ├── RegionAccuracy.json
    │   ├── Replacement.json
    │   ├── ShadowRemoval.json
    │   ├── SnowRemoval.json
    │   ├── StyleAlteration.json
    │   └── WatermarkRemoval.json
    ├── EditRank_ori             ### 1. with `original` editing instructions
                                 ### 2. rank results of the 8 editing models (any2pix,
                                 ###    hive, hqedit, iedit, instruct-diffusion, instructpix2pix,
                                 ###    magicbrush, mgie) in every dimension, derived from the evaluation results
    │   ├── BGReplacement.json   # rank results of the 8 editing models
    │   ├── ColorAlteration.json # rank results of the 8 editing models
    │   ├── Counting.json        # ...
    │   ├── Deblurring.json
    │   ├── DirectionPerception.json
    │   ├── HazeRemoval.json
    │   ├── Lowlight.json
    │   ├── NoiseRemoval.json
    │   ├── ObjectRemoval.json
    │   ├── RainRemoval.json
    │   ├── RegionAccuracy.json
    │   ├── Replacement.json
    │   ├── ShadowRemoval.json
    │   ├── SnowRemoval.json
    │   ├── StyleAlteration.json
    │   └── WatermarkRemoval.json
    ├── EditResult               ### 1. with `diverse` editing instructions
                                 ### 2. edited images produced by the 8 editing models (any2pix,
                                 ###    hive, hqedit, iedit, instruct-diffusion, instructpix2pix,
                                 ###    magicbrush, mgie) in every dimension
    │   ├── BGReplacement
    │   ├── ColorAlteration
    │   ├── Counting
    │   ├── Deblurring
    │   ├── DirectionPerception
    │   ├── HazeRemoval
    │   ├── Lowlight
    │   ├── NoiseRemoval
    │   ├── ObjectRemoval
    │   ├── RainRemoval
    │   ├── RegionAccuracy
    │   ├── Replacement
    │   ├── ShadowRemoval
    │   ├── SnowRemoval
    │   ├── StyleAlteration
    │   └── WatermarkRemoval
    ├── EditResult_ori           ### 1. with `original` editing instructions
                                 ### 2. edited images produced by the 8 editing models (any2pix,
                                 ###    hive, hqedit, iedit, instruct-diffusion, instructpix2pix,
                                 ###    magicbrush, mgie) in every dimension
    │   ├── BGReplacement
    │   ├── ColorAlteration
    │   ├── Counting
    │   ├── Deblurring
    │   ├── DirectionPerception
    │   ├── HazeRemoval
    │   ├── Lowlight
    │   ├── NoiseRemoval
    │   ├── ObjectRemoval
    │   ├── RainRemoval
    │   ├── RegionAccuracy
    │   ├── Replacement
    │   ├── ShadowRemoval
    │   ├── SnowRemoval
    │   ├── StyleAlteration
    │   └── WatermarkRemoval
    ├── eval_scripts                                 ### evaluation scripts (see the sketch after this tree)
    │   ├── high_level_eval_stage1.py                ## evaluation for high-level dimensions, e.g. BGReplacement;
                                                     #  stage 1: an LVLM (e.g. GPT-4V) is asked questions (from the
                                                     #  JSON files in `EditData`) about the edited images, and the
                                                     #  raw `VLM_judgement` outputs are saved in `EvalData`
    │   ├── high_level_eval_stage2_final_judge.py    #  stage 2: an LLM (e.g. GPT-4 Turbo) with a designed template
                                                     #  turns the raw outputs into the more stable `final_judgement`
    │   ├── low_level_eval.py                        ## evaluation for low-level dimensions, e.g. Deblurring
    │   ├── metrics_utils                            ## utilities for evaluation, e.g. GPT-4V, GPT-4 Turbo, CLIP, SSIM
    │   ├── sample_rank_gen.py                       ## generation script for `EditRank_ori` and `EditRank`
    │   ├── summary.json                             ## generated by `summary.py`
    │   ├── summary_ori.json                         ## generated by `summary.py`
    │   ├── summary.py                               ## generation script for `summary.json` and `summary_ori.json`;
                                                     #  reports the metric scores of every model in every dimension
    │   ├── summary_model_type_avg_score.json        ## generated by `summary_model_type_avg_score.py`
    │   └── summary_model_type_avg_score.py          ## generation script for `summary_model_type_avg_score.json`;
                                                     #  reports the average metric score of each editing model
                                                     #  across dimensions
    └── readme.md
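
For intuition, here is a minimal sketch of the kind of check `low_level_eval.py` performs for a low-level dimension such as Deblurring: compare the edited image against its ground-truth reference with a full-reference metric like SSIM. This is a simplified stand-in, not the repository's metric code; the file paths are placeholders and the two images are assumed to have the same resolution.

# Hedged sketch: score one low-level edit (e.g. Deblurring) with SSIM against
# a ground-truth reference. Paths are placeholders; the benchmark's real
# metric implementations live in eval_scripts/metrics_utils.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def load_rgb(path):
    return np.array(Image.open(path).convert("RGB"))

edited    = load_rgb("EditResult/Deblurring/example_edited.png")  # placeholder path
reference = load_rgb("EditData/Deblurring/example_gt.png")        # placeholder path

score = structural_similarity(edited, reference, channel_axis=2, data_range=255)
print(f"SSIM = {score:.4f}")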

🤔How to evaluate with my own editing model

Check the repository's guide on how to evaluate with your own editing model; a rough sketch of the workflow follows below.
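
The sketch assumes you mirror the `EditResult` layout: run your model over the source images of each dimension in `EditData`, save the outputs under the same file names, and then run the scripts in `eval_scripts`. The output folder name, the instruction dict, and `edit_with_my_model` are placeholders; the real instructions come from the JSON files in `EditData`.

# Hedged sketch: write your model's outputs into an EditResult-style layout
# so the existing evaluation scripts can consume them.
# `edit_with_my_model`, OUT_ROOT, and the instruction dict are placeholders.
import os
from PIL import Image

def edit_with_my_model(image, instruction):
    # identity placeholder -- replace with a call to your own IIE model
    return image

DATA_ROOT = "EditBench/EditData"
OUT_ROOT  = "EditBench/EditResult_mymodel"           # assumed output location
dimension = "WatermarkRemoval"
instructions = {"0001.jpg": "remove the watermark"}  # placeholder; use the EditData JSONs

os.makedirs(os.path.join(OUT_ROOT, dimension), exist_ok=True)
for file_name, instruction in instructions.items():
    source = Image.open(os.path.join(DATA_ROOT, dimension, file_name)).convert("RGB")
    edited = edit_with_my_model(source, instruction)
    edited.save(os.path.join(OUT_ROOT, dimension, file_name))  # keep the original file name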

πŸ–ŠοΈ Citation

@inproceedings{ma2024i2ebench,
  title={I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing},
  author={Ma, Yiwei and Ji, Jiayi and Ye, Ke and Lin, Weihuang and Zheng, Yonghan and Zhou, Qiang and Sun, Xiaoshuai and Ji, Rongrong and others},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}