🚀 Note: This repository is no longer actively updated. For the latest metrics, consider exploring: https://github.com/chaofengc/IQA-PyTorch
📧 Feel free to contact: ryanxingql@gmail.com
- v3: Added MS-SSIM index, BRISQUE, and PIQE; reimplemented PSNR and SSIM in Python; removed Ma et al. and PI due to their low computational efficiency; removed FID, as it is not an image quality evaluator.
- v2: Unified scripts for all algorithms.
- v1: Initial formal release.
metric | class | description | better | range | ref |
---|---|---|---|---|---|
Peak signal-to-noise ratio (PSNR) | FR | Ratio of the maximum pixel intensity to the power of the distortion. | higher | [0, inf) | [WIKI] |
Structural similarity (SSIM) index | FR | Local similarity of luminance, contrast, and structure of two images. | higher | (?, 1] | [paper] [WIKI] |
Multi-scale structural similarity (MS-SSIM) index | FR | Based on SSIM; combines luminance information at the highest-resolution scale with structure and contrast information at several down-sampled resolutions, or scales. | higher | (?, 1] | [paper] [code] |
Learned perceptual image patch similarity (LPIPS) | FR | Obtain the L2 distance between AlexNet/SqueezeNet/VGG activations of the reference and distorted images; train a predictor to map the distance to a similarity score. Trainable. | lower | [0, ?) | [paper] [official repo] |
Blind/referenceless image spatial quality evaluator (BRISQUE) | NR | Model mean subtracted contrast normalized (MSCN) features with Gaussian distributions; obtain 36-dim Gaussian parameters; train an SVM to map the feature space to a quality score. | lower | [0, ?) | [paper] |
Natural image quality evaluator (NIQE) | NR | Mahalanobis distance between two multivariate Gaussian models of 36-dim features from natural (training) patches and sharp patches of the input image. | lower | [0, ?) | [paper] |
Perception based image quality evaluator (PIQE) | NR | Similar to NIQE, but block-wise. PIQE is less computationally efficient than NIQE, but it provides local quality measures in addition to a global quality score. | lower | [0, 100] | [paper] |
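As a quick illustration of the two simplest FR metrics above, here is a minimal sketch using scikit-image and OpenCV (both installed by the commands below). The file names are placeholders, and the repository's own scripts may differ in details:

```python
# Minimal PSNR/SSIM sketch with scikit-image 0.18.x; paths are placeholders.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

src = cv2.imread("src.png")  # reference (pristine) image, uint8 BGR
tar = cv2.imread("tar.png")  # target (e.g., enhanced) image, uint8 BGR

psnr = peak_signal_noise_ratio(src, tar, data_range=255)
ssim = structural_similarity(src, tar, multichannel=True, data_range=255)
print(f"PSNR: {psnr:.2f} dB | SSIM: {ssim:.4f}")
```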
Notations:
- FR: Full-reference quality metric.
- NR: No-reference quality metric.
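For context on the BRISQUE and NIQE rows above: the MSCN transform normalizes each pixel by a Gaussian-weighted local mean $\mu(i,j)$ and standard deviation $\sigma(i,j)$, with a small constant $C$ to avoid division by zero:

$$\hat{I}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + C}$$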
Archived:

metric | class | description | better | range | ref | where |
---|---|---|---|---|---|---|
Ma et al. (MA) | NR | Extract features in the DCT, wavelet, and PCA domains; train a regression forest to map the feature space to a quality score. Very slow! | higher | [0, 10] | [paper] [official repo] | [v2] |
perceptual index (PI) | NR | 0.5 * ((10 - MA) + NIQE). Very slow due to MA! | lower | [0, ?) | [paper] [official repo] | [v2] |
Fréchet inception distance (FID) | FR | Wasserstein-2 distance between two Gaussian models of InceptionV3 activations (fed with the reference and distorted image data sets, respectively). | lower | [0, ?) | [paper] [cleanfid repo] | [v2] |
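For reference, the Wasserstein-2 (Fréchet) distance in the FID row has a closed form for two Gaussians $\mathcal{N}(\mu_1, \Sigma_1)$ and $\mathcal{N}(\mu_2, \Sigma_2)$:

$$d^2 = \lVert \mu_1 - \mu_2 \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_1 + \Sigma_2 - 2\,(\Sigma_1 \Sigma_2)^{1/2}\right)$$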
Subjective quality metric(s):

metric | description | better | range | ref |
---|---|---|---|---|
mean opinion score (MOS) | Image rating under certain standards. | higher | [0, 100] | [BT.500] |
degradation/difference/differential MOS (DMOS) | Difference between the MOS values of the reference and distorted images. | lower | [0, 100] | [ref1] [ref2] |
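In its simplest form, DMOS is just the per-image difference of the two ratings (published databases often additionally normalize the raw difference scores):

$$\mathrm{DMOS} = \mathrm{MOS}_{\mathrm{ref}} - \mathrm{MOS}_{\mathrm{dist}}$$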
```bash
conda create -n iqa python=3.7 -y && conda activate iqa
python -m pip install pyyaml opencv-python tqdm pandas

# for psnr/ssim
python -m pip install scikit-image==0.18.2

# for ms-ssim/lpips
# tested under CUDA 10.x
python -m pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

# for lpips
python -m pip install lpips==0.1.3
```
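For orientation, a minimal sketch of how the pinned `lpips` package is typically invoked (paths are placeholders; this shows the package's usual API, not the toolbox's own wrapper):

```python
# Minimal LPIPS sketch, assuming lpips==0.1.3 and torch installed as above.
# lpips expects NCHW tensors scaled to [-1, 1]; paths are placeholders.
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, as in the table above
img0 = lpips.im2tensor(lpips.load_image('src.png'))  # 1x3xHxW in [-1, 1]
img1 = lpips.im2tensor(lpips.load_image('tar.png'))

with torch.no_grad():
    dist = loss_fn(img0, img1)
print(float(dist))  # lower = more similar
```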
For BRISQUE and NIQE, MATLAB >= R2017b is required; for PIQE, MATLAB >= R2018b is required.
If you want `main.py` to run the MATLAB scripts, i.e., call MATLAB from Python, you should install the MATLAB Engine API for Python in the Conda environment. Check here. My solution:

```bash
# given linux
cd "matlabroot/extern/engines/python"  # e.g., ~/Matlab/R2019b/extern/engines/python
conda activate iqa && python setup.py install
```
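A quick way to verify the engine install from Python (a sanity-check sketch only; the actual MATLAB calls are issued by `main.py`):

```python
# Sanity check for the MATLAB Engine API in the "iqa" environment.
import matlab.engine

eng = matlab.engine.start_matlab()
print(eng.sqrt(16.0))  # expect 4.0 if the engine is wired up correctly
eng.quit()
```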
- Edit `opt.yml`.
- Run: `conda activate iqa && [CUDA_VISIBLE_DEVICES=0] python main.py -case div2k_qf10 [-opt opt.yml -clean]`. `[<args>]` are optional.
- Output: CSV log files at `./logs/`.
Note:

- `tar`: target, e.g., enhanced compressed images.
- `dst`: distorted, e.g., JPEG-compressed images.
- `src`: source, e.g., raw/pristine images.
- The list of the evaluated images is based on `tar_dir`.
We adopt Apache License v2.0. For other licenses, please refer to the references.
If you find this repository helpful, you may cite:
```bibtex
@misc{2021xing3,
  author = {Qunliang Xing},
  title = {Image Quality Assessment Toolbox},
  howpublished = "\url{https://github.com/ryanxingql/image-quality-assessment-toolbox}",
  year = {2021},
  note = "[Online; accessed 11-April-2021]"
}
```