This repository is the codebase for a project focused on applying effective neural network architectures to Depth Estimation, along with researching the best quantization methods to reduce their size. Project documentation can be found here.
Installation with pip:

```bash
# clone project
git clone https://github.com/lukasz-staniszewski/quantized-depth-estimation
cd quantized-depth-estimation

# [OPTIONAL] create conda environment
conda create -n myenv python=3.10.13
conda activate myenv

# install pytorch according to instructions
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt
```
Installation with conda:

```bash
# clone project
git clone https://github.com/lukasz-staniszewski/quantized-depth-estimation
cd quantized-depth-estimation

# create conda environment and install dependencies
conda env create -f environment.yaml -n myenv

# activate conda environment
conda activate myenv
```
Train the model with the default configuration:

```bash
export PYTHONPATH=$PWD

# train on CPU
python src/train.py trainer=cpu

# train on GPU
python src/train.py trainer=gpu
```
Train a model with a chosen experiment configuration from `configs/experiment/`:

```bash
python src/train.py experiment=experiment_name.yaml
```
You can override any parameter from the command line like this:

```bash
python src/train.py trainer.max_epochs=20 data.batch_size=64
```
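These overrides work because the entry points are driven by Hydra. Below is a minimal sketch of what `src/train.py` might look like, assuming the standard lightning-hydra-template layout; the decorator arguments and config names are assumptions, not code taken from this repository:

```python
# Hypothetical sketch of the Hydra entry point; paths and config names are
# assumptions based on the lightning-hydra-template layout, not this repo's code.
import hydra
from omegaconf import DictConfig


@hydra.main(version_base="1.3", config_path="../configs", config_name="train.yaml")
def main(cfg: DictConfig) -> None:
    # Hydra merges configs/train.yaml with the selected `experiment=...` file
    # and then applies dotted CLI overrides such as `trainer.max_epochs=20`.
    print(cfg.trainer.max_epochs)


if __name__ == "__main__":
    main()
```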
Evaluate a trained checkpoint:

```bash
python src/eval.py ckpt_path=<PATH>
```
Configure the quantization settings inside the quantization config file (`configs/quantize.yaml`) and run:

```bash
python src/quantize.py ckpt_path=<PATH>
```
You can easily set up your quantization scenario there:

```yaml
...
inference_speed: True  # set to True if you want to check the quantized model's inference speed
quantization:
  methods:
    # - "fuse_bn"  # fuse batch norm
    - "ptq"  # post-training quantization
    - "qat"  # quantization-aware training
  ptq:
    batches_limit: 250  # max number of batches for PTQ calibration
  qat:
    max_epochs: 10  # max number of epochs for QAT
  quant_config:
    dummy_input_shape: [1, 3, 224, 224]
    is_per_tensor: False  # True for per-tensor quantization, False if you prefer per-channel
    is_asymmetric: True
    backend: "qnnpack"  # 'qnnpack' for mobile devices or 'fbgemm' for servers
    disable_requantization_for_cat: True
    use_cle: True  # whether to apply Cross-Layer Equalization before doing PTQ/QAT
    overwrite_set_ptq: True  # if you don't use ptq, set it to False
```
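These keys correspond closely to TinyNeuralNetwork's quantizer options (see the example links at the end of this README). As a rough illustration only, here is a minimal PTQ sketch assuming TinyNeuralNetwork as the quantization backend; the toy model and random calibration data are placeholders, not this repo's code:

```python
# Minimal PTQ sketch assuming TinyNeuralNetwork (toy model and data are placeholders).
import torch
import torch.nn as nn
from tinynn.graph.quantization.quantizer import PostQuantizer

# stand-in for the real depth-estimation network
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
dummy_input = torch.randn(1, 3, 224, 224)  # cf. quant_config.dummy_input_shape

# the config dict mirrors the quant_config keys above
quantizer = PostQuantizer(
    model,
    dummy_input,
    work_dir="out",
    config={
        "asymmetric": True,                     # cf. is_asymmetric
        "per_tensor": False,                    # cf. is_per_tensor
        "backend": "qnnpack",                   # cf. backend
        "disable_requantization_for_cat": True,
    },
)
ptq_model = quantizer.quantize()  # rewrites the graph and inserts observers

# calibrate on a limited number of batches (cf. ptq.batches_limit)
ptq_model.eval()
with torch.no_grad():
    for _ in range(250):
        ptq_model(torch.randn(4, 3, 224, 224))

# convert to an actual int8 quantized model
with torch.no_grad():
    ptq_model.cpu()
    torch.backends.quantized.engine = quantizer.backend
    quantized_model = quantizer.convert(ptq_model)
```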
For example, train a model:

```bash
python src/train.py experiment=nyu_efffnet
```
Download the `quantize.yaml` file from the GitHub repository and put it into the `configs/` directory.

Run:

```bash
python src/quantize.py ckpt_path=<CORRECT RUN PATH>/checkpoints/epoch_023.ckpt
```
Check these TinyNeuralNetwork example scripts (a condensed QAT sketch in their style follows the list):
- https://github.com/alibaba/TinyNeuralNetwork/blob/b6c78946d09b853071f55fb9b481ff632ea9568c/examples/quantization/specific/vit/vit_post.py
- https://github.com/alibaba/TinyNeuralNetwork/blob/b6c78946d09b853071f55fb9b481ff632ea9568c/examples/quantization/specific/mobileone/post.py
- https://github.com/alibaba/TinyNeuralNetwork/blob/b6c78946d09b853071f55fb9b481ff632ea9568c/examples/quantization/specific/mobileone/qat.py
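For orientation, here is a condensed QAT flow in the style of the `mobileone/qat.py` example above, again assuming TinyNeuralNetwork; the toy model, data, and loss are placeholders, not this repo's code:

```python
# Minimal QAT sketch in the style of the linked TinyNeuralNetwork examples
# (toy model, data, and loss are placeholders).
import torch
import torch.nn as nn
from tinynn.graph.quantization.quantizer import QATQuantizer

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
dummy_input = torch.randn(1, 3, 224, 224)

quantizer = QATQuantizer(
    model,
    dummy_input,
    work_dir="out",
    config={"asymmetric": True, "per_tensor": False, "backend": "qnnpack"},
)
qat_model = quantizer.quantize()  # rewrites the graph and inserts fake-quant nodes

optimizer = torch.optim.SGD(qat_model.parameters(), lr=1e-3)
qat_model.train()
for _ in range(10):  # cf. qat.max_epochs
    out = qat_model(torch.randn(4, 3, 224, 224))
    loss = out.float().pow(2).mean()  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# convert the fake-quantized model into an actual int8 model
with torch.no_grad():
    qat_model.eval()
    qat_model.cpu()
    torch.backends.quantized.engine = quantizer.backend
    quantized_model = quantizer.convert(qat_model)
```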