# KRF

## Installation
- Install CUDA 10.1 / 10.2
- Set up the python3 environment from requirement.txt:

  ```shell
  pip3 install -r requirement.txt
  ```
- Install apex:

  ```shell
  git clone https://github.com/NVIDIA/apex
  cd apex
  export TORCH_CUDA_ARCH_LIST="6.0;6.1;6.2;7.0;7.5"  # set the target architecture manually, as suggested in https://github.com/NVIDIA/apex/issues/605#issuecomment-554453001
  pip3 install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
  cd ..
  ```
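
  A quick way to confirm the CUDA extensions built is to import them; a minimal check (`amp_C` is the extension module apex compiles when `--cuda_ext` succeeds):

  ```python
  # Sanity check that apex installed with its CUDA extensions.
  import torch
  from apex import amp  # apex's mixed-precision API
  import amp_C          # only importable if the --cuda_ext build succeeded

  print("CUDA available:", torch.cuda.is_available())
  ```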
- Install normalSpeed, a fast and light-weight normal map estimator:

  ```shell
  git clone https://github.com/hfutcgncas/normalSpeed.git
  cd normalSpeed/normalSpeed
  python3 setup.py install --user
  cd ..
  ```
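
  A minimal usage sketch, assuming the `depth_normal` signature used by FFB6D's data loaders (depth in millimeters, focal lengths, kernel size, distance and difference thresholds, and a normal-orientation flag); the intrinsics below are illustrative:

  ```python
  # Sketch: estimate a normal map from a depth image with normalSpeed.
  import numpy as np
  import normalSpeed

  depth_mm = np.zeros((480, 640), dtype=np.uint16)  # depth in mm, e.g. from a dataset PNG
  fx, fy = 572.4114, 573.5704                       # illustrative camera intrinsics

  # k_size=5, distance_threshold=2000, difference_threshold=20,
  # point_into_surface=False -- the values FFB6D's loaders use.
  normals = normalSpeed.depth_normal(depth_mm, fx, fy, 5, 2000, 20, False)
  print(normals.shape)  # (480, 640, 3) float32 normal map
  ```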
- Install tkinter through `sudo apt install python3-tk`
- Compile the chamfer distance module:

  ```shell
  cd krf/utils/distance
  python setup.py install
  ```
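
  For reference, the chamfer distance this module accelerates can be written in a few lines of plain PyTorch; a sketch useful for sanity-checking the compiled version (the compiled op's exact Python interface lives in `krf/utils/distance`):

  ```python
  # Pure-PyTorch chamfer distance (two-sided mean nearest-neighbor distance).
  import torch

  def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
      """a: (N, 3) and b: (M, 3) point clouds."""
      d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
      return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

  print(chamfer_distance(torch.rand(1024, 3), torch.rand(2048, 3)))
  ```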
- Install KNN-CUDA:

  ```shell
  cd KNN-CUDA
  make
  make install
  ```
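
  Once built, neighbor queries run from Python; a minimal sketch assuming the `knn_cuda.KNN` interface (batch-first tensors with `transpose_mode=True`):

  ```python
  # Sketch: k-nearest-neighbor lookup on the GPU with KNN-CUDA.
  import torch
  from knn_cuda import KNN

  knn = KNN(k=4, transpose_mode=True)
  ref = torch.rand(1, 1024, 3).cuda()   # reference cloud (B, N, 3)
  query = torch.rand(1, 256, 3).cuda()  # query points (B, M, 3)
  dist, idx = knn(ref, query)           # (B, M, 4) distances and indices
  print(idx.shape)
  ```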
- Compile RandLA-Net operators:

  ```shell
  cd krf/models/RandLA/
  sh compile_op.sh
  ```
## Create Dataset
- LineMOD: Download the preprocessed LineMOD dataset from the onedrive link or google drive link (refer to DenseFusion). Unzip it and link the unzipped `Linemod_preprocessed/` to `krf/datasets/linemod/Linemod_preprocessed`:

  ```shell
  ln -s path_to_unzipped_Linemod_preprocessed krf/datasets/linemod/
  ```

  Generate rendered and fused data following raster_triangle.
- YCB-Video: Download the YCB-Video Dataset from PoseCNN. Unzip it and link the unzipped `YCB_Video_Dataset` to `krf/datasets/ycb/YCB_Video_Dataset`:

  ```shell
  ln -s path_to_unzipped_YCB_Video_Dataset krf/datasets/ycb/
  ```
- You can download the pretrained complete networks and generated data here. Then generate a colored mesh point cloud for each object by:

  ```shell
  python generate_color_pts.py
  ```
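
  For intuition, a colored mesh point cloud is just points sampled on the object mesh, each carrying the mesh's RGB at that point. A rough open3d sketch of the idea (not the project's `generate_color_pts.py`; the mesh path is a placeholder):

  ```python
  # Sketch: sample a colored point cloud from an object mesh with open3d.
  import numpy as np
  import open3d as o3d

  mesh = o3d.io.read_triangle_mesh("obj_01.ply")  # placeholder mesh path
  pcd = mesh.sample_points_uniformly(number_of_points=8192)
  pts = np.asarray(pcd.points)  # (8192, 3) xyz
  rgb = np.asarray(pcd.colors) if pcd.has_colors() else np.zeros_like(pts)
  np.savetxt("obj_01_color_pts.txt", np.hstack([pts, rgb]))
  ```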
- Generate FFB6D estimation results: download the pretrained models FFB6D-LineMOD and FFB6D-YCB, and move them to `train_log/linemod/checkpoints/` or `train_log/ycb/checkpoints/` respectively. Then modify `generate_ds.sh` and generate the estimation results by:

  ```shell
  bash generate_ds.sh
  ```
## Training
- To train the network on the YCB Dataset, run the following command:

  ```shell
  bash train_ycb_refine_pcn.sh
  ```
- To train the network on the LineMOD Dataset, run the following command:

  ```shell
  # commands in train_lm_refine_pcn.sh
  n_gpu=6
  cls='ape'
  #ckpt_mdl="/home/zhanhz/FFB6D/ffb6d/train_log/linemod/checkpoints/${cls}/FFB6D_${cls}_REFINE_best.pth.tar"
  python3 -m torch.distributed.launch --nproc_per_node=$n_gpu train_lm_refine_pcn.py --gpus=$n_gpu --cls=$cls #-checkpoint $ckpt_mdl
  # end
  bash train_lm_refine_pcn.sh
  ```
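
The training scripts launch one process per GPU through `torch.distributed.launch`; a minimal sketch of the per-process setup that pattern implies (illustrative only; the actual argument handling lives in the repo's `train_*_refine_pcn.py` scripts):

```python
# Sketch: per-process setup under torch.distributed.launch (one process per GPU).
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
args, _ = parser.parse_known_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl")
print(f"rank {dist.get_rank()} of {dist.get_world_size()}")
```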
## Evaluation
- To evaluate our method on the YCB Dataset, run the following command:

  ```shell
  python ycb_refine_test.py -gpu=0 -ckpt=CHECKPOINT_PATH -use_pcld -use_rgb
  ```
- To evaluate our method on the Occlusion LineMOD Dataset, run the following command for one class:

  ```shell
  python lm_refine_test.py -gpu=0 -ckpt=CHECKPOINT_PATH -cls='ape' -use_pcld -use_rgb
  ```

  or evaluate all classes by:

  ```shell
  bash test_occ_icp.sh
  ```
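
For reference, LineMOD-style benchmarks are usually scored with the ADD metric: the mean distance between model points transformed by the predicted pose and by the ground-truth pose. A sketch of the definition (the repo's test scripts compute their metrics internally):

```python
# Sketch: the ADD pose-error metric over a set of 3D model points.
import numpy as np

def add_metric(pts, R_pred, t_pred, R_gt, t_gt):
    """pts: (N, 3) model points; R_*: (3, 3) rotations; t_*: (3,) translations."""
    p_pred = pts @ R_pred.T + t_pred
    p_gt = pts @ R_gt.T + t_gt
    return np.linalg.norm(p_pred - p_gt, axis=1).mean()

pts = np.random.rand(1000, 3)
R = np.eye(3)
print(add_metric(pts, R, np.zeros(3), R, np.array([0.01, 0.0, 0.0])))  # ~0.01
```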