In this project, we present a novel neural representation for light field content and train the neural light field on the lowtoy dataset. We use a fully-connected neural network with different types of encoders to map 4D light field coordinates to pixel values.
The following concrete problems have been solved:
- Implement the neural light field renderer LF_network. We use a combination of SIREN and a Gegenbauer encoder as our network architecture. We also tried other encoders such as positional encoding and hash encoding, but the Gegenbauer encoder produces the best training views and novel views (a sketch of this architecture follows this list).
- Evaluate our designs by showing the generated training views and novel views, together with results produced by the other encoding methods.
- Implement translational motion of the virtual viewing camera along the $x$, $y$, and $z$ directions.
- Implement refocusing and aperture-size changes by reintroducing the effect of disparity (a refocusing sketch is given after the interpolator command below).
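The sketch below illustrates what such a Gegenbauer-encoded SIREN might look like in PyTorch. It is only a minimal sketch under assumed details: the class and function names are hypothetical, the Gegenbauer degree is arbitrary, and the handling of --alpha and --in_feature_ratio in the real LF_network may differ.

```python
import torch
import torch.nn as nn

def gegenbauer_encode(coords, degree=8, alpha=0.5):
    """Expand each coordinate (assumed in [-1, 1]) into Gegenbauer
    polynomials C_0..C_{degree-1} via the three-term recurrence."""
    polys = [torch.ones_like(coords), 2.0 * alpha * coords]
    for n in range(2, degree):
        polys.append((2.0 * coords * (n + alpha - 1.0) * polys[-1]
                      - (n + 2.0 * alpha - 2.0) * polys[-2]) / n)
    return torch.cat(polys[:degree], dim=-1)           # (batch, 4 * degree)

class SineLayer(nn.Module):
    """SIREN layer: linear map followed by sin(w0 * x).
    (The SIREN-specific weight initialization is omitted for brevity.)"""
    def __init__(self, in_features, out_features, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class LightFieldNet(nn.Module):
    """Maps encoded 4D light-field coordinates (x, y, u, v) to RGB."""
    def __init__(self, degree=8, alpha=0.5, hidden=256, num_layers=4):
        super().__init__()
        self.degree, self.alpha = degree, alpha
        dims = [4 * degree] + [hidden] * num_layers
        self.body = nn.Sequential(
            *[SineLayer(i, o) for i, o in zip(dims[:-1], dims[1:])])
        self.head = nn.Linear(hidden, 3)
    def forward(self, coords):                          # coords: (batch, 4)
        return self.head(self.body(
            gegenbauer_encode(coords, self.degree, self.alpha)))
```

A batch of coordinates normalized to [-1, 1] can then be rendered with, e.g., `rgb = LightFieldNet()(torch.rand(1024, 4) * 2 - 1)`.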
python preprocess.py --img_dir ./data/lowtoy/ --save_dir ./patch_data/lowtoy_patches
python preprocess.py --img_dir ./data/lowtoy/ --save_dir ./patch_data_large/lowtoy_patches
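# Train the network: our Gegenbauer-encoder configuration and a DiscreteFourier (positional-encoding) baseline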
python train_net.py --root_dir . --exp_name lowtoy_ggb_a_0.5_in_1 \
--trainset_dir ./patch_data_large/lowtoy_patches \
--encoding Gegenbauer --alpha 0.5 --in_feature_ratio 1.0 --num_epochs 3000
python train_net.py --root_dir . --exp_name lowtoy_pe_mxy_10_muv_10 \
--trainset_dir ./patch_data_large/lowtoy_patches \
--encoding DiscreteFourier --multires_xy 10 --multires_uv 10
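For the DiscreteFourier runs, --multires_xy and --multires_uv presumably set the number of frequency bands used for the spatial (x, y) and angular (u, v) coordinates. A minimal sketch of such a positional encoding is given below; the function names and the coordinate ordering (x, y, u, v) are assumptions, not the exact implementation in this repository.

```python
import torch

def positional_encode(coords, num_freqs):
    """NeRF-style positional encoding: [sin(2^k c), cos(2^k c)] per frequency band."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=coords.dtype)
    angles = coords[..., None] * freqs                  # (..., D, num_freqs)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1).flatten(-2)

def encode_light_field(xyuv, multires_xy=10, multires_uv=10):
    """Encode (x, y) and (u, v) with separate numbers of frequency bands."""
    xy_feats = positional_encode(xyuv[..., :2], multires_xy)
    uv_feats = positional_encode(xyuv[..., 2:], multires_uv)
    return torch.cat([xy_feats, uv_feats], dim=-1)
```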
To test the results for the tasks above (novel views, camera translation, refocusing), modify the static variables in "interpolator.py" and then run:
python interpolator.py
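For intuition, refocusing and aperture changes over the trained light field amount to a shift-and-add: each (u, v) view is shifted in proportion to its angular offset times a chosen disparity (which selects the focal plane), and the aperture size controls how many views are averaged. The NumPy sketch below illustrates the idea on a grid of sub-aperture images; the function and variable names are hypothetical and this is not the actual interpolator.py code.

```python
import numpy as np

def refocus(views, disparity, aperture):
    """Shift-and-add refocusing over a grid of sub-aperture views.

    views:     (U, V, H, W, 3) array of sub-aperture images
    disparity: pixel shift per unit angular offset (selects the focal plane)
    aperture:  half-width, in views, of the synthetic aperture
    """
    U, V = views.shape[:2]
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros(views.shape[2:], dtype=np.float64)
    count = 0
    for u in range(U):
        for v in range(V):
            if abs(u - cu) > aperture or abs(v - cv) > aperture:
                continue                      # outside the synthetic aperture
            dy = int(round(disparity * (u - cu)))
            dx = int(round(disparity * (v - cv)))
            # Align the chosen depth plane across views (np.roll wraps at the
            # borders, which a real implementation would handle properly).
            acc += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
            count += 1
    return acc / max(count, 1)
```

Increasing `disparity` moves the focal plane, while increasing `aperture` averages more views and produces a shallower depth of field.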
# The remaining commands are kept for reference (other training configurations)
python train_net.py --root_dir . --exp_name lowtoy_ggb_a_0.5_in_1 \
--trainset_dir ./patch_data_large/lowtoy_patches \
--encoding Gegenbauer --alpha 0.5 --in_feature_ratio 1.0 --num_epochs 3000
python train_net.py --root_dir . --exp_name lowtoy_ggb_a_0.5_in_0.5 \
--trainset_dir ./patch_data/lowtoy_patches \
--encoding Gegenbauer --alpha 0.5 --in_feature_ratio 0.5
python train_net.py --root_dir . --exp_name lowtoy_pe_mxy_10_muv_6 \
--trainset_dir ./patch_data/lowtoy_patches \
--encoding DiscreteFourier --multires_xy 10 --multires_uv 6
python train_net.py --root_dir . --exp_name lowtoy_pe_mxy_15_muv_6 \
--trainset_dir ./patch_data/lowtoy_patches \
--encoding DiscreteFourier --multires_xy 15 --multires_uv 6
python train_net.py --root_dir . --exp_name lowtoy_pe_mxy_10_muv_4 \
--trainset_dir ./patch_data/lowtoy_patches \
--encoding DiscreteFourier --multires_xy 10 --multires_uv 4
python train_net.py --root_dir . --exp_name lowtoy_pe_mxy_10_muv_10 \
--trainset_dir ./patch_data_large/lowtoy_patches \
--encoding DiscreteFourier --multires_xy 10 --multires_uv 10
python train_net.py --root_dir . --exp_name lowtoy_pe_mxy_10_muv_10_layer_6_hidden_256 \
--trainset_dir ./patch_data_large/lowtoy_patches \
--encoding DiscreteFourier --multires_xy 10 --multires_uv 10
python train_net.py --root_dir . --exp_name lowtoy_pe_mxy_6_muv_4 \
--trainset_dir ./patch_data/lowtoy_patches \
--encoding DiscreteFourier --multires_xy 6 --multires_uv 4
python train_net.py --root_dir . --exp_name lowtoy_he_test \
--trainset_dir ./patch_data_large/lowtoy_patches \
--encoding Hashencoding
python train_net.py --root_dir . --exp_name lowtoy_he_test2 \
--trainset_dir ./patch_data/lowtoy_patches \
--encoding Hashencoding