To explore this issue, I used Kolmogorov-Arnold Networks (KANs) with various basis functions to fit the NeRF equation, building on nerfstudio.
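To make the idea concrete, here is a minimal sketch (assuming PyTorch) of a KAN layer with a Fourier basis, in the spirit of FourierKAN: instead of a fixed activation, every input-output edge carries a learnable 1-D function built from sine/cosine terms. The class name, shapes, and the `num_frequencies` parameter are illustrative, not the exact API of the implementations listed below.

```python
# Minimal sketch of a Fourier-basis KAN layer (illustrative, not the repo's exact code).
import torch
import torch.nn as nn


class FourierKANLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_frequencies: int = 4):
        super().__init__()
        self.num_frequencies = num_frequencies
        # One (cos, sin) coefficient pair per frequency, per input-output edge.
        self.coeffs = nn.Parameter(
            torch.randn(2, out_dim, in_dim, num_frequencies)
            / (in_dim * num_frequencies) ** 0.5
        )
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> (batch, out_dim)
        k = torch.arange(1, self.num_frequencies + 1, device=x.device, dtype=x.dtype)
        angles = x.unsqueeze(-1) * k                      # (batch, in_dim, K)
        cos, sin = torch.cos(angles), torch.sin(angles)   # per-edge basis functions
        out = torch.einsum("bik,oik->bo", cos, self.coeffs[0])
        out = out + torch.einsum("bik,oik->bo", sin, self.coeffs[1])
        return out + self.bias
```

Swapping the basis (B-spline, Gaussian RBF, Chebyshev, ...) only changes how these per-edge functions are parameterized; the surrounding NeRF field stays the same.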
X-KAN Models (Here are various KANs!)
Status | Basis Function | Acknowledgement |
---|---|---|
√ | B-Spline | Efficient-Kan |
√ | Fourier | FourierKAN |
√ | Gaussian RBF | FastKAN |
√ | Radial Basis Function | RBFKAN |
√ | FCN | FCN-KAN |
√ | FCN-Interpolation | FCN-KAN |
√ | Chebyshev polynomials (1st kind) | ChebyKAN |
√ | Chebyshev polynomials (2nd kind) | OrthogPolyKANs |
√ | Jacobi polynomials | JacobiKAN |
√ | Hermite polynomials | OrthogPolyKANs |
√ | Gegenbauer polynomials | OrthogPolyKANs |
√ | Legendre polynomials | OrthogPolyKANs |
- | Laguerre polynomials | OrthogPolyKANs |
√ | Bessel polynomials | OrthogPolyKANs |
√ | Fibonacci polynomials | OrthogPolyKANs |
More and more! | - | - |
Model settings (see `train_blender.sh`)
hidden_dim | hidden_dim_color | num_layers | num_layers_color | geo_feat_dim | appearance_embed_dim |
---|---|---|---|---|---|
8 | 8 | 1 | 1 | 7 | 8 |
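For reference, a hedged sketch of how these settings could be grouped into a field configuration. The attribute names mirror the table headers and are assumptions about the script's variables, not the exact KANeRF / nerfstudio API.

```python
# Illustrative grouping of the settings above; names follow the table headers.
from dataclasses import dataclass


@dataclass
class KANFieldSettings:
    hidden_dim: int = 8             # width of the density branch
    hidden_dim_color: int = 8       # width of the color branch
    num_layers: int = 1             # depth of the density branch
    num_layers_color: int = 1       # depth of the color branch
    geo_feat_dim: int = 7           # geometry feature passed to the color branch
    appearance_embed_dim: int = 8   # per-image appearance embedding size


settings = KANFieldSettings()
```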
nerf_synthetic: lego / 30k
Note: the `Nerfacto-MLP` model uses only 3 MLP layers instead of 8. There might be a bug, since previous tests gave better results with 1 MLP layer; I will review the code to investigate.
Model | Layer Params | Train Rays/Sec | Train Time | FPS | PSNR | SSIM | LPIPS |
---|---|---|---|---|---|---|---|
Nerfacto-MLP | 1118 | ~190K | ~13m | 0.99 | 28.60 | 0.952 | 0.0346 |
BSplines-KAN | 8092 | ~37K | ~54m | 0.19 | 32.33 | 0.965 | 0.0174 |
GRBF-KAN | 3748 | ~115K | ~19m | 0.50 | 32.39 | 0.967 | 0.0172 |
RBF-KAN | 3512 | ~140K | ~15m | 0.71 | 32.57 | 0.966 | 0.0177 |
Fourier-KAN | 5222 | ~80K | ~25m | 0.42 | 31.72 | 0.956 | 0.0241 |
FCN-KAN (Iters: 4k) | 5184 | ~4K | ~90m | 0.02 | 29.67 | 0.938 | 0.0401 |
FCN-Interpolation-KAN | 6912 | ~52K | ~40m | 0.21 | 32.67 | 0.965 | 0.0187 |
1st Chebyshev-KAN | 4396 | ~53K | ~40m | 0.34 | 28.56 | 0.924 | 0.0523 |
Jacobi-KAN | 3532 | ~72K | ~30m | 0.37 | 27.88 | 0.915 | 0.0553 |
Bessel-KAN | 3532 | ~76K | ~28m | 0.33 | 25.79 | 0.878 | 0.1156 |
2nd Chebyshev-KAN | 4396 | ~55K | ~39m | 0.33 | 28.53 | 0.924 | 0.0500 |
Fibonacci-KAN | 4396 | ~65K | ~32m | 0.34 | 28.30 | 0.922 | 0.0521 |
Gegenbauer-KAN | 4396 | ~53K | ~40m | 0.32 | 28.39 | 0.922 | 0.0514 |
Hermite-KAN | 4396 | ~55K | ~38m | 0.37 | 27.58 | 0.913 | 0.0591 |
Legendre-KAN | 4396 | ~55K | ~38m | 0.33 | 26.64 | 0.893 | 0.0986 |
360_v2: garden / 30k (TODO)
```bash
# create python env
conda create --name nerfstudio -y python=3.8
conda activate nerfstudio
python -m pip install --upgrade pip

# install torch
pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
conda install -c "nvidia/label/cuda-11.7.1" cuda-toolkit

# install tinycudann
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

# install nerfstudio
pip install nerfstudio==0.3.4
# pip install torchmetrics==0.11.4

# install CLI tab completion
ns-install-cli

# !!! If you use `ns-process-data`, please install this version of opencv
pip install opencv-python==4.3.0.36
```
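After installing, a quick sanity check (a sketch; run inside the activated `nerfstudio` environment) confirms that the CUDA build of PyTorch works and that the tiny-cuda-nn bindings and nerfstudio import cleanly:

```python
# Sanity check after installation (run inside the `nerfstudio` env).
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

import tinycudann   # noqa: F401  -- fails here if the bindings did not compile
import nerfstudio   # noqa: F401

print("tiny-cuda-nn and nerfstudio imported successfully")
```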
```bash
############# kan_basis_type #############
# mlp, bspline, grbf, rbf, fourier,
# fcn, fcn_inter, chebyshev, jacobi
# bessel, chebyshev2, finonacci, hermite
# legendre, gegenbauer
bash train_blender.sh
```
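As a rough picture of what `kan_basis_type` controls, here is a hedged sketch of a registry that maps each basis-type string to a layer constructor. The mapping and class names are illustrative (the real dispatch lives in this repo's field code), with `nn.Linear` standing in for the `mlp` baseline.

```python
# Illustrative dispatch from `kan_basis_type` strings to layer constructors.
# Only the MLP baseline is wired up; commented entries show where the acknowledged
# KAN implementations (Efficient-Kan, FastKAN, ChebyKAN, ...) would plug in.
from typing import Callable, Dict

import torch.nn as nn

KAN_LAYER_REGISTRY: Dict[str, Callable[[int, int], nn.Module]] = {
    "mlp": nn.Linear,
    # "bspline": BSplineKANLayer,    # Efficient-Kan
    # "grbf": GRBFKANLayer,          # FastKAN
    # "fourier": FourierKANLayer,    # FourierKAN (see the sketch above)
    # ... the remaining basis types follow the same pattern
}


def make_layer(kan_basis_type: str, in_dim: int, out_dim: int) -> nn.Module:
    """Build one layer of the field head from the configured basis type."""
    if kan_basis_type not in KAN_LAYER_REGISTRY:
        raise ValueError(f"Unknown kan_basis_type: {kan_basis_type!r}")
    return KAN_LAYER_REGISTRY[kan_basis_type](in_dim, out_dim)
```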
- KANeRF. A big thank you for this awesome work!
```bibtex
@Manual{kanerf,
  title  = {Hands-On NeRF with KAN},
  author = {Delin Qu and Qizhi Chen},
  year   = {2024},
  url    = {https://github.com/Tavish9/KANeRF},
}
```
- nerfstudio
```bibtex
@inproceedings{nerfstudio,
  title     = {Nerfstudio: A Modular Framework for Neural Radiance Field Development},
  author    = {Tancik, Matthew and Weber, Ethan and Ng, Evonne and Li, Ruilong and Yi, Brent and Kerr, Justin and Wang, Terrance and Kristoffersen, Alexander and Austin, Jake and Salahi, Kamyar and Ahuja, Abhik and McAllister, David and Kanazawa, Angjoo},
  year      = 2023,
  booktitle = {ACM SIGGRAPH 2023 Conference Proceedings},
  series    = {SIGGRAPH '23}
}
```