`. ./create_env.sh`
Please download the datasets from these links:
- NeRF synthetic: Download `nerf_synthetic.zip` from https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1
- LLFF: Download `nerf_llff_data.zip` from https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1
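The training scripts look for data relative to the repository (e.g. `./data/watermarks` is referenced below). A minimal sketch of unpacking the archives, assuming the datasets also live under `./data` (adjust the target paths if your setup differs):

```bash
# Hypothetical layout: place both datasets under ./data
# (adjust if your scripts expect a different location).
mkdir -p data
unzip nerf_synthetic.zip -d data/   # -> data/nerf_synthetic/lego, ...
unzip nerf_llff_data.zip -d data/   # -> data/nerf_llff_data/flower, ...
```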
`cd opt && . ./stega_{llff/syn}.sh [scene_name] [embed_img]`
- In the first stage, a photorealistic radiance field is reconstructed if one does not already exist on disk. The second stage then performs steganographic training, which produces the steganographic NeRF and the decoder.
- Select `{llff/syn}` according to your data type: for example, use `llff` for the `flower` scene and `syn` for the `lego` scene. `[embed_img]` is the image to embed, placed inside `./data/watermarks`. A concrete invocation is sketched below.
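For instance, to embed an image into the LLFF `flower` scene (the watermark filename below is hypothetical; use any image you have placed under `./data/watermarks`, and check whether the script expects a filename or a full path):

```bash
# Hypothetical example: train StegaNeRF on the LLFF "flower" scene,
# embedding ./data/watermarks/watermark.png (replace with your own image).
cd opt && . ./stega_llff.sh flower watermark.png
```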
View the training results with TensorBoard.
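For example, assuming logs and checkpoints are written under `opt/ckpt` (the exact log directory depends on the training script):

```bash
# Hypothetical log directory; point --logdir at wherever the scripts write logs.
tensorboard --logdir opt/ckpt --port 6006
```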
You can also obtain the results and render videos from the saved checkpoints.
Use `opt/render_imgs.py` for the scenes on LLFF: `python render_imgs.py <CHECKPOINT.npz> <Decoder.pt> <data_dir>`
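For example, with hypothetical checkpoint, decoder, and data paths for the `flower` scene:

```bash
# Paths are illustrative; substitute your actual checkpoint, decoder, and data directory.
python render_imgs.py ckpt/flower/ckpt.npz ckpt/flower/decoder.pt ../data/nerf_llff_data/flower
```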
Use `opt/render_imgs_circle.py` to render a spiral trajectory for the scenes on NeRF synthetic: `python render_imgs_circle.py <CHECKPOINT.npz> <Decoder.pt> <data_dir>`
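Similarly, for the synthetic `lego` scene (paths again illustrative):

```bash
# Renders a spiral trajectory; replace paths with your own checkpoint and data directory.
python render_imgs_circle.py ckpt/lego/ckpt.npz ckpt/lego/decoder.pt ../data/nerf_synthetic/lego
```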
- Dataset: Download `brandenburg_gate` (4.0G) from https://www.cs.ubc.ca/~kmyi/imw2020/data.html. More details on using this dataset can be found here.
- Code to be released; stay tuned.
We would like to thank the ARF and Plenoxels authors for open-sourcing their implementations.
If you find this repo helpful, please consider citing:
@inproceedings{li2022steganerf,
  title={StegaNeRF: Embedding Invisible Information within Neural Radiance Fields},
  author={Chenxin Li and Brandon Y. Feng and Zhiwen Fan and Panwang Pan and Zhangyang Wang},
  booktitle={arxiv},
  year={2022}
}