allenai/objaverse-rendering

rendering questions

Tycho-Xue opened this issue · 5 comments

Hi @mattdeitke , thanks for the rendering script! I used the "CYCLES" engine, but I found that the rendered images are quite noisy, and I'm not sure whether I need to adjust the parameters. For example, it's not uncommon for the rendered images to be overexposed. The left image is the render, the right is the model itself (looked up on your website, f13fa238c81c4648bd86c6378f8cf835).
(screenshots: rendered image on the left, original model on the right)

I also noticed that the rendered images are sometimes cropped or display a zig-zag pattern. I was wondering if you could provide some insight into how the rendering parameters were determined, and whether you have any recommendations for tweaking them for more general, better-quality results. Thanks in advance!
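(Side note for anyone tweaking the Cycles settings: below is a minimal sketch of the knobs that typically reduce this kind of noise, assuming the scene is driven through Blender's Python API as in blender_script.py. The specific values are illustrative, not the repository's defaults.)

```python
import bpy

# Sketch only: typical Cycles noise-reduction settings, values are assumptions.
scene = bpy.context.scene
scene.render.engine = "CYCLES"

# More samples per pixel reduces Monte Carlo noise at the cost of render time.
scene.cycles.samples = 128

# Enable denoising on the final render to clean up residual noise.
scene.cycles.use_denoising = True

# Clamp very bright indirect bounces ("fireflies") that can look like overexposure.
scene.cycles.sample_clamp_indirect = 10.0
```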

Hi @Tycho-Xue,

Can you provide an example of the zig-zag pattern? I might know where this is coming from.

I'll have somebody look at the overexposure as well :)

Hi @Tycho-Xue, I've now fixed this issue. Please pull the updates from the latest commit. You should get the following renders now for this object:

(12 example renders of this object, frames 000–011)

Hi @mattdeitke , I'm currently trying to run the training code from OpenLRM.
They use the Objaverse rendering dataset for training.
So I'm trying to prepare the dataset properly.
However, I'm quite new to AI and not sure how to prepare the data mentioned here.
I've successfully run the distributed rendering command mentioned in the README.md and created images:

python3 scripts/distributed.py \
  --num_gpus <NUM_GPUs> \
  --workers_per_gpu <WORKERS_PER_GPU> \
  --input_models_path <INPUT_MODELS_PATH>

But there are no pose files and no intrinsics.npy at all.
How can I prepare them?
It would be great if you could help me out.
Thank you for your great work!


Maybe you can generate the poses manually.
In the "save_images" method, add the following code:

import os
import numpy as np  # (both are likely already imported in blender_script.py)

# save the camera-to-world (c2w) matrix for this view
c2w_matrix = np.array(cam.matrix_world)
pose_dir = os.path.join(args.output_dir, object_uid, 'pose')
os.makedirs(pose_dir, exist_ok=True)
# one 4x4 pose per rendered image, e.g. pose/000.txt
with open(os.path.join(pose_dir, f"{i:03d}.txt"), 'w') as f:
    np.savetxt(f, c2w_matrix, fmt='%f')
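For intrinsics.npy, one possible approach (a sketch only; check OpenLRM's data loader for the exact format it expects) is to build a pinhole intrinsics matrix from the Blender camera and the render resolution:

```python
import os
import numpy as np
import bpy

# Sketch: derive a 3x3 pinhole intrinsics matrix K from the Blender camera.
# Assumes a horizontal sensor fit and square pixels; verify the format
# (pixel vs. normalized units, 3x3 vs. flattened) against OpenLRM's loader.
scene = bpy.context.scene
cam_data = cam.data  # `cam` is the camera object used in save_images

width = scene.render.resolution_x * scene.render.resolution_percentage / 100.0
height = scene.render.resolution_y * scene.render.resolution_percentage / 100.0

# focal length in pixels, from focal length (mm) and sensor width (mm)
fx = cam_data.lens / cam_data.sensor_width * width
fy = fx  # square pixels assumed
cx, cy = width / 2.0, height / 2.0

K = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])
np.save(os.path.join(args.output_dir, object_uid, "intrinsics.npy"), K)
```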