lukasHoel/text2room

Questions about bad generating results

Coinc1dens opened this issue · 2 comments

Hello, this is a very surprising work!
But when I used the scripts to generate a library scene with the default configs, I got bad results for both after_generation.ply and fuse_final.ply. Specifically, some objects in the first mesh file are predicted with the wrong depth and placed in a strange way. After stage 2, the inpainted pixels are projected to wrong positions, and many of them float in the sky, leaving the whole scene a mess. Is this a normal situation where I need to adjust my generation process and re-run multiple times to get a better result, or am I not using it properly?

Hi, yes, it can sometimes happen that the depth is predicted wrongly, especially if the object is somewhat out-of-distribution for the depth model. There are two possible solutions:

  • Accept that some objects end up at odd positions and only fine-tune the stage-2 parameters accordingly. Look at opt.py and tune min_camera_distance_to_mesh, min_depth_quantil_to_mesh, max_inpaint_ratio, and completion_dilate_iters for your specific scene, then re-run stage 2. I would probably start by lowering completion_dilate_iters=3 and increasing min_depth_quantil_to_mesh=3.0.

  • Re-do stage 1 and modify the prompts used to generate the scene, so that all specified object types are already placed roughly correctly after the first stage.
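As a rough illustration of the first option, the sketch below shows how overriding those stage-2 parameters via argparse might look. The flag names mirror the parameters mentioned above (defined in opt.py), but the default values here are purely illustrative, not the repo's actual defaults — check opt.py and the README for the real entry point and defaults.

```python
# Hypothetical sketch of tuning stage-2 parameters before re-running
# completion. Defaults below are illustrative placeholders only.
import argparse

def build_parser():
    p = argparse.ArgumentParser()
    p.add_argument("--completion_dilate_iters", type=int, default=4)
    p.add_argument("--min_depth_quantil_to_mesh", type=float, default=0.1)
    p.add_argument("--min_camera_distance_to_mesh", type=float, default=0.1)
    p.add_argument("--max_inpaint_ratio", type=float, default=1.0)
    return p

# Suggested starting point from the answer above: lower the dilation
# iterations and raise the depth-quantile threshold.
args = build_parser().parse_args(
    ["--completion_dilate_iters", "3", "--min_depth_quantil_to_mesh", "3.0"]
)
print(args.completion_dilate_iters, args.min_depth_quantil_to_mesh)
```

In practice you would pass the equivalent flags on the command line when re-running the stage-2 script, rather than hard-coding the list as done here for demonstration.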

OK, I will try it later. Thanks for your answer!