hwjiang1510/LEAP

Objaverse trained weights.

firasvree opened this issue · 3 comments

Hey,
First of all, thanks a lot, the idea is fascinating!
Are you considering sharing the trained weights of the Objaverse/XL dataset?

Hi,

Thanks for your interest in our work.

We are working on scaling up the model size for training on large-scale data. Stay tuned!

Hi, any update on this? Is it possible to release the model weights trained on Objaverse?


Hi,

Thanks for your interest in our work.

Unfortunately, we will not release the Objaverse weights. After talking with the Adobe LRM group, we realized we would need many more GPUs to train the model (compared to the concurrent work PF-LRM). Our previous attempt at scaling up was not very successful given the limited number of GPUs we have. Besides, the Objaverse numbers reported in the paper are from a model trained on Objaverse with Zero123 renderings, which are highly noisy and make the quality unsatisfactory. We recommend retraining the model on filtered data.
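To illustrate the "filtered data" suggestion, here is a minimal sketch of one possible quality filter: dropping assets whose renderings are mostly empty or near-constant, which is one way noisy Zero123-style renders can manifest. All function names, thresholds, and the data layout below are illustrative assumptions, not part of the LEAP codebase.

```python
# Hypothetical quality filter for per-asset rendering sets.
# Thresholds are illustrative assumptions, not values from the paper.
import numpy as np

def render_is_usable(img: np.ndarray,
                     min_fg_fraction: float = 0.05,
                     min_std: float = 5.0) -> bool:
    """Heuristic check on one H x W x 3 uint8 rendering."""
    gray = img.mean(axis=-1)
    fg_fraction = (gray > 10).mean()  # reject near-empty frames
    return fg_fraction >= min_fg_fraction and gray.std() >= min_std

def filter_assets(renders: dict[str, list[np.ndarray]],
                  min_usable: int = 8) -> list[str]:
    """Keep only asset IDs with at least `min_usable` usable views."""
    return [uid for uid, views in renders.items()
            if sum(render_is_usable(v) for v in views) >= min_usable]
```

In practice one would likely combine such image-level checks with asset-level metadata filtering (e.g. the curated Objaverse subsets used by later LRM-style works) before retraining.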

Best,
Hanwen