Kaolin / Omniverse Infrastructure
albertotono opened this issue · 2 comments
Dear Kaolin Team,
I would appreciate your input on the best setup for:
- Training
- Visualization
- Inference / Deployment
- General working environment.
-
Any suggestions on the best GPUs for training with Kaolin/Omniverse?
- RTX or RTX A series for a workstation?
- Or would it be better to use AWS or other servers/clusters?
- Would you recommend an Intel or AMD CPU?
-
If I am training my model with Kaolin locally and I use the 3D checkpoints for Omniverse/visualization, I need at least an RTX 30-series or RTX A3000-series card (Omniverse does not work with the RTX 2080 Ti in the recent release).
- So how can I best connect to an additional AWS server for Omniverse/visualization [Thanks Matias]? That way I could use Dash3D (see the sketch after this list).
- Do you have a recommendation for a Jupyter Notebook USD viewer, or a Weights & Biases dashboard viewer for USD?
- Or do I have to use Omniverse View?
Ideally, during training, I would like paper-ready visualizations similar to this, maybe with a .png background.
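For context, this is roughly the checkpoint-export workflow I have in mind. It is only a minimal sketch, assuming Kaolin's `Timelapse` writer and the `kaolin-dash3d` CLI behave as described in the docs; the paths, iteration number, and tensor shapes are placeholders:

```python
# Minimal sketch: write USD "3D checkpoints" during training with Kaolin's
# Timelapse, then browse them with Dash3D. Paths and tensors are placeholders.
import torch
from kaolin.visualize import Timelapse

timelapse = Timelapse("./training_checkpoints")

# Inside the training loop: log the current predicted mesh every N iterations.
vertices = torch.rand(1000, 3)              # placeholder predicted vertices
faces = torch.randint(0, 1000, (1800, 3))   # placeholder faces
timelapse.add_mesh_batch(
    iteration=100,
    category="predicted_mesh",
    vertices_list=[vertices],
    faces_list=[faces],
)

# Then, to view from a browser (possibly running on the remote/AWS machine):
#   kaolin-dash3d --logdir=./training_checkpoints --port=8080
```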
-
For inference and model deployment, to test a model's application in Omniverse (for example, connecting with Hugging Face for a demo, or a custom application), should I have a separate AWS EC2 instance? What would you recommend? Triton?
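If Triton is the recommended route, I imagine the client side would look something like the minimal sketch below. The model name, input/output names, and shapes are placeholders I made up for illustration, not a working deployment:

```python
# Minimal sketch of querying a Triton-served model with the Python HTTP client.
# Assumes tritonclient is installed and a model named "my_kaolin_model" is served.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder input: one batch of 2048 points; real shape/dtype depend on the model.
points = np.random.rand(1, 2048, 3).astype(np.float32)
inputs = [httpclient.InferInput("INPUT__0", list(points.shape), "FP32")]
inputs[0].set_data_from_numpy(points)

result = client.infer(model_name="my_kaolin_model", inputs=inputs)
print(result.as_numpy("OUTPUT__0").shape)
```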
-
I am trying to avoid the headache of going through Route 53, Nitro Enclaves, isolated environments, Nginx, and EC2 G5 instances (https://register.nvidia.com/flow/nvidia/gtcfall2022/attendeeportal/page/sessioncatalog/session/16603228195160010EFL [Thanks Kellan]) just to host Nucleus and enable a design interface for some quick HCI studies of a user interface. To whom should I reach out for support? Furthermore, is there anything I should be aware of while deploying my model?
Thanks in advance for your attention and input.
FYI: this is the project, in case you want more information.
Hi @albertotono ,
Thank you for your interest in Kaolin.
There are a lot of questions here; for the benefit of other users, I think it would be better to separate them into multiple issues :)
Closing this issue. I guess the questions could be summarized as: whom should I talk to at NVIDIA about GPU and CPU requirements for Omniverse and DL training/inference, etc.?