TensorRT official webui
lucasjinreal opened this issue · 3 comments
lucasjinreal commented
when will release?
78Alpha commented
Probably when it can be run with ease on a majority of computers. So not in the near future.
lucasjinreal commented
I found that the official TensorRT version always produces black images on SD 2.1
78Alpha commented
That is one of the known issues.
- Black images
- No dynamic shapes by default (limited to the shape you set up with)
- Dynamic shapes have limits (the combination of shapes and batch size can't exceed a certain value; see the pull request)
- ONNX conversion issues (the file gets split, the file contains duplications, various versions can't be output on various hardware combinations)
- TensorRT conversion issues (fails without a proper error, e.g. "An error occurred because an error occurred")
- ONNX parsing issues on various versions
- TensorRT models are version specific (Model built on 8.6.X will not work on any version above or below)
- A TensorRT model built on one GPU will not work on other GPUs (a model built on a 4090 won't work on other cards, but may load on other 40-series cards with a speed penalty)
- TensorRT leverages other libraries that aren't specified (C/C++ compilers; it won't mention them but it fails just the same, easier to remedy on Linux)
- TensorRT can't be used with controlnet currently
- TensorRT models can't be trained on (LoRA, Dreambooth, and hypernetwork)
- Polygraphy errors (ranging from wrong architecture to wrong version, overclock issues, etc.)
- Optimizations are VRAM-dependent: you could get a 200% speedup on 24 GB but only 4% on 6 GB.
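To make the dynamic-shape caveat above concrete, here is a hedged sketch of building an engine with an optimization profile using TensorRT's `trtexec` CLI. The file names, the input tensor name `sample`, and the dimension values are illustrative assumptions, not taken from any particular webui extension:

```shell
# Build an engine with a dynamic batch/resolution profile instead of a
# single fixed shape. --minShapes/--optShapes/--maxShapes define the
# optimization profile; inference requests outside this range will fail,
# which is the "dynamic shapes have limits" caveat above.
trtexec --onnx=unet.onnx \
        --saveEngine=unet.plan \
        --fp16 \
        --minShapes=sample:1x4x64x64 \
        --optShapes=sample:2x4x64x64 \
        --maxShapes=sample:8x4x96x96
```

Note that the resulting `.plan` file is still tied to the exact TensorRT version and GPU it was built on, as the list points out.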
My own personal take: even when someone compiles a model with the same GPU as mine, it still doesn't load and gives the same Error code 1. Even in verbose mode it doesn't say anything of substance about what prevented it from loading, but it fails nevertheless.
Most developers opted to use AITemplate, but that is Linux-only. On WSL2 it works easily; it gets stuck every now and then, but it functions.