Issues
Is it possible to support 512x512?
#2 opened by huangxin168 - 4
How to use this in Python?
#17 opened by debasishaimonk - 0
How to implement FP16?
#21 opened by fanghaiquan1 - 3
Inference is too slow
#19 opened by Liuqh12 - 0
How to run inference in Python with an engine model built via C++?
#20 opened by fanghaiquan1 - 1
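Several issues above (#17, #20) ask the same thing: how to run a serialized `.engine` file from Python after building it with the C++ API or `trtexec`. Below is a minimal sketch, assuming the TensorRT 8.x Python API plus `pycuda`, a single-input/single-output engine, and GFPGAN's usual `[-1, 1]` NCHW input convention; `preprocess` and `infer` are hypothetical helper names, not functions from this repo.

```python
# Sketch (assumption: TensorRT 8.x + pycuda): load a serialized engine
# and run one inference from Python. Imports are guarded so the file
# still runs on machines without TensorRT or a GPU.
import numpy as np

try:
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
    import pycuda.driver as cuda
    HAVE_TRT = True
except ImportError:
    HAVE_TRT = False


def preprocess(img_uint8):
    """HWC uint8 image -> NCHW float32 in [-1, 1] (GFPGAN's expected range)."""
    x = img_uint8.astype(np.float32) / 255.0
    x = (x - 0.5) / 0.5                    # [0, 1] -> [-1, 1]
    x = np.transpose(x, (2, 0, 1))[None]   # HWC -> NCHW with batch dim
    return np.ascontiguousarray(x)


def infer(engine_path, input_array):
    """Hypothetical helper: one forward pass on a serialized engine file."""
    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    bindings, host_out, out_dev = [], None, None
    for i in range(engine.num_bindings):
        shape = context.get_binding_shape(i)
        dtype = trt.nptype(engine.get_binding_dtype(i))
        dev = cuda.mem_alloc(int(np.prod(shape)) * np.dtype(dtype).itemsize)
        bindings.append(int(dev))
        if engine.binding_is_input(i):
            cuda.memcpy_htod(dev, np.ascontiguousarray(input_array.astype(dtype)))
        else:
            # Assumes a single output binding (true for GFPGAN).
            host_out = np.empty(shape, dtype=dtype)
            out_dev = dev
    context.execute_v2(bindings)
    cuda.memcpy_dtoh(host_out, out_dev)
    return host_out
```

The same engine file works regardless of whether it was serialized from C++ or from `trtexec`, as long as the TensorRT version and GPU architecture match the machine that built it.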
ONNX to TensorRT
#18 opened by luhairong11 - 1
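Issues #18 and #21 both concern building a TensorRT engine from the ONNX model, and FP16 in particular. A minimal sketch of the build step, assuming the TensorRT 8.x Python builder API (check your installed version); `build_engine` is a hypothetical helper name, and the import is guarded so the snippet runs without TensorRT installed.

```python
# Sketch (assumption: TensorRT 8.x Python API): parse an ONNX file,
# enable FP16 when the GPU supports it, and serialize the engine.
try:
    import tensorrt as trt
    HAVE_TRT = True
except ImportError:
    HAVE_TRT = False


def build_engine(onnx_path, engine_path, fp16=True):
    """Hypothetical helper: ONNX -> serialized TensorRT engine file."""
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB of build scratch space
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # the FP16 switch asked about in #21

    engine_bytes = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine_bytes)
```

This is roughly what `trtexec --onnx=GFPGAN1.4.onnx --saveEngine=GFPGAN1.4.engine --fp16` does on the command line; the resulting engine is specific to the TensorRT version and GPU it was built on.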
Are there any plans to support CodeFormer?
#15 opened by a869072989 - 2
Does the author have a 1024*1024 ONNX file?
#14 opened by hanziying - 2
Directly converting GFPGAN1.4.onnx into GFPGAN1.4.engine gives an API usage error when running images
#13 opened by nelsontseng0704 - 2
Cuda failure: 700
#12 opened by lschaupp - 2
Error while loading shared libraries: libnvinfer_plugin.so.8 when converting the ONNX model
#10 opened by nelsontseng0704 - 2
Can we directly convert GFPGAN?
#9 opened by einsqing - 3
Will you support batch operation?
#8 opened by Life93 - 4
Windows version?
#6 opened by rohaantahir - 2
Alternative Model download links
#7 opened by prakyath-07 - 6
ONNX model
#1 opened by carter54 - 1
When will it support more?
#3 opened by dcming - 5
When will you support GFPGANv1.3?
#4 opened by SeanLiu081 - 2