My evaluation using Mac M2 Pro
Hi, this is not an issue, just an appreciation. After some modifications:
- @requirements onnxruntime==1.18.1 instead of onnxruntime-gpu==1.18.0
- @fast_live_portait_pipeline self.providers = ['CoreMLExecutionProvider', 'CPUExecutionProvider']
- @config.py dir_path = os.path.join(current_dir, './live_portrait_onnx_weights', main_key)
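The provider change above can be sketched as a small helper (a minimal sketch, not the repo's actual code; the provider names are the real ONNX Runtime identifiers, but `select_providers` is a hypothetical helper that keeps a CPU fallback if CoreML is unavailable):

```python
def select_providers(available):
    """Prefer CoreML on Apple Silicon, always keeping CPU as a fallback.

    `available` would come from onnxruntime.get_available_providers()
    in a real pipeline; here it is just a list of strings.
    """
    preferred = ['CoreMLExecutionProvider', 'CPUExecutionProvider']
    chosen = [p for p in preferred if p in available]
    # If neither preferred provider is reported, fall back to CPU anyway.
    return chosen or ['CPUExecutionProvider']
```

In the pipeline you would then pass the result as `providers=` when creating the `InferenceSession`.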
I got [00:31<00:00, 1.96s/it] vs [00:30<00:00, 1.91s/it] on Kaggle with a P100, not bad.
inference : python run_live_portrait.py -v 'experiment_examples/examples/driving/d1.mp4' -i 'experiment_examples/examples/source/s1.jpg'
Memory usage is high though, since I only have 16 GB of RAM, but the speed is great and the Kaggle ONNX GPU setup seems to work.
Keep up the good work!
edit: can we optimize memory usage further? There's a pull request in the main repo about lazy loading (I haven't read it thoroughly yet).
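For reference, the lazy-loading idea is roughly this (a sketch of the general pattern, not that PR's implementation: each model only becomes a session on first use, so models you never run never occupy RAM; `loader` stands in for something like `lambda p: ort.InferenceSession(p)`):

```python
class LazyModel:
    """Defer loading a model until it is first used."""

    def __init__(self, path, loader):
        self.path = path
        self._loader = loader   # e.g. lambda p: ort.InferenceSession(p)
        self._session = None

    @property
    def session(self):
        # Load on first access only; reuse the same session afterwards.
        if self._session is None:
            self._session = self._loader(self.path)
        return self._session
```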
I see the message 'Context leak detected, msgtracer returned -1' while animating, but it still finishes the job. What does this message mean?
@x4080 That means some ONNX nodes are skipped by CoreML and fall back to CPU, which is why it's slower than the official version.
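You can check this kind of fallback yourself: ONNX Runtime exposes `session.get_providers()` for the providers actually attached to a session, so comparing it against what you requested shows what was dropped (the comparison helper below is hypothetical, only the `get_providers()` call is real API):

```python
def dropped_providers(requested, actual):
    """Return providers that were requested but not attached to the session.

    In practice: dropped_providers(my_provider_list, session.get_providers())
    """
    return [p for p in requested if p not in actual]
```

An empty result means every requested provider was attached; note that individual nodes can still fall back to CPU even when CoreML is attached.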
@x4080 Can you create a pull request? I'm spending my time on TensorRT.
@aihacker111 Sorry, I'm not experienced with pull requests; you can add my code if you like.
@x4080 Never mind, I'll update it later.
@x4080 Please accept the invitation; you'll be able to manage the repo and push directly, no pull request needed.
@aihacker111, I'll try to add the changes