lucidrains/DALLE-pytorch

faster inference

rom1504 opened this issue · 5 comments

Using caching, @borzunov implemented 10x faster generation at https://github.com/learning-at-home/dalle-pytorch/pull/3/files.

I think this could be useful.
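For context, here is a minimal sketch of the key/value caching idea behind that speedup, independent of the linked PR's actual implementation: during autoregressive decoding, keys and values for already-generated positions are stored and reused, so each new token attends against the cache instead of re-running attention over the whole prefix. The class and argument names (`CachedSelfAttention`, `cache`) are illustrative, not DALLE-pytorch's API.

```python
import torch
import torch.nn.functional as F
from torch import nn

class CachedSelfAttention(nn.Module):
    # Illustrative single-layer causal self-attention with a KV cache.
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, cache=None):
        # x: (batch, new_tokens, dim); during incremental decoding,
        # new_tokens == 1 and past keys/values come from `cache`.
        b, n, _, h = *x.shape, self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(b, n, h, -1).transpose(1, 2) for t in (q, k, v))

        if cache is not None:
            # Reuse keys/values already computed for previous positions.
            past_k, past_v = cache
            k = torch.cat((past_k, k), dim=2)
            v = torch.cat((past_v, v), dim=2)
        new_cache = (k, v)

        attn = (q @ k.transpose(-2, -1)) * self.scale
        # A causal mask is only needed when processing more than one
        # new token at once (e.g. the initial prompt pass).
        if n > 1:
            i, j = attn.shape[-2:]
            mask = torch.ones(i, j, dtype=torch.bool, device=x.device).triu(j - i + 1)
            attn = attn.masked_fill(mask, float('-inf'))
        out = attn.softmax(dim=-1) @ v
        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out), new_cache
```

In a generation loop, the prompt is run once to populate the cache, and each subsequent step feeds only the newest token plus the cache back in, which is where the order-of-magnitude speedup comes from.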

Oh nice! Yeah, I can get this done for both dalle and nuwa in one go when I get a free stretch of time.

Hey! I'll make a PR to this repo with a finished version of this code today or tomorrow :)

🙏 🙏

I think the code is ready; see #409. That branch is based on the one from #408 (there's no direct dependency, though, so cached inference can be used without merging the weight-sharing code).
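For anyone wanting to try it once merged, usage would look roughly like the sketch below. This is a guess at the interface: `use_cache=True` is an assumed flag name for what #409 adds to `generate_images`, so check the merged PR for the exact argument; the model hyperparameters here are arbitrary small values.

```python
import torch
from dalle_pytorch import DiscreteVAE, DALLE

# Tiny untrained models, just to exercise the generation path.
vae = DiscreteVAE(image_size=64, num_layers=2, num_tokens=1024, codebook_dim=256)
dalle = DALLE(dim=256, vae=vae, num_text_tokens=10000, text_seq_len=128, depth=2, heads=4)

text = torch.randint(0, 10000, (1, 128))
# `use_cache=True` is the assumed name of the flag added in #409.
images = dalle.generate_images(text, use_cache=True)
```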

@borzunov merged! Thank you for this amazing contribution!