Invalid Syntax when trying to run the first time
GonrasK opened this issue · 10 comments
I followed all the instructions and I'm still getting this error:
```
aphantasia git:(master) python clip_fft.py -t "the text" --size 1280-720
  File "clip_fft.py", line 112
    Ys = [torch.randn(*Y.shape).cuda() for Y in [Yl_in, *Yh_in]]
```
- please always quote the full error log, from start to end. it's impossible to tell what's wrong from such a cut (it doesn't even include the error message).
- i cannot reproduce it here - the mentioned command runs without problems.

besides, line 112 belongs to the function for DWT-based generation, which is used only if explicitly set with `--dwt` (not present in your command). so it seems something is corrupt or edited on your side. try downloading a fresh repo again; if the problem persists, let's check your actions step by step.
no it wasn't - you dropped the most important part about the syntax error.
what is your python version? the repo is supported on 3.7. it might also work on 3.5, but i can't check that. older versions are definitely out of scope.
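for what it's worth, the line the SyntaxError points at uses star-unpacking inside a list literal (`[Yl_in, *Yh_in]`), which only parses on python 3.5+. a minimal sketch (not from the repo) to check this on your side:

```python
import sys

# star-unpacking inside a list literal (PEP 448) requires Python 3.5+;
# on older interpreters the line from clip_fft.py is a SyntaxError
print(sys.version_info[:2] >= (3, 5))

base = [2, 3]
merged = [1, *base]  # same pattern as [Yl_in, *Yh_in]
print(merged)
```

if the first line prints `False`, the interpreter is too old to even parse the script, which matches the error you saw.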
You are definitely right. Python was outdated. But now I have a different problem:
```
➜ aphantasia git:(master) sudo python3 clip_fft.py -t "the text" --size 1280-720
using model ViT-B/32
topic text: the text
Traceback (most recent call last):
  File "clip_fft.py", line 470, in <module>
    main()
  File "clip_fft.py", line 394, in main
    txt_enc = enc_text(a.in_txt)
  File "clip_fft.py", line 372, in enc_text
    emb = model_clip.encode_text(clip.tokenize(txt).cuda())
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/cuda/__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
Isn't it possible to run it on CPU only, since I installed pytorch with CPU support?
no way..
in theory it would work, but so prohibitively slow that it just doesn't make any sense.
use colab, if you're out of local resources
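for context, the crash comes from hard-coded `.cuda()` calls like the one in the traceback: a cpu-only torch build raises `AssertionError` on the first such call. a device-agnostic pattern (just a sketch, not part of clip_fft.py) would look like:

```python
# sketch only: device-agnostic tensor creation; clip_fft.py itself
# hard-codes .cuda(), which fails on cpu-only PyTorch builds
try:
    import torch
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    params = 0.01 * torch.randn(1, 3, 8, 8, device=device)  # no .cuda() call
    result = params.device.type
except ImportError:
    result = "torch not installed"
print(result)
```

that said, even with this change the optimization loop would still be far too slow on CPU to be practical.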
i presume the question is answered now, so i'm closing the issue. feel free to reopen it if needed (or start a new one in case of a different problem).
@eps696 how to make it happen without an nvidia gpu?
```
(base) wenke@wenkedeMac-mini aphantasia % python clip_fft.py -t "greate wall" --size 1280-720
Traceback (most recent call last):
  File "clip_fft.py", line 477, in <module>
    main()
  File "clip_fft.py", line 304, in main
    params, image_f, sz = fft_image(shape, 0.01, a.decay, a.resume)
  File "clip_fft.py", line 225, in fft_image
    params, size = resume_fft(resume, shape, decay_power, sd=sd)
  File "clip_fft.py", line 205, in resume_fft
    params = 0.01 * torch.randn(*params_shape).cuda()
  File "/opt/anaconda3/lib/python3.8/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
@wanghaisheng quick answer: no way.
longer answer: while this may be possible in theory, cpu-only processing would be prohibitively slow, so it doesn't make sense for me to invest any time proving that.
EDIT: just realized that exactly this was discussed earlier in this issue. please let me know what was not clear in the answer above, so that i can try some other words if i have to repeat it a third time.
what about an amd gpu? @eps696
no way either. most current neural libraries (including those used here) use cuda on nvidia.
as far as i know, it may be possible to port it somehow on top of the opencl platform for amd, but that's totally out of scope for this repo. use colab instead.