Error on running `sample_text_to_3d.ipynb`
nourheshamshaheen opened this issue · 8 comments
I receive this error when running with python=3.10, pytorch=1.11.0+cu113, torchvision=0.12.0+cu113:
```
TypeError                                 Traceback (most recent call last)
Cell In[8], line 4
1 batch_size = 4
2 guidance_scale = 15.0
----> 4 latents = sample_latents(
5 batch_size=batch_size,
6 model=model,
7 diffusion=diffusion,
8 guidance_scale=guidance_scale,
9 model_kwargs=dict(texts=[prompt] * batch_size),
10 progress=True,
11 clip_denoised=True,
12 use_fp16=True,
13 use_karras=True,
14 karras_steps=64,
15 sigma_min=1e-3,
16 sigma_max=160,
17 s_churn=0,
18 )
File ~/.conda/envs/cv2/lib/python3.10/site-packages/shap_e/diffusion/sample.py:62, in sample_latents(batch_size, model, diffusion, model_kwargs, guidance_scale, clip_denoised, use_fp16, use_karras, karras_steps, sigma_min, sigma_max, s_churn, device, progress)
60 with torch.autocast(device_type=device.type, enabled=use_fp16):
61 if use_karras:
---> 62 samples = karras_sample(
63 diffusion=diffusion,
64 model=model,
65 shape=sample_shape,
66 steps=karras_steps,
67 clip_denoised=clip_denoised,
68 model_kwargs=model_kwargs,
69 device=device,
70 sigma_min=sigma_min,
71 sigma_max=sigma_max,
72 s_churn=s_churn,
73 guidance_scale=guidance_scale,
74 progress=progress,
75 )
76 else:
77 internal_batch_size = batch_size
File ~/.conda/envs/cv2/lib/python3.10/site-packages/shap_e/diffusion/k_diffusion.py:113, in karras_sample(*args, **kwargs)
111 def karras_sample(*args, **kwargs):
112 last = None
--> 113 for x in karras_sample_progressive(*args, **kwargs):
114 last = x["x"]
115 return last
File ~/.conda/envs/cv2/lib/python3.10/site-packages/shap_e/diffusion/k_diffusion.py:181, in karras_sample_progressive(diffusion, model, shape, steps, clip_denoised, progress, model_kwargs, device, sigma_min, sigma_max, rho, sampler, s_churn, s_tmin, s_tmax, s_noise, guidance_scale)
178 else:
179 guided_denoiser = denoiser
--> 181 for obj in sample_fn(
182 guided_denoiser,
183 x_T,
184 sigmas,
185 progress=progress,
186 **sampler_args,
187 ):
188 if isinstance(diffusion, GaussianDiffusion):
189 yield diffusion.unscale_out_dict(obj)
File ~/.conda/envs/cv2/lib/python3.10/site-packages/torch/autograd/grad_mode.py:43, in _DecoratorContextManager._wrap_generator.<locals>.generator_context(*args, **kwargs)
40 try:
41 # Issuing `None` to a generator fires it up
42 with self.clone():
---> 43 response = gen.send(None)
45 while True:
46 try:
47 # Forward the response to our caller and get its next request
File ~/.conda/envs/cv2/lib/python3.10/site-packages/shap_e/diffusion/k_diffusion.py:265, in sample_heun(denoiser, x, sigmas, progress, s_churn, s_tmin, s_tmax, s_noise)
263 if gamma > 0:
264 x = x + eps * (sigma_hat**2 - sigmas[i] ** 2) ** 0.5
--> 265 denoised = denoiser(x, sigma_hat * s_in)
266 d = to_d(x, sigma_hat, denoised)
267 yield {"x": x, "i": i, "sigma": sigmas[i], "sigma_hat": sigma_hat, "pred_xstart": denoised}
File ~/.conda/envs/cv2/lib/python3.10/site-packages/shap_e/diffusion/k_diffusion.py:173, in karras_sample_progressive.<locals>.guided_denoiser(x_t, sigma)
171 x_t = th.cat([x_t, x_t], dim=0)
172 sigma = th.cat([sigma, sigma], dim=0)
--> 173 x_0 = denoiser(x_t, sigma)
174 cond_x_0, uncond_x_0 = th.split(x_0, len(x_0) // 2, dim=0)
175 x_0 = uncond_x_0 + guidance_scale * (cond_x_0 - uncond_x_0)
File ~/.conda/envs/cv2/lib/python3.10/site-packages/shap_e/diffusion/k_diffusion.py:160, in karras_sample_progressive.<locals>.denoiser(x_t, sigma)
159 def denoiser(x_t, sigma):
--> 160 _, denoised = model.denoise(
161 x_t, sigma, clip_denoised=clip_denoised, model_kwargs=model_kwargs
162 )
163 return denoised
File ~/.conda/envs/cv2/lib/python3.10/site-packages/shap_e/diffusion/k_diffusion.py:99, in GaussianToKarrasDenoiser.denoise(self, x_t, sigmas, clip_denoised, model_kwargs)
98 def denoise(self, x_t, sigmas, clip_denoised=True, model_kwargs=None):
---> 99 t = th.tensor(
100 [self.sigma_to_t(sigma) for sigma in sigmas.cpu().numpy()],
101 dtype=th.long,
102 device=sigmas.device,
103 )
104 c_in = append_dims(1.0 / (sigmas**2 + 1) ** 0.5, x_t.ndim)
105 out = self.diffusion.p_mean_variance(
106 self.model, x_t * c_in, t, clip_denoised=clip_denoised, model_kwargs=model_kwargs
107 )
TypeError: 'float' object cannot be interpreted as an integer
```
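For context, here is a minimal sketch of what the failing statement reduces to, under the assumption that sigma_to_t returns floating-point values (the values below are hypothetical). On the torch 1.x builds reported in this thread, constructing an integer tensor from floats raises this TypeError; on torch 2.x the same construction reportedly succeeds, which matches the comments below.

```python
import torch as th

# Hypothetical float timesteps, standing in for sigma_to_t's output.
float_ts = [63.7, 12.4]

# On the torch 1.x versions reported in this thread, this raises:
#   TypeError: 'float' object cannot be interpreted as an integer
t = th.tensor(float_ts, dtype=th.long)
```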
I have this same issue, running the notebook in JupyterLab.
```
Python implementation : CPython
Python version : 3.10.9
IPython version : 8.10.0
pytorch version : 1.12.1
torchvision version : 0.13.1
Compiler : Clang 14.0.6
OS : Darwin
Release : 22.5.0
Machine : arm64
Processor : arm
CPU cores : 10
Architecture : 64bit
```
Same here.
For me, everything works with pytorch 2.x instead of 1.x.
@nebomuk Thanks for that input! I installed the latest pytorch (`conda install pytorch torchvision -c pytorch`), reset the repository, and rebuilt shap-e. I'm running it now in Jupyter Notebook (I was having issues with the widgets in Lab) and am no longer getting the error.
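If you want to confirm the upgrade actually took effect inside the notebook kernel (an easy thing to miss when juggling conda environments), a quick check:

```python
import torch
import torchvision

# The comments above report success with pytorch 2.x.
print(torch.__version__, torchvision.__version__)
```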
I've been running this on an M1 for the last 15 minutes but still haven't seen any progress on this step: the kernel is busy and using 100% of the CPU, but progress is still at 0/64. How long should this notebook take to run successfully? Is there a configuration change I can make in Jupyter Notebook to increase the allotted resources?
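One thing to try before hunting for Jupyter resource settings is to shrink the sampling work itself. This is the notebook's own sample_latents call with smaller, purely illustrative values; quality will suffer, but it should make progress visible on a CPU:

```python
# Hedged workaround for CPU-only machines: do less work per run.
latents = sample_latents(
    batch_size=1,                       # down from 4
    model=model,
    diffusion=diffusion,
    guidance_scale=guidance_scale,
    model_kwargs=dict(texts=[prompt]),  # one prompt per sample in the batch
    progress=True,
    clip_denoised=True,
    use_fp16=False,                     # fp16 autocast mainly helps on GPUs
    use_karras=True,
    karras_steps=32,                    # down from 64; coarser sampling
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)
```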
I also had some weird progress reporting: sometimes the progress bar would start delayed, results would appear delayed, or I had to click twice. But at least there doesn't seem to be a bug in shap-e, because in the end everything worked.
In my case, because I'm on an M1 and don't have access to CUDA, it's running on the CPU and is just unusably slow. I'll have to try on a different machine's GPU.
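For what it's worth, torch >= 1.12 exposes Apple's MPS backend on Apple Silicon. Whether all of shap-e's ops run on MPS is untested here, but a device pick-up like the sketch below is a cheap thing to try before settling for CPU:

```python
import torch

# Untested assumption: shap-e's kernels may or may not be supported on MPS.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():  # Apple Silicon, torch >= 1.12
    device = torch.device("mps")
else:
    device = torch.device("cpu")
print(device)
```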
I'm working with pytorch version 1.13.1 and I have the same error (TypeError: 'float' object cannot be interpreted as an integer). I solved it by converting to an integer with int() on line 106 (the t = th.tensor(...) statement in GaussianToKarrasDenoiser.denoise). Here is the solution:
```python
t = th.tensor(
    # int() truncates the float timestep returned by sigma_to_t,
    # so the long tensor can be constructed on torch 1.x
    [int(self.sigma_to_t(sigma)) for sigma in sigmas.cpu().numpy()],
    dtype=th.long,
    device=sigmas.device,
)
```
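An equivalent variant, if you would rather leave the comprehension untouched: build a float tensor first and cast it, which truncates toward zero just like int():

```python
# Hedged alternative with the same effect as the int() fix above.
t = th.tensor(
    [self.sigma_to_t(sigma) for sigma in sigmas.cpu().numpy()],
    dtype=th.float32,
    device=sigmas.device,
).long()
```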
There is an add-on that simplifies the use of the model within 3D modeling software; check it out: https://devbud.gumroad.com/l/Shap-e