luciddreamer-cvlab/LucidDreamer

out of memory

tiandaoyuxi opened this issue · 5 comments


……
65 / 70
66 / 70
67 / 70
68 / 70
69 / 70
70 / 70
Reading Training Transforms
Loading Training Cameras
Loading Preset Cameras
Number of points at initialisation : 1283959
Traceback (most recent call last):
File "C:\DATA\LucidDreamer\run.py", line 53, in <module>
ld.create(rgb_cond, txt_cond, neg_txt_cond, args.campath_gen, args.seed, args.diff_steps)
File "C:\DATA\LucidDreamer\luciddreamer.py", line 188, in create
self.scene = Scene(self.traindata, self.gaussians, self.opt)
File "C:\DATA\LucidDreamer\scene\__init__.py", line 33, in __init__
self.gaussians.create_from_pcd(info.point_cloud, self.cameras_extent)
File "C:\DATA\LucidDreamer\scene\gaussian_model.py", line 136, in create_from_pcd
dist2 = torch.clamp_min(distCUDA2(torch.from_numpy(np.asarray(pcd.points)).float().cuda()), 0.0000001)
MemoryError: bad allocation: cudaErrorMemoryAllocation: out of memory

GPU: RTX 3090 with 24 GB of memory.
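The failure occurs inside distCUDA2, the CUDA nearest-neighbor kernel from the simple-knn extension. One way to check whether the GPU is genuinely out of memory, as opposed to the extension misbehaving, is to compute the same quantity on the CPU. The sketch below is a hypothetical workaround, assuming distCUDA2 returns the mean squared distance from each point to its 3 nearest neighbors; the helper name knn_dist2_cpu is my own:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_dist2_cpu(points, k=3):
    """Mean squared distance to the k nearest neighbors, computed on CPU.

    A stand-in for simple-knn's distCUDA2 to rule out GPU memory as the
    culprit. The 1e-7 floor mirrors the torch.clamp_min in create_from_pcd.
    """
    tree = cKDTree(points)
    # Query k+1 neighbors because each point's nearest neighbor is itself.
    dists, _ = tree.query(points, k=k + 1)
    return np.clip((dists[:, 1:] ** 2).mean(axis=1), 1e-7, None)
```

If this runs fine on the same point cloud while the CUDA path fails, the problem is almost certainly the compiled extension, not the data.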

Appending another error report from testing:

105 / 105
Reading Training Transforms
Loading Training Cameras
Loading Preset Cameras
Number of points at initialisation : 1615299
Traceback (most recent call last):
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\gradio\queueing.py", line 455, in call_prediction
output = await route_utils.call_process_api(
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\gradio\blocks.py", line 1533, in process_api
result = await self.call_function(
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\gradio\blocks.py", line 1151, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\gradio\utils.py", line 678, in wrapper
response = f(*args, **kwargs)
File "C:\DATA\LucidDreamer\luciddreamer.py", line 170, in run
gaussians = self.create(
File "C:\DATA\LucidDreamer\luciddreamer.py", line 188, in create
self.scene = Scene(self.traindata, self.gaussians, self.opt)
File "C:\DATA\LucidDreamer\scene\__init__.py", line 33, in __init__
self.gaussians.create_from_pcd(info.point_cloud, self.cameras_extent)
File "C:\DATA\LucidDreamer\scene\gaussian_model.py", line 136, in create_from_pcd
dist2 = torch.clamp_min(distCUDA2(torch.from_numpy(np.asarray(pcd.points)).float().cuda()), 0.0000001)
MemoryError: bad allocation: cudaErrorMemoryAllocation: out of memory

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\gradio\queueing.py", line 493, in process_events
response = await self.call_prediction(awake_events, batch)
File "C:\Users\ws3-01\anaconda3\envs\lucid\lib\site-packages\gradio\queueing.py", line 464, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None
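For scale, the point cloud in the second log is small relative to a 24 GB card: 1,615,299 points stored as float32 xyz come to roughly 19 MB, which suggests the allocation failure stems from a broken build of the CUDA extension rather than real memory pressure. A quick back-of-the-envelope check:

```python
# Rough memory footprint of the point cloud from the log above
n_points = 1_615_299
bytes_fp32 = n_points * 3 * 4  # x, y, z stored as 4-byte float32
print(f"{bytes_fp32 / 1e6:.1f} MB")  # ~19.4 MB, far below 24 GB
```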

same here, earlier today it did work, but then I couldn't get Gradio to work so reinstalled, and now I get out of memory with the same prompt and settings

Hello, I am currently investigating the cause of this error.
I will share the solution once it is resolved.

@tiandaoyuxi @murcje
I attribute the error to an incorrect installation from the .whl files.
Installing the rasterizer and simple-knn from .whl files currently causes unexpected behavior.
We need some time to fix the .whl installation, so please re-create the environment using the previous installation commands.
We have updated the installation instructions in the README.
Sorry for the inconvenience.
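After re-creating the environment, a quick import check can confirm that the compiled extensions load against the active PyTorch build. The module names below are assumptions based on the Gaussian-splatting codebase (simple-knn typically exposes simple_knn._C); adjust them if this repository uses different names:

```python
import importlib

def check_extensions(names=("simple_knn._C", "diff_gaussian_rasterization")):
    """Try importing each compiled CUDA extension and report the result.

    A failed import here (ImportError, undefined CUDA symbols, etc.) usually
    explains runtime errors like cudaErrorMemoryAllocation better than
    actual memory exhaustion does.
    """
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = "ok"
        except Exception as exc:
            status[name] = f"failed: {type(exc).__name__}: {exc}"
    return status
```

Running check_extensions() right after installation catches a bad wheel before any scene is loaded.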

Thanks! I'll try it out.