SociallyIneptWeeb/AICoverGen

CUDA out of memory / "Incomplete processed batches" error on a 4 GB GPU — need help


Microsoft Windows [Version 10.0.19045.3693]
(c) Microsoft Corporation. All rights reserved.

C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src>python webui.py
2023-12-13 18:56:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-12-13 18:56:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-12-13 18:56:07 | INFO | faiss.loader | Loading faiss.
2023-12-13 18:56:07 | INFO | faiss.loader | Successfully loaded faiss.
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
2023-12-13 18:56:28 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-12-13 18:56:28 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
2023-12-13 18:56:32 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-12-13 18:56:32 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
100%|██████████████████████████████████████████████████████████████████████████████████| 31/31 [00:14<00:00, 2.19it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 31/31 [00:12<00:00, 2.51it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 31/31 [00:08<00:00, 3.55it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 31/31 [00:08<00:00, 3.57it/s]
6%|█████▏ | 1/16 [00:01<00:28, 1.89s/it]
Exception in thread Thread-11:
Traceback (most recent call last):
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\threading.py", line 950, in _bootstrap_inner
self.run()
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\threading.py", line 888, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\mdx.py", line 194, in _process_wave
processed_wav = self.model.istft(processed_spec.to(self.device))
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\mdx.py", line 51, in istft
x = x.contiguous()
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 4.00 GiB total capacity; 136.05 MiB already allocated; 24.76 MiB free; 144.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
6%|█████▏ | 1/16 [00:03<00:51, 3.42s/it]
Traceback (most recent call last):
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\main.py", line 281, in song_cover_pipeline
orig_song_path, vocals_path, instrumentals_path, main_vocals_path, backup_vocals_path, main_vocals_dereverb_path = preprocess_song(song_input, mdx_model_params, song_id, is_webui, input_type, progress)
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\main.py", line 188, in preprocess_song
_, main_vocals_dereverb_path = run_mdx(mdx_model_params, song_output_dir, os.path.join(mdxnet_models_dir, 'Reverb_HQ_By_FoxJoy.onnx'), main_vocals_path, invert_suffix='DeReverb', exclude_main=True, denoise=True)
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\mdx.py", line 262, in run_mdx
wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads))
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\mdx.py", line 234, in process_wave
assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!'
AssertionError: Incomplete processed batches, please reduce batch size!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\routes.py", line 442, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\blocks.py", line 1392, in process_api
result = await self.call_function(
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\blocks.py", line 1097, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\Jaideep\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\utils.py", line 703, in wrapper
response = f(*args, **kwargs)
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\main.py", line 316, in song_cover_pipeline
raise_exception(str(e), is_webui)
File "C:\Users\Jaideep\Documents\loki\AICoverGen-main\AICoverGen-main\src\main.py", line 83, in raise_exception
raise gr.Error(error_msg)
gradio.exceptions.Error: 'Incomplete processed batches, please reduce batch size!'
2023-12-13 18:57:55 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
2023-12-13 18:57:55 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"

It looks like your GPU might not have enough VRAM: the log shows a 4 GiB card, and the MDX de-reverb step (`Reverb_HQ_By_FoxJoy.onnx`) fails with `torch.cuda.OutOfMemoryError`, which then surfaces as the "Incomplete processed batches" assertion.
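If upgrading hardware isn't an option, one generic PyTorch mitigation may be worth trying first. This is my reading of the error text, not a setting documented by this project: the OOM message itself suggests `max_split_size_mb` to reduce allocator fragmentation. It has to be set before PyTorch initializes CUDA, and `128` below is an arbitrary starting value, not a known-good one:

```python
import os

# Must run before any torch code touches CUDA (i.e. before `import torch`
# in webui.py, or exported in the shell before launching):
#   Windows cmd:  set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# 128 MiB is a guess; smaller values trade speed for less fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Separately, the assertion message ("please reduce batch size!") and the `m_threads` argument visible in the `process_wave` call in your traceback suggest lowering whatever controls that value in `mdx.py`; I can't confirm from the log alone where it is configured.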