Prinsphield/ELEGANT

RuntimeError: Couldn't open shared event: <torch_20488_2149800689_event>, error code: <2> at ..\src\TH\THAllocator.c:218

c1a1o1 opened this issue · 2 comments

E:\Users\Raytine\Anaconda3\python.exe F:/zhaiyao/ELEGANT-master/ELEGANT.py -m train -a Bangs Mustache -g 0
Namespace(attributes=['Bangs', 'Mustache'], gpu=[0], input=None, linear=False, matrix=False, mode='train', restore=None, size=None, swap=False, swap_list=[], target=None)
F:\zhaiyao\ELEGANT-master\nets.py:64: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
nn.init.normal(m.weight, 0, 0.02)
F:\zhaiyao\ELEGANT-master\nets.py:121: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
nn.init.normal(m.weight, 0, 0.02)
F:\zhaiyao\ELEGANT-master\nets.py:169: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
nn.init.normal(m.weight, 0, 0.02)
F:\zhaiyao\ELEGANT-master\nets.py:176: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
nn.init.normal(m.weight, 0, 0.02)
F:\zhaiyao\ELEGANT-master\nets.py:177: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
nn.init.constant(m.bias, 0)
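
(These warnings are harmless and unrelated to the crash below; nets.py just uses the old initializer names. A sketch of the renamed calls, with an illustrative layer rather than the repo's actual module:)

import torch.nn as nn

m = nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1)  # illustrative layer, not ELEGANT's
nn.init.normal_(m.weight, 0, 0.02)   # replaces the deprecated nn.init.normal
nn.init.constant_(m.bias, 0)         # replaces the deprecated nn.init.constant
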
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\spawn.py", line 106, in spawn_main
    exitcode = _main(fd)
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
    prepare(preparation_data)
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\spawn.py", line 226, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path
    run_name="__mp_main__")
  File "E:\Users\Raytine\Anaconda3\lib\runpy.py", line 254, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "E:\Users\Raytine\Anaconda3\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "E:\Users\Raytine\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "F:\zhaiyao\ELEGANT-master\ELEGANT.py", line 5, in <module>
    from dataset import config, MultiCelebADataset
  File "F:\zhaiyao\ELEGANT-master\dataset.py", line 6, in <module>
    import torch
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\__init__.py", line 78, in <module>
    from torch._C import *
ImportError: DLL load failed: 页面文件太小,无法完成操作。 (The paging file is too small for this operation to complete.)

Traceback (most recent call last):
  File "F:/zhaiyao/ELEGANT-master/ELEGANT.py", line 470, in <module>
    main()
  File "F:/zhaiyao/ELEGANT-master/ELEGANT.py", line 450, in main
    model.train()
  File "F:/zhaiyao/ELEGANT-master/ELEGANT.py", line 271, in train
    A, y_A = next(self.dataset.gen(attribute_id, True))
  File "F:\zhaiyao\ELEGANT-master\dataset.py", line 102, in gen
    for data in dataloader:
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
    return _DataLoaderIter(self)
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
    w.start()
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\context.py", line 313, in _Popen
    return Popen(process_obj)
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
    reduction.dump(process_obj, to_child)
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x000001BAD6FA6630>>
Traceback (most recent call last):
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 349, in __del__
    self._shutdown_workers()
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 328, in _shutdown_workers
    self.worker_result_queue.get()
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\queues.py", line 345, in get
    return ForkingPickler.loads(res)
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py", line 86, in rebuild_storage_filename
    storage = cls._new_shared_filename(manager, handle, size)
RuntimeError: Couldn't open shared event: <torch_30504_3734633734_event>, error code: <2> at ..\src\TH\THAllocator.c:218
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x000001BAD5F4CF28>>
Traceback (most recent call last):
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 349, in __del__
    self._shutdown_workers()
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 328, in _shutdown_workers
    self.worker_result_queue.get()
  File "E:\Users\Raytine\Anaconda3\lib\multiprocessing\queues.py", line 345, in get
    return ForkingPickler.loads(res)
  File "E:\Users\Raytine\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py", line 86, in rebuild_storage_filename
    storage = cls._new_shared_filename(manager, handle, size)
RuntimeError: Couldn't open shared event: <torch_20488_2149800689_event>, error code: <2> at ..\src\TH\THAllocator.c:218
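
For reference, on Windows this combination of errors usually traces back to the DataLoader worker processes: they are started with "spawn", re-import the training script at module level, and each of them loads torch again, which fails once the paging file runs out. A minimal sketch of the usual workaround, assuming a DataLoader is built roughly the way dataset.py's gen() suggests (the names below are illustrative, not the repo's actual code):

import torch
from torch.utils.data import DataLoader, TensorDataset

def make_loader():
    # Placeholder data; stands in for the CelebA batches the real code loads.
    data = TensorDataset(torch.zeros(8, 3, 64, 64), torch.zeros(8, dtype=torch.long))
    # num_workers=0 keeps loading in the main process, avoiding the spawned
    # workers that trigger the DLL load / shared event failures above.
    return DataLoader(data, batch_size=4, shuffle=True, num_workers=0)

def train():
    for images, labels in make_loader():
        pass  # training step would go here

if __name__ == '__main__':
    # The __main__ guard matters on Windows: spawned workers re-import this
    # file, and anything outside the guard runs again in every worker.
    train()

Alternatively, enlarging the Windows paging file lets each worker load torch, so num_workers > 0 can be kept.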

Check that you have the datasets correctly placed in the directory.

Hi, I have the same problem. How did you solve it?