[BUG] Running dense teacher with pods_train --dir . raises ValueError: not allowed to raise maximum limit
mary-0830 opened this issue · 1 comment
mary-0830 commented
Please use this template to submit your problem, otherwise your question will NOT be properly handled.
- We recommend that you check the existing issues before creating a new issue to see if anyone has encountered the same issue as you.
- Please check whether the software version is the latest version first. If not, you can try to update to the latest version and see if the problem is resolved.
To help us understand the problem you are facing more quickly, please provide the following basic information.
- OS version: e.g. Ubuntu 18.04
- Environment version: Python / PyTorch / CUDA / cuDNN version
- cvpods version: you can use git log to get the corresponding commit id
The following information should be provided as text rather than screenshots so that it can be retrieved later. If the template does not fit your case, it can be modified, but please keep the content above this line.
I ran the following command:
pods_train --dir .

The error is as follows:
2022-07-20 14:19:28.874 | INFO | __main__:<module>:155 - Create soft link to outputs
Traceback (most recent call last):
File "/hd-4t/ljj/cvpods/tools/train_net.py", line 157, in <module>
launch(
File "/hd-4t/ljj/cvpods/cvpods/engine/launch.py", line 45, in launch
comm.configure_nccl()
File "/hd-4t/ljj/cvpods/cvpods/utils/distributed/comm.py", line 88, in configure_nccl
resource.setrlimit(resource.RLIMIT_NOFILE, (32768, 32768))
ValueError: not allowed to raise maximum limit
Expected result: training starts normally.
Actual result: the ValueError shown in the traceback above.
poodarchu commented
This is because the VM you are running on does not allow users to raise the resource limit requested by this call:
resource.setrlimit(resource.RLIMIT_NOFILE, (32768, 32768))
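A minimal workaround sketch (my own, not code from cvpods): query the current hard limit first and only raise the soft limit up to it, so the call cannot fail on hosts where unprivileged processes are not allowed to increase the hard limit. The helper name set_nofile_limit is hypothetical.

import resource

def set_nofile_limit(target=32768):
    # Hypothetical helper: raise the open-file soft limit as far as the
    # environment allows instead of hard-coding (32768, 32768).
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Unprivileged processes may not raise the hard limit, so clamp the
    # requested soft limit to the existing hard limit.
    if hard == resource.RLIM_INFINITY:
        new_soft = target
    else:
        new_soft = min(target, hard)
    if new_soft > soft:
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)

Alternatively, if you have root access on the host, raising the hard limit before launching training (for example with ulimit -n in the launching shell as root, or via /etc/security/limits.conf) avoids the error without patching configure_nccl.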