aliyun/NeWCRFs

UnicodeDecodeError because the code and txt files are not utf-8

YuhsiHu opened this issue · 2 comments

Hey! Thank you for your great work!
I found that these files are not UTF-8 encoded, so training or evaluation fails with:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 8: invalid continuation byte
Could you please update them so that other researchers can follow the README without errors? Thank you for your time!

Even after I converted all the files to UTF-8, the error still occurred when I ran train.py, and train.py ended up in latin1 encoding again. That is weird ...
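For anyone hitting the same error, a quick way to locate the offending files is to scan the repo for anything that fails to decode as UTF-8 and re-encode it. This is a minimal stdlib sketch, not part of the NeWCRFs code; the extension list and the assumption that the files currently decode as latin-1 are mine:

```python
from pathlib import Path


def find_non_utf8_files(root: str, exts=(".py", ".txt")):
    """Return paths under `root` whose bytes fail to decode as UTF-8."""
    bad = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            try:
                path.read_bytes().decode("utf-8")
            except UnicodeDecodeError:
                bad.append(path)
    return bad


def reencode_to_utf8(path: Path, src_encoding: str = "latin-1") -> None:
    """Rewrite a file as UTF-8, assuming it currently decodes as `src_encoding`."""
    text = path.read_bytes().decode(src_encoding)
    path.write_text(text, encoding="utf-8")
```

Note that this only helps if the files are genuinely mis-encoded text; it will not fix binary files such as checkpoints.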

python newcrfs/train.py configs/arguments_train_nyu.txt
mkdir: cannot create directory './models/newcrfs_nyu': No such file or directory
cp: cannot create regular file './models/newcrfs_nyu': No such file or directory
cp: cannot create regular file './models/newcrfs_nyu': No such file or directory
You have specified --do_online_eval.
This will evaluate the model every eval_freq 1000 steps and save best models for individual eval metrics.
configs/arguments_train_nyu.txt
== Use GPU: 0 for training
/home/hyx/anaconda3/envs/newcrfs/lib/python3.8/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
== Load encoder backbone from: /media/hyx/dataset/NeWCRFs/swin_transformer/swin_large_patch4_window7_224_22k.pth
Traceback (most recent call last):
  File "newcrfs/train.py", line 441, in <module>
    main()
  File "newcrfs/train.py", line 435, in main
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
  File "/home/hyx/anaconda3/envs/newcrfs/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/hyx/anaconda3/envs/newcrfs/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/home/hyx/anaconda3/envs/newcrfs/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/hyx/anaconda3/envs/newcrfs/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/media/hyx/code/NeWCRFs/newcrfs/train.py", line 186, in main_worker
    model = NewCRFDepth(version=args.encoder, inv_depth=False, max_depth=args.max_depth, pretrained=args.pretrain)
  File "/media/hyx/code/NeWCRFs/newcrfs/networks/NewCRFDepth.py", line 91, in __init__
    self.init_weights(pretrained=pretrained)
  File "/media/hyx/code/NeWCRFs/newcrfs/networks/NewCRFDepth.py", line 101, in init_weights
    self.backbone.init_weights(pretrained=pretrained)
  File "/media/hyx/code/NeWCRFs/newcrfs/networks/swin_transformer.py", line 584, in init_weights
    load_checkpoint(self, pretrained, strict=False)
  File "/media/hyx/code/NeWCRFs/newcrfs/networks/newcrf_utils.py", line 214, in load_checkpoint
    checkpoint = _load_checkpoint(filename, map_location)
  File "/media/hyx/code/NeWCRFs/newcrfs/networks/newcrf_utils.py", line 190, in _load_checkpoint
    checkpoint = torch.load(filename, map_location=map_location)
  File "/home/hyx/anaconda3/envs/newcrfs/lib/python3.8/site-packages/torch/serialization.py", line 713, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/hyx/anaconda3/envs/newcrfs/lib/python3.8/site-packages/torch/serialization.py", line 920, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xab in position 3: invalid start byte
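This second traceback fails inside `torch.load` before any source file is parsed, so re-encoding the code cannot fix it: the checkpoint bytes themselves are not a valid pickle, which points at the `.pth` file being altered on disk (here, by the transparent encryption). A rough sanity check on a checkpoint's leading bytes can tell the two torch save formats apart from garbage; this is a heuristic sketch of mine, not a PyTorch API:

```python
def checkpoint_kind(path: str) -> str:
    """Classify a saved-model file by its first bytes (heuristic)."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(b"PK"):
        return "zip"            # modern torch.save writes a zip archive
    if head[:1] == b"\x80":
        return "legacy-pickle"  # pickle PROTO opcode, seen in the legacy format
    return "unknown"            # neither zip nor pickle: likely corrupted/encrypted
```

If this reports "unknown" for `swin_large_patch4_window7_224_22k.pth`, the file needs to be re-downloaded to (or decrypted on) an unencrypted path.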

Sorry for opening this issue. I found that the files on my machine are encrypted for data security, which changed their encoding format.