roym899/pose_and_shape_evaluation

No matching distribution found for yoco

Closed this issue · 14 comments

Hello, thank you for your valuable work! I'm planning to test my own network with this toolbox, but when installing it I ran into the following problem:

ERROR: Could not find a version that satisfies the requirement yoco (from versions: none)
ERROR: No matching distribution found for yoco

I couldn't find a solution online. How can I fix this? Is it caused by a version mismatch of some package?


It should work with newer Python versions (>= 3.7) if you can switch. I will check whether everything works with Python 3.6 and let you know once it's fixed.

It should be fixed for 3.6 now 🙂
Feel free to close the issue if it works for you.

Hi, thanks for your quick response! The environment problem is fixed and I have configured everything I need. But the evaluation speed on my machine (RTX 3090 GPU) is really slow... How long does the evaluation take to complete on your machine?


It should be much faster. Could you post your output from
`pip list`
and
`python -m torch.utils.collect_env`?

`pip list`:

```
Package Version


absl-py 0.15.0
addict 2.3.0
aiohttp 3.8.1
aiosignal 1.2.0
anyio 3.5.0
appdirs 1.4.4
apturl 0.5.2
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
asn1crypto 0.24.0
astor 0.8.1
async-generator 1.10
async-timeout 4.0.1
asynctest 0.13.0
attrs 21.2.0
Babel 2.9.1
backcall 0.2.0
beautifulsoup4 4.10.0
bleach 4.1.0
blessings 1.7
Brlapi 0.6.6
cached-property 1.5.2
cachetools 4.2.2
certifi 2020.12.5
cffi 1.15.0
chardet 4.0.0
charset-normalizer 2.0.9
click 8.0.1
cmake 3.18.4.post1
command-not-found 0.3
contextvars 2.4
cpas-toolbox 0.1.0
cryptography 2.1.4
cupshelpers 1.0
cycler 0.10.0
Cython 0.29.21
dataclasses 0.8
decorator 4.4.2
defer 1.0.6
defusedxml 0.7.1
deprecation 2.1.0
distro 1.5.0
distro-info 0.18ubuntu0.18.04.1
einops 0.3.2
entrypoints 0.4
filelock 3.4.1
Flask 2.0.1
frozenlist 1.2.0
future 0.18.2
gast 0.2.2
gdown 4.3.1
google 3.0.0
google-auth 1.30.0
google-auth-oauthlib 0.4.4
google-pasta 0.2.0
gpustat 0.6.0
grpcio 1.41.0
h5py 2.10.0
httplib2 0.9.2
idna 2.10
idna-ssl 1.1.0
imagecorruptions 1.1.0
imageio 2.9.0
imgaug 0.4.0
immutables 0.16
importlib-metadata 4.8.1
ipykernel 5.5.6
ipython 7.16.3
ipython-genutils 0.2.0
ipywidgets 7.6.5
itsdangerous 2.0.1
jedi 0.17.2
Jinja2 3.0.1
joblib 1.1.0
json5 0.9.6
jsonschema 3.2.0
jupyter-client 7.1.2
jupyter-core 4.9.2
jupyter-packaging 0.10.6
jupyter-server 1.13.1
jupyterlab 3.2.9
jupyterlab-pygments 0.1.2
jupyterlab-server 2.10.3
jupyterlab-widgets 1.0.2
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
keyring 10.6.0
keyrings.alt 3.0
kiwisolver 1.3.1
language-selector 0.1
launchpadlib 1.10.6
lazr.restfulclient 0.13.5
lazr.uri 1.0.3
llvmlite 0.32.1
louis 3.5.0
macaroonbakery 1.1.3
Mako 1.0.7
Markdown 3.3.4
MarkupSafe 2.0.1
matplotlib 3.3.4
meshio 4.4.6
mistune 0.8.4
mmcv 1.1.2
multidict 5.2.0
MultiScaleDeformableAttention 1.0
nbclassic 0.3.5
nbclient 0.5.9
nbconvert 6.0.7
nbformat 5.1.3
nest-asyncio 1.5.4
netifaces 0.10.4
networkx 2.5.1
nn-distance 0.0.0
notebook 6.4.8
numba 0.49.1
numpy 1.19.5
nvidia-ml-py3 7.352.0
oauth 1.0.1
oauthlib 3.1.0
olefile 0.45.1
open3d 0.14.1
opencv-python 4.5.5.62
opencv-python-headless 4.5.5.62
opt-einsum 3.3.0
packaging 20.9
pandas 1.1.5
pandocfilters 1.5.0
parso 0.7.1
pexpect 4.2.1
pickleshare 0.7.5
Pillow 8.2.0
pip 21.3.1
pointnet2-ops 3.0.0
progress 1.5
prometheus-client 0.13.1
prompt-toolkit 3.0.28
protobuf 3.16.0
psutil 5.8.0
ptyprocess 0.7.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pybind11 2.8.1
pycairo 1.16.2
pycocotools 2.0.2
pycparser 2.21
pycrypto 2.6.1
pycups 1.9.73
Pygments 2.11.2
PyGObject 3.26.1
pymacaroons 0.13.0
PyNaCl 1.1.2
pyparsing 2.4.7
pyRFC3339 1.0
pyrsistent 0.18.0
PySocks 1.7.1
python-apt 1.6.5+ubuntu0.7
python-dateutil 2.8.1
python-debian 0.1.32
pytorchyolo 1.2.0
pytz 2018.3
pyvista 0.32.1
PyWavelets 1.1.1
pyxdg 0.25
PyYAML 6.0
pyzmq 22.3.0
reportlab 3.4.0
requests 2.25.1
requests-oauthlib 1.3.0
requests-unixsocket 0.1.5
rsa 4.7.2
ruamel.yaml 0.17.21
ruamel.yaml.clib 0.2.6
scikit-build 0.11.1
scikit-image 0.17.2
scikit-learn 0.24.2
scipy 1.4.1
scooby 0.5.9
SecretStorage 2.3.1
Send2Trash 1.8.0
setuptools 56.1.0
Shapely 1.7.1
simplejson 3.13.2
six 1.16.0
sklearn 0.0
sniffio 1.2.0
soupsieve 2.2.1
ssh-import-id 5.7
system-service 0.3
systemd-python 234
tensorboard 1.15.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.0
tensorboardX 2.2
tensorflow-estimator 1.15.1
tensorflow-gpu 1.15.0
termcolor 1.1.0
terminado 0.12.1
terminaltables 3.1.0
testpath 0.5.0
threadpoolctl 3.0.0
tifffile 2020.9.3
tikzplotlib 0.9.12
timm 0.4.12
tomlkit 0.9.2
torch 1.10.1+cu113
torchaudio 0.10.1+cu113
torchsummary 1.5.1
torchvision 0.11.2+cu113
tornado 6.1
tqdm 4.60.0
traitlets 4.3.3
trimesh 3.10.1
typing-extensions 3.10.0.2
ubuntu-drivers-common 0.0.0
ufw 0.36
unattended-upgrades 0.1
urllib3 1.26.4
usb-creator 0.3.3
virtualenv 15.1.0
vtk 9.1.0
wadllib 1.3.2
wcwidth 0.2.5
webencodings 0.5.1
websocket-client 1.2.3
Werkzeug 2.0.2
wheel 0.37.0
widgetsnbextension 3.5.2
wrapt 1.13.2
wslink 1.3.0
xkit 0.0.0
yapf 0.30.0
yarl 1.7.2
yoco 1.0.2
zipp 3.6.0
zope.interface 4.3.2
```

`python -m torch.utils.collect_env`:

```
PyTorch version: 1.10.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.25

Python version: 3.6.9 (default, Dec 8 2021, 21:08:43) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-96-generic-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090

Nvidia driver version: 470.74
cuDNN version: Probably one of the following:
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorchyolo==1.2.0
[pip3] torch==1.10.1+cu113
[pip3] torchaudio==0.10.1+cu113
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.11.2+cu113
[conda] Could not collect
```

Thanks. My guess is that scipy 1.4.1's KDTree is the bottleneck. Unfortunately that's the last scipy version available for Python 3.6. I will see if I can reproduce the problem and do anything about it in the next few days.
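If you want to check this on your machine, here is a minimal, self-contained sketch that times scipy's KD-tree nearest-neighbour queries on random point clouds (illustrative only, not the toolbox's actual metric code; the point counts are made up):

```python
# Time a nearest-neighbour query with scipy's cKDTree on random point clouds.
# If this is very slow in your environment, the KD-tree is likely the bottleneck.
import time

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
source = rng.random((10000, 3))  # e.g., points sampled from a predicted mesh
target = rng.random((10000, 3))  # e.g., points sampled from the ground-truth mesh

start = time.perf_counter()
tree = cKDTree(target)
distances, _ = tree.query(source)  # nearest neighbour in target for every source point
print(f"mean NN distance: {distances.mean():.4f}, time: {time.perf_counter() - start:.3f}s")
```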

Until then, you could try a more recent Python version if that's possible (e.g., with pyenv). 3.7-3.9 should all be fine.

I will try a newer Python later and test on other machines. Hope it works! Thanks!!!

Hi, sorry to bother you again. I have evaluated SPD on the REAL275 dataset on two different Python 3.7 machines; one took 5 hours and the other took 2 hours. I think there may still be some version issues, but the speed seems normal now. However, I can't complete the whole evaluation with CASS and ASM-Net: a segmentation fault occurs. I have encountered this problem on both machines.
[screenshot of the segmentation fault]

For CASS, this error occurs in foldingnet.py
[screenshot of the CASS error in foldingnet.py]

For ASM-Net, this error occurs in pointnet.py
[screenshot of the ASM-Net error in pointnet.py]

Could there be a hidden bug in the code? I haven't solved this problem yet; can you provide some advice?

Also, I have two other questions:

(1) There seems to be an error in Section 3.1 (Problem Definition) of the paper: "the camera is at the origin of frame w (world)", and the object pose estimation is defined as wO = wToO, i.e., the transformation from the object coordinate frame to the world coordinate frame. Shouldn't this be the transformation from the object coordinate frame to the camera coordinate frame? Is it wrong here? (A small numpy sketch of the convention I mean follows after question (2).)

(2) In the code RedWood_dataset.py, the link to download the dataset does not work:

 https://drive.google.com/u/0/uc?id=1PMvIblsXWDxEJykVwhUk_QEjy4_bmDU
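To make question (1) concrete, here is a tiny numpy sketch of the convention I have in mind (illustrative only, not code from the paper or the toolbox; I read wTo as the transform that maps points from the object frame o into frame w):

```python
# Hypothetical pose wTo: the object rotated 90 degrees about z and translated.
# A point expressed in the object frame o is mapped into frame w by wTo.
import numpy as np

R_wo = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
t_wo = np.array([0.5, 0.0, 1.0])

T_wo = np.eye(4)
T_wo[:3, :3] = R_wo
T_wo[:3, 3] = t_wo

p_o = np.array([0.1, 0.0, 0.0, 1.0])  # point in the object frame (homogeneous)
p_w = T_wo @ p_o                      # same point expressed in frame w
print(p_w[:3])                        # [0.5 0.1 1. ]
```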

Those times sound more reasonable. You can speed it up further by passing `--fast_eval True`, which will only evaluate every 10th sample. The results are pretty much the same, given the sequential nature of the REAL275 dataset.

I can't seem to reproduce the problem right now. I have mostly used torch with CUDA 10.2, but I just tried a fresh install with 11.3 and it worked fine as well. Maybe you can try a fresh environment with this requirements.txt file and see if that helps?

`pip install -r requirements.txt -f https://download.pytorch.org/whl/cu113/torch_stable.html`

On the other two questions:
(1) We use world and camera as synonyms, hence the first sentence "the camera is at the origin of frame w".
(2) The full URL is https://drive.google.com/u/0/uc?id=1PMvIblsXWDxEJykVwhUk_QEjy4_bmDU-&export=download. It's just a line break in the string:

"https://drive.google.com/u/0/uc?id=1PMvIblsXWDxEJykVwhUk_QEjy4_bmDU"
"-&export=download"

Do you still have this problem, or have you figured out a way to solve it? Otherwise I will close this issue, since I can't reproduce it.

Oh, sorry for not replying these past few days. I successfully tested SPD, CASS, and ASM-Net on REAL275 and RedWood75 using Python 3.8. But the inference time of each model on REAL275 is 2-3 hours, which is very slow. Can you tell me how long it takes to test these models on your machine, and which CUDA and PyTorch versions you use?

I've been working on my own models these days, but they don't work yet. I suspect it's a problem with my machine's memory. How much memory does the machine you use have?

The speed is similar for me on the REAL275 dataset. We evaluate sample by sample and there are 16,000 samples, where each sample can take around a second because we're computing the metrics on the CPU (which is the bottleneck right now). In general I would recommend not running this evaluation too often, since it's meant to be the test set, so I wouldn't consider the slow speed a big issue. But if you want a faster feedback cycle, `--fast_eval True` can be used for REAL275, giving essentially the same results while only evaluating every 10th sample. I'm sure there are some simple optimizations that could speed up the metrics.
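Conceptually, `--fast_eval True` just strides over the dataset. A minimal sketch of the idea (not the toolbox's actual code; the function and variable names are made up):

```python
# Evaluate only every 10th sample in fast mode; since REAL275 is a sequential
# dataset, the subsampled metrics are very close to the full evaluation.
def evaluate(dataset, compute_metrics, fast_eval=False):
    step = 10 if fast_eval else 1
    results = []
    for i in range(0, len(dataset), step):
        results.append(compute_metrics(dataset[i]))
    return results
```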

I have tried it on two different machines (12 GB VRAM + 94 GB RAM, and 8 GB VRAM + 16 GB RAM), both with CUDA 10.2 and CUDA 11.3 and the most recent torch versions.

Got it! Thank you very much!