warp CTC build is failing
sw005320 opened this issue · 8 comments
@ysk24ok, the warp CTC build is failing (possibly due to the PEP 440 handling introduced in pip 20.3?)
https://github.com/espnet/espnet/runs/1473939967
Could you check this?
@ysk24ok Could you please check this for the CPU tag as well?
I am having trouble building the CPU-based container because of the same problem:
warpctc-pytorch==0.2.1+torch14.cpu from ... has different version in metadata:0.2.1
It seems pip 20.3 no longer allows a mismatch between the version in METADATA (0.2.1) and the version encoded in the wheel filename (0.2.1+torchXX.cudaYY).
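For reference, here is a minimal sketch (my own illustration, not from the build logs) of the check pip >= 20.3 now enforces: compare the Version field in the wheel's METADATA against the version segment of the wheel filename. The wheel path below is only an example.

import zipfile
from email.parser import Parser

# hypothetical local wheel produced by the rename step
wheel_path = "warpctc_pytorch-0.2.1+torch14.cpu-cp38-cp38-manylinux1_x86_64.whl"
# wheel filenames follow {name}-{version}-{python}-{abi}-{platform}.whl
filename_version = wheel_path.split("-")[1]

with zipfile.ZipFile(wheel_path) as whl:
    metadata_name = next(n for n in whl.namelist() if n.endswith(".dist-info/METADATA"))
    metadata = Parser().parsestr(whl.read(metadata_name).decode("utf-8"))

# pip >= 20.3 rejects the wheel when these two differ; older pip ignored the mismatch
print(metadata["Version"], filename_version, metadata["Version"] == filename_version)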
When I apply the following patch to make the two versions match and then try to upload the wheel to test.pypi.org,
diff --git a/pytorch_binding/wheel/rename_wheels.py b/pytorch_binding/wheel/rename_wheels.py
index d6072a5..a08a810 100644
:
     return out.decode('utf-8').split()[-2][:-1].replace('.', '')
+def get_torch_version():
+    major_ver, minor_ver = torch.__version__.split('.')[:2]
+    return major_ver + minor_ver
+
+
+def get_local_version_identifier(enable_gpu):
+    local_version_identifier = '+torch{}'.format(get_torch_version())
+    if enable_gpu:
+        local_version_identifier += ".cuda{}".format(get_cuda_version())
+    else:
+        local_version_identifier += ".cpu"
+    return local_version_identifier
+
+
 if torch.cuda.is_available() or "CUDA_HOME" in os.environ:
     enable_gpu = True
     # For CUDA10.1, libcublas-10-2 is installed
@@ -75,9 +89,10 @@ ext_modules = [
     )
 ]
+base_version = "88.77.66"
 setup(
     name="warpctc_pytorch",
-    version="0.2.1",
+    version=base_version + get_local_version_identifier(enable_gpu),
     description="Pytorch Bindings for warp-ctc maintained by ESPnet",
     url="https://github.com/espnet/warp-ctc",
     author=','.join([

the following error occurs:
$ twine upload -r testpypi dist/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl
Uploading distributions to https://test.pypi.org/legacy/
Enter your username: espnet
/opt/pyenv/versions/3.8.5/lib/python3.8/site-packages/twine/auth.py:72: UserWarning: No recommended backend was available. Install a recommended 3rd party backend package; or, install the keyrings.alt package if you want to use the non-recommended backends. See https://pypi.org/project/keyring for details.
warnings.warn(str(exc))
Enter your password:
Uploading warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.00M/3.00M [00:02<00:00, 1.55MB/s]
NOTE: Try --verbose to see response content.
HTTPError: 400 Bad Request from https://test.pypi.org/legacy/
'88.77.66+torch16.cuda102' is an invalid value for Version. Error: Can't use PEP 440 local versions. See https://packaging.python.org/specifications/core-metadata for more information.
The error is raised here in warehouse (the codebase behind PyPI).
We can't upload wheels that use a PEP 440 local version identifier (so far we have managed to avoid this only because of the version mismatch).
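To make the terminology concrete, here is a quick sketch (my own illustration, using the packaging library) of what the local version identifier is; such versions parse fine locally, but PyPI refuses to host them:

from packaging.version import Version

v = Version("0.2.1+torch14.cuda100")
print(v.public)        # "0.2.1"            -> the part PyPI would accept
print(v.local)         # "torch14.cuda100"  -> the local version identifier PyPI rejects
print(v.base_version)  # "0.2.1"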
The solutions are:
- use pip<20.3, then we can download wheels from PyPI.
- serve wheels outside of PyPI. PyTorch serves wheels which use PEP 440 local versions here.
Oh, I found another solution:
- stop using PEP 440 local versions, but this leads to lots of wheel packages, such as warpctc_pytorchXX_cudaYY.
I have uploaded a warpctc_pytorch wheel to https://github.com/ysk24ok/wheel_serving_test and found that we can download a wheel like this even with pip >= 20.3.
[root@545fd607d18e pytorch_binding]# pip3 --version
pip 20.3.1 from /opt/pyenv/versions/3.8.5/lib/python3.8/site-packages/pip (python 3.8)
[root@545fd607d18e pytorch_binding]# pip3 install warpctc_pytorch==88.77.66+torch16.cuda102 -f https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true
Looking in links: https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true
WARNING: Skipping page https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true because the HEAD request got Content-Type: application/octet-stream.The only supported Content-Type is text/html
Collecting warpctc_pytorch==88.77.66+torch16.cuda102
Using cached https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true (3.1 MB)
Installing collected packages: warpctc-pytorch
Successfully installed warpctc-pytorch-88.77.66+torch16.cuda102
It's a hassle for users to specify the full URL for the -f option, but this way we don't need a server to host the wheels.
If there is no objection, I'll proceed in this way.
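As a rough illustration of how an install helper could spare users from typing the full URL, here is a hypothetical sketch that assembles the wheel URL for the -f option from the local environment; the repository path, version, and naming scheme are assumptions for illustration, not a decided layout:

import sys
import torch

def warpctc_wheel_url(version="88.77.66"):
    # e.g. torch 1.6.x -> "torch16"
    torch_tag = "torch" + "".join(torch.__version__.split(".")[:2])
    # e.g. CUDA 10.2 -> "cuda102"; CPU-only builds get "cpu"
    if torch.version.cuda is not None:
        accel_tag = "cuda" + torch.version.cuda.replace(".", "")
    else:
        accel_tag = "cpu"
    # e.g. Python 3.8 -> "cp38"
    py_tag = "cp{}{}".format(sys.version_info.major, sys.version_info.minor)
    wheel = "warpctc_pytorch-{v}+{t}.{a}-{p}-{p}-manylinux1_x86_64.whl".format(
        v=version, t=torch_tag, a=accel_tag, p=py_tag)
    return ("https://github.com/ysk24ok/wheel_serving_test/blob/main/"
            "{}?raw=true".format(wheel))

print(warpctc_wheel_url())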
That sounds like the best solution for now.
Please go ahead.
Until pip 21 is released, we can use the old resolver by adding --use-deprecated=legacy-resolver when installing warpctc-pytorch. This completes the installation as needed. Reference.
user@ip-xx-xx-xx-xx:~/projects/espnet/tools$ pip install --use-deprecated=legacy-resolver warpctc-pytorch==0.2.1+torch14.cuda100
Collecting warpctc-pytorch==0.2.1+torch14.cuda100
Using cached warpctc_pytorch-0.2.1%2Btorch14.cuda100-cp38-cp38-manylinux1_x86_64.whl (3.0 MB)
Installing collected packages: warpctc-pytorch
Successfully installed warpctc-pytorch-0.2.1
That sounds good. Thanks, @chintu619!
@ysk24ok, how about this solution?
@chintu619 Thanks for sharing.
But I think using the old resolver via --use-deprecated=legacy-resolver is only a short-term solution.
Sooner or later, we should move our warpctc_pytorch wheels off PyPI so that the new resolver can fetch them.
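If we do move the wheels off PyPI, one option (my assumption, mirroring how PyTorch serves its torch_stable.html page) is to publish a flat find-links index that pip's new resolver can read; note the warning in the log above that pip only accepts find-links pages served as text/html. A minimal sketch, with the wheel directory and output path as placeholders:

import os
from urllib.parse import quote

def write_find_links_index(wheel_dir, out_path="index.html"):
    # Link to every wheel in the directory; the "+" in a PEP 440 local
    # version must be percent-encoded in the href.
    links = []
    for name in sorted(os.listdir(wheel_dir)):
        if name.endswith(".whl"):
            links.append('<a href="{}">{}</a><br>'.format(quote(name), name))
    with open(out_path, "w") as f:
        f.write("<!DOCTYPE html>\n<html><body>\n{}\n</body></html>\n".format("\n".join(links)))

# then users would run, for example:
#   pip3 install warpctc_pytorch==88.77.66+torch16.cuda102 -f https://<host>/index.html
write_find_links_index("dist")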