Python 3 support
Opened this issue · 5 comments
I have been able to compile PyAlembic for Python 3.7 on Linux and have a Python 3.7 testing environment.
We are slowly moving our tools to Python 3, and one of them uses alembic/cask to generate Alembic files from scratch.
My proposal is that I do this myself and submit a PR. The plan is that cask would be Python 2/3 compatible. I will try to rely on builtins:
from __future__ import (absolute_import,
                        division,
                        print_function,
                        unicode_literals)
And nothing more, so as not to add new dependencies (no six or future).
Is there any interest in this? Are you already working on it?
Thanks in advance!
This will definitely need to happen, and soon. Not sure if anyone is actively working on that, however. I am happy to do that, or at least contribute. I agree with the goal of making cask python 2/3 compatible and only using builtins to do it.
Thanks. For now I will just show the obvious problems and explain how I fix them so things keep working. It would be a "prototype support" in order to surface every potential problem to fix.
I don't claim to know everything about Python 2 to 2/3 migration, but we have upgraded many of our tools (a 5-year-old pipeline) to Python 2/3, so I have some experience in this area.
The first one is the C++ function signatures. Alembic functions take char* for strings. This means I have to convert Py3 str to bytes (as a reminder, Py2 str is bytes, except when __future__.unicode_literals is in effect).
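To make that concrete, here is what unicode_literals changes in a Python 2 session (an illustrative snippet, not taken from cask):
>>> type('abc')   # Python 2 default: a bare literal is bytes
<type 'str'>
>>> from __future__ import unicode_literals
>>> type('abc')   # the same literal is now unicode, which no longer matches PyAlembic's std::string signatures
<type 'unicode'>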
An example (Python 2 with __future__.unicode_literals):
> md.set(k, v)
E ArgumentError: Python argument types in
E MetaData.set(MetaData, unicode, unicode)
E did not match C++ signature:
E set(Alembic::AbcCoreAbstract::v12::MetaData {lvalue}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > key, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > data)
What it means here is that Python 2 unicode can't be passed to the set function because there is no function signature for unicode.
I'm not an expert with Boost.Python; maybe it's easy to add a unicode signature to PyAlembic.
I'm currently trying to explicitly convert to bytes:
import sys

def _to_bytes(s):
    if sys.version_info.major >= 3:
        return bytes(s, 'ascii')
    else:
        # implies __future__ is imported and strings are unicode by default
        return bytes(s)
(Don't focus on the Python 3 branch for now.)
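For reference, this is roughly how the helper behaves (a hypothetical interactive session; note the 'ascii' codec would reject non-ASCII metadata):
>>> _to_bytes('schema')   # Python 3: text is encoded to bytes for PyAlembic
b'schema'
>>> _to_bytes('schema')   # Python 2 + unicode_literals: back to a byte string
'schema'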
And apply it in the multiple places in the code where bytes are needed:
# before
for k, v in self.metadata.items():
    meta.set(k, v)

# after
for k, v in self.metadata.items():
    k = _to_bytes(k)
    v = _to_bytes(v)
    meta.set(k, v)

# before
if self.is_compound() and self.iobject:
    meta.set('schema', self.iobject.getMetaData().get('schema'))

# after
if self.is_compound() and self.iobject:
    meta.set(_to_bytes('schema'),
             self.iobject.getMetaData().get(_to_bytes('schema')))

# etc.
But this doesn't fix all the problems, especially in DeepDict.
I suspect cask uses strings all around because, Python 2 str being bytes by default, you can have a 1:1 relation between Python and PyAlembic without thinking about it:
Python str <---> PyAlembic
Without even talking about Python 3 yet, simply adding __future__.unicode_literals breaks everything.
I think there are two ways to fix this:
- Make PyAlembic deal with Python str by default.
- Make cask deal with bytes internally (sketched below): Python str <---> bytes <---> PyAlembic.
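A rough sketch of what that second option could look like, converting at the cask/PyAlembic boundary (the _to_str helper is illustrative, not part of cask's current API):
def _to_str(b):
    # decode bytes coming back from PyAlembic into text (str on Py3, unicode on Py2)
    if isinstance(b, bytes):
        return b.decode('ascii')
    return b

# encode on the way in, decode on the way out, e.g.:
# meta.set(_to_bytes('schema'), _to_bytes(value))
# schema = _to_str(self.iobject.getMetaData().get(_to_bytes('schema')))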
I will try to see if there is another way to deal with that.
Any advice?
So with explicit conversion, the string problem disappears and another one appears.
I took a pass at this and it was merged in #19.
Thanks, it looks like you faced the same problems as me but found a more elegant solution. Your code is better than what I did (we dropped cask internally to use PyAlembic directly).
It's not clear in your PR if you consider the job done (all tests pass?). Can I close this issue?
I've added a follow-up request, #20. The unit tests themselves needed porting to Python 3, but they are running now. I added support for running on Windows while I was in there.