parallel_party - Custom Python module wrapping CUDA in a Pythonic way (test). A general-purpose Python multithreading module offering easy, Pythonic access to CPU or GPU parallelization (CUDA). This is a test / proof of concept I did for learning purposes, and it is not meant to grow any larger or evolve into a serious project.
Its features include:
- Process Python types (lists etc.) in parallel on the GPU
- DCC agnostic
- Acceleration structures for minimizing uploads to GPU memory, or naive complete-process computation
- Extracting data out of Python types and packaging it again is not thread-safe and therefore has to be done serially (see the sketch after this list). This is a major limiting factor right now and a big hit to performance. I'm not sure if there is any way to speed this up; ideas or suggestions are welcome!
- Wrapping: Boost.Python
- Python Versions: 2.6 x64, 2.7 x64
- CUDA: 5.5
- Maya Versions: 2012, 2013, 2013.5, 2014
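The module's actual API is not documented here, so the following is only a rough sketch of the usage pattern described above; the `gpu_square` name and its signature are assumptions for illustration, not the real interface. It shows where the serial extract/pack phases sit around the parallel GPU call, which is the bottleneck mentioned in the feature list.

```python
from array import array

# `gpu_square` stands in for whatever entry point the Boost.Python
# extension actually exposes -- the name and signature are assumptions.
import parallel_party

values = [0.5 * i for i in range(1000000)]

# Serial phase: unbox the Python floats into a contiguous buffer.
# Walking the list element by element is not thread-safe, so this
# step cannot be parallelized and is the main performance hit.
buf = array('f', values)

# Parallel phase: the flat buffer is uploaded to GPU memory once and
# processed by a CUDA kernel (hypothetical call).
squared = parallel_party.gpu_square(buf)

# Serial phase again: repackage the raw results as a plain Python list.
results = list(squared)
```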