HybridBackend is a high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters.
- Memory-efficient loading of categorical data (see the loading sketch below)
- Communication-efficient training and evaluation at scale
- GPU-efficient orchestration of embedding layers
- Easy to use with existing AI workflows
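As a sketch of the first point, categorical training data stored in Parquet files can be read directly into batches instead of being materialized row by row. This is a minimal, hypothetical sketch: the `hb.data.ParquetDataset` reader name, the file path, and the batch size are assumptions here and should be checked against the documentation of the installed release.

```python
import tensorflow as tf
import hybridbackend.tensorflow as hb

# Hypothetical Parquet files holding categorical columns; the path is a placeholder.
filenames = ['/path/to/day_0.parquet']

# Assumed reader API: hb.data.ParquetDataset yields already-batched records,
# so wide categorical columns are loaded column-wise rather than row by row.
ds = hb.data.ParquetDataset(filenames, batch_size=1024)
```

Under that assumption, the resulting dataset drops into an Estimator `input_fn` like any other `tf.data.Dataset`.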
| Linux Distro | CUDA | Python | Tensorflow | URL |
|---|---|---|---|---|
| Ubuntu 18.04 | 11.4 | 3.6 | 1.15.5 | registry.cn-shanghai.aliyuncs.com/pai-dlc/hybridbackend:0.6-tf1.15-py3.6-cu114-ubuntu18.04 |
See PAI DLC for more information.
| GLIBC | CUDA | Python | Tensorflow | Command |
|---|---|---|---|---|
| >= 2.7 | 11.4 | 3.6 | >=1.15, < 2.0 | `pip install hybridbackend-cu114` |
| >= 2.4 | - | 3.6 | >=1.15, < 2.0 | `pip install hybridbackend-cpu` |
| >= 2.4 | - | 3.6 | >=1.14, < 1.15 | `pip install hybridbackend-cpu-legacy` |
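After installing one of the wheels above, a quick import serves as a sanity check that the build matches the local TensorFlow installation; the import path is the same one used in the minimal example below.

```python
# Sanity check: if this import succeeds, the installed HybridBackend wheel
# is compatible with the local TensorFlow build.
import hybridbackend.tensorflow as hb

print('HybridBackend imported:', hb)
```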
A minimal example:

```python
import tensorflow as tf
import hybridbackend.tensorflow as hb


def model_fn(features, labels, mode, params):
  # ...
  dense_features = hb.keras.layers.DenseFeatures(columns)
  # ...

# ...
estimator = hb.estimator.Estimator(model_fn, model_dir=model_dir)
estimator.train_and_evaluate(train_spec, eval_spec)
```
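For readers who want to see the elided pieces wired together, here is a hedged end-to-end sketch. Only `hb.keras.layers.DenseFeatures`, `hb.estimator.Estimator`, and `train_and_evaluate` come from the example above; the feature column, synthetic input pipeline, loss, optimizer, and model directory are illustrative placeholders built from standard TensorFlow 1.15 Estimator APIs, not HybridBackend-specific behavior.

```python
import tensorflow as tf
import hybridbackend.tensorflow as hb

# Hypothetical feature columns; the name, vocabulary size and dimension are placeholders.
columns = [
    tf.feature_column.embedding_column(
        tf.feature_column.categorical_column_with_identity(
            'item_id', num_buckets=10000),
        dimension=32),
]


def input_fn():
  # Tiny synthetic dataset standing in for a real input pipeline.
  ds = tf.data.Dataset.from_tensor_slices((
      {'item_id': tf.constant([1, 2, 3, 4], dtype=tf.int64)},
      tf.constant([[1.0], [0.0], [1.0], [0.0]])))
  return ds.repeat().batch(2)


def model_fn(features, labels, mode, params):
  # DenseFeatures turns the categorical inputs into dense embedding vectors.
  inputs = hb.keras.layers.DenseFeatures(columns)(features)
  logits = tf.keras.layers.Dense(1)(inputs)
  loss = tf.losses.sigmoid_cross_entropy(labels, logits)
  train_op = tf.train.AdagradOptimizer(0.01).minimize(
      loss, global_step=tf.train.get_or_create_global_step())
  return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)


estimator = hb.estimator.Estimator(model_fn, model_dir='/tmp/wide_and_deep')
train_spec = tf.estimator.TrainSpec(input_fn=input_fn, max_steps=100)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn, steps=10)
estimator.train_and_evaluate(train_spec, eval_spec)
```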
Please see the documentation for more information.
HybridBackend is licensed under the Apache 2.0 License.
- Please see the Contributing Guide before making your first contribution.
- Please register as an adopter if your organization is interested in adoption. We will discuss the roadmap with registered adopters in advance.
- Please cite HybridBackend in your publications if it helps:
```bibtex
@article{zhang2022picasso,
  title={PICASSO: Unleashing the Potential of GPU-centric Training for Wide-and-deep Recommender Systems},
  author={Zhang, Yuanxing and Chen, Langshi and Yang, Siran and Yuan, Man and Yi, Huimin and Zhang, Jie and Wang, Jiamang and Dong, Jianbo and Xu, Yunlong and Song, Yue and others},
  journal={arXiv preprint arXiv:2204.04903},
  year={2022}
}
```
If you would like to share your experiences with others, you are welcome to contact us via DingTalk: