DeepRec is a recommendation engine based on TensorFlow 1.15, Intel-TensorFlow and NVIDIA-TensorFlow.
A sparse model is a type of deep learning model in which the computation of discrete features accounts for a relatively high proportion of the model structure. Discrete features are usually non-numeric features that cannot be processed directly by algorithms, such as IDs, tags, text, and phrases. They are widely used in high-value businesses such as search, advertising, and recommendation.
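For illustration, a minimal TensorFlow 1.x sketch (the feature names, bucket count, and embedding size here are made up) of turning a discrete ID feature into a dense vector that downstream layers can consume:

```python
import tensorflow as tf  # TensorFlow 1.15 API, as used by DeepRec

# A discrete feature: raw string IDs cannot be fed into dense layers directly.
item_ids = tf.constant(["item_42", "item_7", "item_42"])

# Hash each ID into a fixed-size vocabulary, then look up a trainable embedding.
bucket_ids = tf.strings.to_hash_bucket_fast(item_ids, num_buckets=1000)
embedding_table = tf.get_variable("item_embedding", shape=[1000, 8])
item_vectors = tf.nn.embedding_lookup(embedding_table, bucket_ids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(item_vectors).shape)  # (3, 8)
```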
DeepRec has been under development since 2016 and supports core businesses such as Taobao search, recommendation, and advertising. It has accumulated a rich set of features on top of the base framework and delivers excellent performance in sparse model training. Facing a wide variety of external needs, and with deep learning frameworks broadly embracing open source, open-sourcing DeepRec helps establish standardized interfaces, cultivate user habits, greatly reduce the cost for external customers to run on the cloud, and build brand value.
DeepRec provides super large-scale distributed training capability, supporting model training with trillions of samples and hundreds of billions of embedding parameters. For sparse model scenarios, in-depth performance optimization has been conducted on both CPU and GPU platforms. It offers three categories of features, covering embedding, training, and serving, to improve usability and performance in super-scale scenarios:
Embedding
- Embedding Variable (see the usage sketch after this list).
- Dynamic Dimension Embedding Variable.
- Adaptive Embedding Variable.
- Multiple Hash Embedding Variable.
- Multi-tier Hybrid Embedding Storage.
Training
- Asynchronous Distributed Training Framework, such as grpc+seastar, FuseRecv, StarServer, etc.
- Synchronous Distributed Training Framework (GPU), such as HybridBackend, Sparse Operation Kit (SOK), etc.
- Runtime Optimization, such as CPU memory allocator (PRMalloc), GPU memory allocator, cost-based and critical-path-first Executor, etc.
- Runtime Optimization (GPU), supporting multiple CUDA compute streams and CUDA Graph.
- Operator-level optimization, such as BF16 mixed precision optimization, sparse operator optimization, EmbeddingVariable on PMEM and GPU, new hardware feature enabling, etc.
- Graph-level optimization, such as AutoGraphFusion, SmartStage, AutoPipeline, StructureFeature, MicroBatch, etc.
- Compilation optimization, supporting BladeDISC, XLA, etc.
Serving
- Incremental model loading and exporting.
- Super-scale sparse model distributed serving.
- Multi-tier hybrid storage and multiple backends supported.
- Online deep learning with low latency.
- High-performance inference framework SessionGroup (share-nothing architecture), with multiple thread pools and multiple CUDA streams supported.
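Below is a minimal sketch of the Embedding Variable feature, assuming the `tf.get_embedding_variable` API described in the DeepRec documentation; the variable name, key values, and dimensions are illustrative only:

```python
import tensorflow as tf  # DeepRec build of TensorFlow 1.15

# Embedding Variable: a dynamically growing embedding table keyed by sparse IDs,
# so no static vocabulary size has to be reserved up front.
user_emb = tf.get_embedding_variable(
    "user_embedding",                 # hypothetical variable name
    embedding_dim=16,
    key_dtype=tf.int64,
    initializer=tf.truncated_normal_initializer(stddev=0.01))

# Arbitrary (possibly very large) IDs can be looked up without a fixed vocabulary.
ids = tf.constant([10, 20, 10, 999999999], dtype=tf.int64)
emb = tf.nn.embedding_lookup(user_emb, ids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(emb).shape)  # (4, 16)
```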
Development docker images:
CPU Platform
alideeprec/deeprec-build:deeprec-dev-cpu-py36-ubuntu18.04
GPU Platform
alideeprec/deeprec-build:deeprec-dev-gpu-py36-cu116-ubuntu18.04
Configure
$ ./configure
Compile for CPU and GPU (default)
$ bazel build -c opt --config=opt //tensorflow/tools/pip_package:build_pip_package
Compile for CPU and GPU with ABI=0
$ bazel build --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --host_cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" -c opt --config=opt //tensorflow/tools/pip_package:build_pip_package
Compile for CPU optimization: oneDNN + Unified Eigen Thread pool
$ bazel build -c opt --config=opt --config=mkl_threadpool //tensorflow/tools/pip_package:build_pip_package
Compile for CPU optimization and ABI=0
$ bazel build --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --host_cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" -c opt --config=opt --config=mkl_threadpool //tensorflow/tools/pip_package:build_pip_package
$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip3 install /tmp/tensorflow_pkg/tensorflow-1.15.5+${version}-cp36-cp36m-linux_x86_64.whl
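A quick sanity check that the installed wheel imports correctly (the exact version string depends on the build, so the values noted here are only indicative):

```python
import tensorflow as tf

print(tf.__version__)                # expected to be a 1.15.5-based DeepRec build
print(tf.test.is_built_with_cuda())  # True for GPU builds, False for CPU-only builds
```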
Latest release images:
CPU Platform
alideeprec/deeprec-release:deeprec2210-cpu-py36-ubuntu18.04
GPU Platform
alideeprec/deeprec-release:deeprec2210-gpu-py36-cu116-ubuntu18.04
Build Type | Status |
---|---|
Linux CPU | |
Linux GPU | |
Linux CPU Serving | |
Chinese: https://deeprec.readthedocs.io/zh/latest/
English (WIP): https://deeprec.readthedocs.io/en/latest/
Join the Official Discussion Group on DingTalk