
Anakin

Welcome to the Anakin GitHub.

Anakin is a cross-platform, high-performance inference engine, originally developed by Baidu engineers and deployed at scale in industrial products. It runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.

Please refer to our release announcement to track Anakin's latest features.

Features

  • Flexibility

    Anakin supports a wide range of neural network architectures and hardware platforms. It is easy to run Anakin on GPU / x86 / ARM platforms.

  • High performance

    To exploit the hardware's full performance, we optimize forward prediction at several levels:

    • Automatic graph fusion. For a given algorithm, the goal of every performance optimization is to keep the ALU as busy as possible. Fusing adjacent operators effectively reduces memory accesses and keeps the ALU busy (see the fusion sketch after this list).

    • Memory reuse. Forward prediction is a one-pass computation, so we reuse memory between the inputs and outputs of different operators, reducing the overall memory overhead (see the allocation sketch after this list).

    • Assembly-level optimization. Saber, the DNN library underlying Anakin, is deeply optimized at the assembly level (see the intrinsics sketch after this list). For a performance comparison among Anakin, TensorRT, and TensorFlow Lite, please refer to the benchmark tests.
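
To make the fusion idea concrete, here is a minimal C++ sketch (illustrative only; the function names are hypothetical, and this is not Anakin's or Saber's actual code). Fusing a bias-add with a ReLU turns two full passes over a tensor into one, so each element makes one round trip through memory instead of two. Anakin applies this kind of fusion automatically at the graph level.

    // Illustrative only: hypothetical helpers, not Anakin/Saber kernels.
    #include <algorithm>
    #include <vector>

    // Unfused pipeline: two separate passes, each loading and storing
    // every element, doubling the memory traffic for the same arithmetic.
    void bias_relu_unfused(std::vector<float>& x, float bias) {
        for (float& v : x) v += bias;              // pass 1: bias add
        for (float& v : x) v = std::max(v, 0.0f);  // pass 2: ReLU
    }

    // Fused pipeline: one pass; the intermediate value stays in a
    // register between the add and the max, keeping the ALU busy.
    void bias_relu_fused(std::vector<float>& x, float bias) {
        for (float& v : x) v = std::max(v + bias, 0.0f);
    }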
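
The memory-reuse idea can likewise be sketched as a small, hypothetical allocation planner (again, not Anakin's implementation). Because forward prediction is one-pass, the last operator to read each intermediate tensor is known statically, so a later tensor can recycle a buffer that is already dead:

    // Hypothetical greedy reuse planner; not Anakin's implementation.
    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    // One entry per intermediate tensor, in the order it is produced.
    // `last_use` is the index of the operator that reads it last.
    struct Tensor { std::size_t bytes; int last_use; };

    // Before allocating a fresh buffer, try to recycle one whose tensor
    // is already dead. Returns the total bytes actually allocated.
    std::size_t plan(const std::vector<Tensor>& tensors) {
        std::size_t total = 0;
        std::vector<std::pair<std::size_t, int>> pool;  // {bytes, busy_until}
        for (int op = 0; op < static_cast<int>(tensors.size()); ++op) {
            const Tensor& t = tensors[static_cast<std::size_t>(op)];
            bool reused = false;
            for (auto& buf : pool) {
                if (buf.second < op && buf.first >= t.bytes) {  // dead & fits
                    buf.second = t.last_use;                    // recycle it
                    reused = true;
                    break;
                }
            }
            if (!reused) {
                total += t.bytes;
                pool.push_back({t.bytes, t.last_use});
            }
        }
        return total;
    }

    int main() {
        // Four equal-sized intermediates of a toy operator chain: the
        // first is dead once operator 2 runs, so its buffer is recycled.
        std::vector<Tensor> chain = {{4096, 1}, {4096, 2}, {4096, 3}, {4096, 4}};
        std::cout << plan(chain) << " bytes instead of 16384\n";  // prints 8192
    }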
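
Finally, as a hedged illustration of what assembly-level tuning means in practice (hand-written AVX intrinsics here; Saber's real kernels target each platform's own instruction set and are considerably more elaborate), the fused loop above can be vectorized to process eight floats per instruction:

    // Illustrative AVX version of the fused bias+ReLU; compile with -mavx.
    // Not Saber's actual kernel.
    #include <immintrin.h>
    #include <algorithm>
    #include <cstddef>

    void bias_relu_avx(float* x, std::size_t n, float bias) {
        const __m256 vbias = _mm256_set1_ps(bias);
        const __m256 vzero = _mm256_setzero_ps();
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8) {                           // 8-wide body
            __m256 v = _mm256_loadu_ps(x + i);
            v = _mm256_max_ps(_mm256_add_ps(v, vbias), vzero); // bias + ReLU
            _mm256_storeu_ps(x + i, v);
        }
        for (; i < n; ++i)                                     // scalar tail
            x[i] = std::max(x[i] + bias, 0.0f);
    }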

Installation

It is recommended to check out the Docker installation guide before looking into the build-from-source guide.

For ARM, please refer to the run on arm guide.

Benchmark

It is recommended to check out the benchmark readme.

Documentation

We provide English and Chinese documentation.

Ask Questions

You are welcome to submit questions and bug reports as GitHub Issues.

Copyright and License

Anakin is provided under the Apache-2.0 license.