This repository provides pre-built TensorFlow for C/C++ (headers + libraries) packaged with CMake.
Maintainer: Vassilios Tsounis
Affiliation: Robotic Systems Lab, ETH Zurich
Contact: tsounisv@ethz.ch
This repository provides TensorFlow libraries with the following specifications:
- Provided versions: 1.13.2 (default)
- Supports Ubuntu 18.04 LTS (GCC >= 7.4).
- Provides variants for CPU-only and Nvidia GPU, respectively.
- All variants are built with the full CPU optimizations available for amd64 architectures.
- GPU variants are built to support compute capabilities: 5.0, 6.1, 7.0, 7.2, 7.5.
NOTE: This repository does not include or bundle the source TensorFlow repository.
First clone this repository:
git clone https://github.com/leggedrobotics/tensorflow-cpp.git
or if using SSH:
git clone git@github.com:leggedrobotics/tensorflow-cpp.git
To install the special version of Eigen required by TensorFlow, which is also bundled in this repository:
cd tensorflow/eigen
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=~/.local -DCMAKE_BUILD_TYPE=Release ..
make install -j
NOTE: We recommend installing to ~/.local in order to prevent conflicts with other versions of Eigen which may be installed via apt. Eigen exports its package during the build step, so CMake will default to finding the one we just installed unless a HINTS argument is used or CMAKE_PREFIX_PATH is set to another location.
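For instance, a downstream project that wants to use this Eigen install could point CMake at it explicitly. This is only a minimal sketch assuming the ~/.local prefix used above; the eigen_demo target and main.cpp source are placeholders:
# Prefer packages installed under ~/.local over system-wide copies.
list(APPEND CMAKE_PREFIX_PATH "$ENV{HOME}/.local")
# Eigen exports the Eigen3 package together with the Eigen3::Eigen imported target.
find_package(Eigen3 REQUIRED NO_MODULE)
add_executable(eigen_demo main.cpp)
target_link_libraries(eigen_demo PRIVATE Eigen3::Eigen)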
These are the options for using the TensorFlow CMake package:
Option 1 (Recommended): Installing into the (local) file system
cd tensorflow/tensorflow
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=~/.local -DCMAKE_BUILD_TYPE=Release ..
make install -j
NOTE: CMake will download the pre-built headers and binaries at build time; this should only happen on the first run.
Option 2 (Advanced): Create symbolic link to your target workspace directory:
ln -s /<SOURCE-PATH>/tensorflow/tensorflow <TARGET-PATH>/
For example, when including as part of a larger CMake build or in a Catkin workspace:
ln -s ~/git/tensorflow/tensorflow ~/catkin_ws/src/
The TensorFlow CMake package can be included in other projects either by using the find_package command:
...
find_package(TensorFlow CONFIG REQUIRED)
...
or alternatively by including it directly using the add_subdirectory command:
...
add_subdirectory(/<SOURCE-PATH>/tensorflow/tensorflow)
...
NOTE: By default, the CMake package selects the CPU-only variant of a given library version; defining/setting the TF_USE_GPU option variable switches to the GPU-enabled variant.
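For example, the GPU-enabled variant might be selected by defining the option before the package is loaded. This is a sketch assuming the option is set as a CMake cache variable; it can equivalently be passed at configure time as -DTF_USE_GPU=ON:
# Select the GPU-enabled TensorFlow libraries instead of the CPU-only default.
set(TF_USE_GPU ON CACHE BOOL "Use the GPU-enabled TensorFlow variant")
find_package(TensorFlow CONFIG REQUIRED)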
User targets such as executables and libraries can now link against the TensorFlow::TensorFlow CMake target using the target_link_libraries command:
add_executable(tf_hello src/main.cpp)
target_link_libraries(tf_hello PUBLIC TensorFlow::TensorFlow)
target_compile_features(tf_hello PRIVATE cxx_std_14)
NOTE: For more information on using CMake targets, please refer to this excellent article.
A complete example is included in this repository to provide boilerplate CMake for developers of dependent projects and packages.
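As a rough orientation, the boilerplate for a dependent project might look as follows. This is only a minimal sketch, not the example shipped in this repository; the project name tf_hello_example and the source file src/main.cpp are placeholders:
cmake_minimum_required(VERSION 3.10)
project(tf_hello_example LANGUAGES CXX)
# Locate the installed TensorFlow CMake package (see Option 1 above).
find_package(TensorFlow CONFIG REQUIRED)
# Link the executable against the imported TensorFlow::TensorFlow target.
add_executable(tf_hello src/main.cpp)
target_link_libraries(tf_hello PUBLIC TensorFlow::TensorFlow)
target_compile_features(tf_hello PRIVATE cxx_std_14)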
If a specialized build of TensorFlow (e.g. a different version of CUDA, NVIDIA compute capability, AVX, etc.) is required, then the following steps can be taken:
- Follow the standard instructions for installing system dependencies.
NOTE: For GPU-enabled systems, additional steps need to be taken.
- View and/or modify our utility script for step-by-step instructions on building, extracting, and packaging all headers and libraries generated by Bazel when building TensorFlow.
- Set the TENSORFLOW_ROOT variable to the path of the resulting directory:
cmake -DTENSORFLOW_ROOT=~/.tensorflow/lib -DCMAKE_INSTALL_PREFIX=~/.local -DCMAKE_BUILD_TYPE=Release ..