tensorflow-rust

Rust language bindings for TensorFlow


TensorFlow Rust provides idiomatic Rust language bindings for TensorFlow.

Notice: This project is still under active development and not guaranteed to have a stable API.

Getting Started

Since this crate depends on the TensorFlow C API, it needs to be downloaded or compiled first. This crate will automatically download or compile the TensorFlow shared libraries for you, but it is also possible to manually install TensorFlow and the crate will pick it up accordingly.

Prerequisites

If the TensorFlow shared libraries can already be found on your system, they will be used. If your system is x86-64 Linux or Mac, a prebuilt binary will be downloaded, and no special prerequisites are needed.

Otherwise, the following dependencies are needed to compile and build this crate, which involves compiling TensorFlow itself:

  • git
  • bazel
  • Python, with the dependencies numpy, dev, pip and wheel
  • Optionally, CUDA packages to support GPU-based processing

The TensorFlow website provides detailed instructions on how to obtain and install these dependencies; if you are unsure, please check out the docs for further details.

Some of the examples use TensorFlow code written in Python and require a full TensorFlow installation.

The minimum supported Rust version is 1.58.

Usage

Add this to your Cargo.toml:

[dependencies]
tensorflow = "0.19.1"

and this to your crate root:

extern crate tensorflow;

Then run cargo build -j 1. The tensorflow-sys crate's build.rs either downloads a prebuilt, CPU-only binary (the default) or compiles TensorFlow from source if forced to by an environment variable. Because a full TensorFlow compilation is very memory intensive, we recommend the -j 1 flag, which tells cargo to use only one task, which in turn tells TensorFlow to build with only one task. If you have plenty of RAM, you can of course use a higher value.
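For example, the default download-based build and a forced source build might look like the following sketch (the exact environment variable is documented in tensorflow-sys; TF_RUST_BUILD_FROM_SRC is assumed here):

# Default: downloads a prebuilt, CPU-only binary.
cargo build -j 1

# Force a build from source (variable name assumed; see tensorflow-sys/README.md).
TF_RUST_BUILD_FROM_SRC=true cargo build -j 1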

To include the especially unstable API (which is currently the expr module), use --features tensorflow_unstable.
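For example, passing the feature flag on the command line:

cargo build --features tensorflow_unstable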

For now, please see the Examples for more details on how to use this binding.
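As a quick smoke test, a minimal program might look like the following sketch (using the same Tensor API that appears in the display example below):

use tensorflow::Tensor;

fn main() {
    // Build a 2x2 tensor of f32 values; with_values returns a Result.
    let t = Tensor::new(&[2, 2])
        .with_values(&[1.0f32, 2.0, 3.0, 4.0])
        .unwrap();
    println!("{:?}", t);
}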

Tensor Max Display

When printing or debugging a tensor, every element is printed by default. This can be changed by setting an environment variable:

TF_RUST_DISPLAY_MAX=5

This truncates the printed values when they exceed the limit:

let values: Vec<u64> = (0..100000).collect();
let t = Tensor::new(&[2, 50000]).with_values(&values).unwrap();
dbg!(t);

which prints:

t = Tensor<u64> {
    values: [
        [0, 1, 2, 3, 4, ...],
        ...
    ],
    dtype: uint64,
    shape: [2, 50000]
}
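Assuming the variable is read at run time, it can also be set for a single invocation:

TF_RUST_DISPLAY_MAX=5 cargo run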

GPU Support

To enable GPU support, use the tensorflow_gpu feature in your Cargo.toml:

[dependencies]
tensorflow = { version = "0.19.1", features = ["tensorflow_gpu"] }

Manual TensorFlow Compilation

If you want to work against unreleased/unsupported TensorFlow versions or use a build optimized for your machine, manual compilation is the way to go.

See tensorflow-sys/README.md for details.

FAQs

Why does the compiler say that parts of the API don't exist?

The especially unstable parts of the API (which is currently the expr module) are feature-gated behind the feature tensorflow_unstable to prevent accidental use. See http://doc.crates.io/manifest.html#the-features-section. (We would prefer using an #[unstable] attribute, but that doesn't exist yet.)
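To opt in, enable the feature in your Cargo.toml, mirroring the GPU example above:

[dependencies]
tensorflow = { version = "0.19.1", features = ["tensorflow_unstable"] }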

How do I...?

Try the documentation first, and see if it answers your question. If not, take a look at the examples folder. Note that there may not be an example for your exact question, but it may be answered by an example demonstrating something else.

If none of the above help, you can ask your question on the TensorFlow Rust Google Group.

Contributing

Developers and users are welcome to join the TensorFlow Rust Google Group.

Please read the contribution guidelines on how to contribute code.

This is not an official Google product.

RFCs are issues tagged with RFC. Check them out and comment; discussion is welcome. After all, that is the purpose of a Request for Comments!

License

This project is licensed under the terms of the Apache 2.0 license.