TensorFlow Datasets provides many public datasets as `tf.data.Dataset`s.
- List of datasets
- Getting started
- Features & performance
- Add your dataset
- API docs
Note: `tf.data` is a built-in TensorFlow library for building efficient data pipelines. TFDS (this library) uses `tf.data` to build the input pipeline when you load a dataset.
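For a sense of what `tf.data` does, here is a minimal hand-built pipeline (a sketch with toy in-memory data; TFDS assembles a similar, file-backed pipeline for you):

```python
import tensorflow as tf

# A toy in-memory dataset; TFDS builds a comparable pipeline from files on disk.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])
ds = ds.shuffle(4).batch(2).prefetch(1)

for batch in ds:
    print(batch)  # e.g. tf.Tensor([3 1], shape=(2,), dtype=int32)
```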
```sh
pip install tensorflow-datasets

# Requires TF 1.5+ to be installed.
# Some datasets require additional libraries; see setup.py extras_require.
pip install tensorflow
# or:
pip install tensorflow-gpu
```
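To check that the installation works, you can list the dataset builders that TFDS registers (the output below is illustrative):

```python
import tensorflow_datasets as tfds

# Prints the names of every registered dataset, e.g. [..., 'cifar10', ..., 'mnist', ...]
print(tfds.list_builders())
```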
Join our Google group to receive updates on the project.
```python
import tensorflow_datasets as tfds
import tensorflow as tf

# Here we assume Eager mode is enabled (TF2), but tfds also works in Graph mode.

# Construct a tf.data.Dataset
ds_train = tfds.load('mnist', split='train', shuffle_files=True)

# Build your input pipeline
ds_train = ds_train.shuffle(1000).batch(128).prefetch(10)
for features in ds_train.take(1):
    image, label = features['image'], features['label']
```
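If you'd rather receive `(input, label)` tuples than feature dictionaries, `tfds.load` also accepts `as_supervised=True`; this works for datasets that define `supervised_keys`, as MNIST does (see the `DatasetInfo` output below):

```python
# Yields (image, label) tuples instead of a feature dict,
# based on the dataset's `supervised_keys`.
ds_train = tfds.load('mnist', split='train', as_supervised=True)

for image, label in ds_train.batch(128).take(1):
    pass  # image: (128, 28, 28, 1) uint8, label: (128,) int64
```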
Try it interactively in a Colab notebook.
All datasets are implemented as subclasses of `tfds.core.DatasetBuilder`. TFDS has two entry points:

- `tfds.builder`: returns the `tfds.core.DatasetBuilder` instance, giving you control over `builder.download_and_prepare()` and `builder.as_dataset()`.
- `tfds.load`: a convenience wrapper that hides the `download_and_prepare` and `as_dataset` calls and directly returns the `tf.data.Dataset`.
```python
import tensorflow_datasets as tfds

# The following is the equivalent of the `load` call above.

# You can fetch the DatasetBuilder class by string
mnist_builder = tfds.builder('mnist')

# Download the dataset
mnist_builder.download_and_prepare()

# Construct a tf.data.Dataset
ds = mnist_builder.as_dataset(split='train')

# Get the `DatasetInfo` object, which contains useful information about the
# dataset and its features
info = mnist_builder.info
print(info)
```
This will print the dataset info content:
```
tfds.core.DatasetInfo(
    name='mnist',
    version=1.0.0,
    description='The MNIST database of handwritten digits.',
    homepage='http://yann.lecun.com/exdb/mnist/',
    features=FeaturesDict({
        'image': Image(shape=(28, 28, 1), dtype=tf.uint8),
        'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=10)
    }),
    total_num_examples=70000,
    splits={
        'test': <tfds.core.SplitInfo num_examples=10000>,
        'train': <tfds.core.SplitInfo num_examples=60000>
    },
    supervised_keys=('image', 'label'),
    citation='"""
        @article{lecun2010mnist,
          title={MNIST handwritten digit database},
          author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
          journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
          volume={2},
          year={2010}
        }
    """',
)
```
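If you use `tfds.load` instead of the builder API, the same `DatasetInfo` is returned alongside the dataset when you pass `with_info=True`:

```python
# with_info=True returns a (dataset, DatasetInfo) tuple.
ds, info = tfds.load('mnist', split='train', with_info=True)
print(info.features['image'].shape)  # (28, 28, 1)
```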
You can also get details about the classes (number of classes and their names).
```python
info = tfds.builder('cats_vs_dogs').info

info.features['label'].num_classes  # 2
info.features['label'].names  # ['cat', 'dog']
info.features['label'].int2str(1)  # "dog"
info.features['label'].str2int('cat')  # 0
```
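The split metadata shown in `DatasetInfo` is also accessible programmatically, which is handy for things like computing steps per epoch (a small sketch; the batch size here is an arbitrary example):

```python
info = tfds.builder('mnist').info

num_train_examples = info.splits['train'].num_examples  # 60000
batch_size = 128  # hypothetical choice, for illustration only
steps_per_epoch = num_train_examples // batch_size  # 468
```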
As a convenience for users who want simple NumPy arrays in their programs, you can use `tfds.as_numpy` to return a generator that yields NumPy array records out of a `tf.data.Dataset`. This allows you to build high-performance input pipelines with `tf.data` but use whatever you'd like for your model components.
```python
train_ds = tfds.load("mnist", split="train")
train_ds = train_ds.shuffle(1024).batch(128).repeat(5).prefetch(10)
for example in tfds.as_numpy(train_ds):
    numpy_images, numpy_labels = example["image"], example["label"]
```
You can also use `tfds.as_numpy` in conjunction with `batch_size=-1` to get the full dataset in NumPy arrays from the returned `tf.Tensor` object:
```python
train_ds = tfds.load("mnist", split=tfds.Split.TRAIN, batch_size=-1)
numpy_ds = tfds.as_numpy(train_ds)
numpy_images, numpy_labels = numpy_ds["image"], numpy_ds["label"]
```
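Because these are plain NumPy arrays, they drop straight into non-TensorFlow tooling; continuing the example above (a sketch):

```python
import numpy as np

# For the MNIST train split, numpy_images has shape (60000, 28, 28, 1), dtype uint8.
print(numpy_images.shape)
print(np.bincount(numpy_labels))  # per-digit class counts
```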
Note that the library still requires `tensorflow` as an internal dependency.
Please include the following citation when using `tensorflow-datasets` for a paper, in addition to any citation specific to the datasets used.
```
@misc{TFDS,
  title = {{TensorFlow Datasets}, A collection of ready-to-use datasets},
  howpublished = {\url{https://www.tensorflow.org/datasets}},
}
```
Adding a dataset is straightforward; follow our guide.

Request a dataset by opening a Dataset request GitHub issue, and vote on the current set of requests by adding a thumbs-up reaction to the issue.
This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.
If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
If you're interested in learning more about responsible AI practices, including fairness, please see Google AI's Responsible AI Practices.
`tensorflow/datasets` is licensed under Apache 2.0. See the `LICENSE` file.