This repository contains implementations and illustrative code to accompany DeepMind publications. Along with publishing papers to accompany research conducted at DeepMind, we release open-source environments, datasets, and code to enable the broader research community to engage with our work and build upon it, with the ultimate goal of accelerating scientific progress to benefit society. For example, you can build on our implementations of the Deep Q-Network or Differentiable Neural Computer, or experiment in the same environments we use for our research, such as DeepMind Lab or StarCraft II.
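As an illustration of the last point, interacting with DeepMind Lab from Python is a simple reset/step loop. The sketch below is adapted from the DeepMind Lab documentation and assumes the `deepmind_lab` Python module has been built and installed; the level name `seekavoid_arena_01` and the observation name `RGB_INTERLEAVED` are the standard examples from that project, not something specific to this repository.

```python
import numpy as np
import deepmind_lab

# Construct the environment with a single RGB observation stream.
# Config values are passed as strings, per the DeepMind Lab API.
env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'],
                       config={'width': '96', 'height': '72'})
env.reset()

# Actions are integer vectors; query the spec for their layout.
action_spec = env.action_spec()
noop = np.zeros(len(action_spec), dtype=np.intc)

# Advance the environment a few frames with a no-op action.
reward = env.step(noop, num_steps=4)
if env.is_running():
    obs = env.observations()        # dict of NumPy arrays
    frame = obs['RGB_INTERLEAVED']  # (height, width, 3) uint8 image
```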
If you enjoy building tools, environments, software libraries, and other infrastructure of the kind listed below, you can view open positions to work in related areas on our careers page.
For a full list of our publications, please see https://deepmind.com/research/publications/
- RL Unplugged: Benchmarks for Offline Reinforcement Learning
- Disentangling by Subspace Diffusion (GEOMANCER)
- What can I do here? A theory of affordances in reinforcement learning, ICML 2020
- Scaling data-driven robotics with reward sketching and batch reinforcement learning, RSS 2020
- The Option Keyboard: Combining Skills in Reinforcement Learning, NeurIPS 2019
- Fast Task Inference with Variational Intrinsic Successor Features (VISR), ICLR 2020
- Unveiling the predictive power of static structure in glassy systems, Nature Physics 2020
- Multi-Object Representation Learning with Iterative Variational Inference (IODINE)
- AlphaFold CASP13, Nature 2020
- Unrestricted Adversarial Challenge
- Hierarchical Probabilistic U-Net (HPU-Net)
- Training Language GANs from Scratch, NeurIPS 2019
- Temporal Value Transport, Nature Communications 2019
- Continual Unsupervised Representation Learning (CURL), NeurIPS 2019
- Unsupervised Learning of Object Keypoints (Transporter), NeurIPS 2019
- BigBiGAN, NeurIPS 2019
- Deep Compressed Sensing, ICML 2019
- Side Effects Penalties
- PrediNet Architecture and Relations Game Datasets
- Unsupervised Adversarial Training, NeurIPS 2019
- Graph Matching Networks for Learning the Similarity of Graph Structured Objects, ICML 2019
- REGAL: Transfer Learning for Fast Optimization of Computation Graphs
This is not an official Google product.