This repository contains an unofficial PyTorch implementation of "Combinatorial Optimization Layers": a neural network layer that embeds a combinatorial optimization solver in the forward pass of the network while keeping the network differentiable for backpropagation.
The implementation is based on the ideas presented in the following two papers:
- "Learning with Combinatorial Optimization Layers: a Probabilistic Approach"; Guillaume Dalle, Léo Baty, Louis Bouvier and Axel Parmentier
- "Combinatorial Optimization enriched Machine Learning to solve the Dynamic Vehicle Routing Problem with Time Windows"; Léo Baty, Kai Jungel, Patrick S. Klein, Axel Parmentier and Maximilian Schiffer
Note that an implementation of Combinatorial Optimization Layers already exists in Julia; see InferOpt.jl.
The main functionality of this repo is the `COptLayer`. It is an `nn.Module` which takes the following arguments at initialization:
- `solver`: A callable implementing the solver of a combinatorial optimization problem. It takes as input a `torch.Tensor` of shape `(b, *input_dim)` (where `input_dim` is the dimensionality of the argument of the solver) and returns a `torch.Tensor` of shape `(b, *sol_dim)`.
- `num_samples`: Number of samples taken in the Monte-Carlo estimation.
- `smoothing`: Scale of the random perturbation added to the solver's input; larger values yield a smoother but less faithful approximation of the solver.
With given arguments `num_samples=n` and `smoothing=s`, the forward method of the `COptLayer` returns the Monte-Carlo estimate

$$\hat{y}(\theta) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{solver}(\theta + s \cdot Z_i),$$

where the `Z_i` are i.i.d. perturbations (e.g. standard Gaussian noise), i.e. the average solver output over `n` randomly perturbed copies of the input.
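For backpropagation, a sketch of the corresponding Jacobian estimator (assuming standard Gaussian perturbations, as in the perturbed-optimizer framework of the papers above) is

$$\widehat{J_\theta \hat{y}}(\theta) = \frac{1}{n \, s} \sum_{i=1}^{n} \mathrm{solver}(\theta + s \cdot Z_i) \, Z_i^\top.$$

This score-function (Stein) estimator only requires evaluations of the solver, never derivatives of the solver itself.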
For more mathematical details about the forward and backward propagation of Combinatorial Optimization Layers, see the two papers listed above.
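As an illustration, here is a minimal usage sketch assuming the `COptLayer` interface described above. The import path and the toy `argmax_solver` are placeholders for illustration, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

from copt_layer import COptLayer  # hypothetical import path; adjust to this repo's layout

# Toy batched solver: for each instance, return the one-hot indicator of the
# largest entry, i.e. argmax over the unit vectors {e_1, ..., e_d} of <theta, y>.
# Input: (b, d) -> output: (b, d), matching the (b, *input_dim) / (b, *sol_dim) contract.
def argmax_solver(theta: torch.Tensor) -> torch.Tensor:
    return F.one_hot(theta.argmax(dim=-1), num_classes=theta.shape[-1]).float()

layer = COptLayer(solver=argmax_solver, num_samples=100, smoothing=1.0)

theta = torch.randn(8, 5, requires_grad=True)  # batch of 8 instances, input_dim = (5,)
y_hat = layer(theta)                           # smoothed (differentiable) argmax, shape (8, 5)
loss = ((y_hat - torch.full_like(y_hat, 0.2)) ** 2).sum()
loss.backward()                                # gradients reach theta despite the discrete solver
print(theta.grad.shape)                        # torch.Size([8, 5])
```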
A few example usages can be found in the `Demo` directory.
We provide a wrapper which, given a Python function that solves one instance of a combinatorial optimization problem on a `torch.Tensor` of shape `input_dim`, returns a callable that solves multiple instances of the problem on batched inputs of shape `(batch_size, *input_dim)`.
This is done via the Numba library, which compiles the Python function to a CUDA kernel.
For an example usage, see the `Demo` directory.
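To make the wrapper's contract concrete, here is a minimal sketch of its semantics. The name `batchify` and the plain Python loop are ours for illustration; the repo's actual wrapper replaces the loop with a Numba-compiled CUDA kernel:

```python
import torch

# Illustrative stand-in for the repo's wrapper: same input/output contract,
# but a naive Python loop instead of a compiled CUDA kernel.
def batchify(single_instance_solver):
    def batched_solver(batch: torch.Tensor) -> torch.Tensor:
        # batch: (batch_size, *input_dim) -> (batch_size, *sol_dim)
        return torch.stack([single_instance_solver(x) for x in batch])
    return batched_solver

# Example: a toy per-instance "solver" that selects every entry with positive value.
def toy_solver(theta: torch.Tensor) -> torch.Tensor:  # (*input_dim,) -> (*sol_dim,)
    return (theta > 0).float()

solver = batchify(toy_solver)
print(solver(torch.randn(4, 10)).shape)  # torch.Size([4, 10])
```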
TODO!