Differential Programming/Differentiable Programming

A curated list of papers, articles, tutorials, slides, and projects about Differential Programming / Differentiable Programming / DDP / Automatic Differentiation / Differentiable Neural Computers.

  • What is differential programming? How is it related to functional programming?
    Differential programming, also known as Differential Dynamic Programming (DDP), is an optimization procedure for path planning used in control theory and robotics:
    DDP is an algorithm that solves for locally optimal trajectories given a cost function over some space. In essence, it works by locally approximating the cost function at each point in the trajectory. It uses this approximation to find the optimal change to the trajectory (via a set of actions) that minimizes some cost metric (e.g. cumulative cost). In the limit, it converges to the optimal trajectory. From https://www.quora.com/What-is-differential-programming-How-is-it-related-to-functional-programming
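
    A minimal PyTorch sketch of this idea (a toy problem of my own, not from the answer): full DDP builds second-order local models of the cost and runs backward/forward passes, but even this first-order version illustrates the loop of locally approximating the cumulative cost and updating the control sequence.

    ```python
    # Toy trajectory optimization in the spirit of DDP (first-order only):
    # locally approximate the cumulative cost with autodiff, then take the
    # locally optimal step on the control sequence, and repeat to convergence.
    import torch

    T = 20                                             # horizon
    goal = torch.tensor([1.0, 1.0])
    controls = torch.zeros(T, 2, requires_grad=True)   # the trajectory's actions

    def cumulative_cost(u):
        x = torch.zeros(2)     # simple integrator dynamics: x_{t+1} = x_t + 0.1 u_t
        cost = 0.0
        for t in range(T):
            x = x + 0.1 * u[t]
            cost = cost + 0.01 * (u[t] ** 2).sum()     # control-effort cost
        return cost + ((x - goal) ** 2).sum()          # terminal cost

    opt = torch.optim.SGD([controls], lr=0.5)
    for it in range(200):
        opt.zero_grad()
        c = cumulative_cost(controls)
        c.backward()           # local approximation of the cost around the trajectory
        opt.step()             # locally optimal change to the trajectory
    print(float(cumulative_cost(controls)))            # approaches the optimum
    ```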

  • 'Deep Learning est mort. Vive Differentiable Programming'
    Yann LeCun: OK, Deep Learning has outlived its usefulness as a buzz-phrase.
    Deep Learning est mort. Vive Differentiable Programming!

Yeah, Differentiable Programming is little more than a rebranding of the modern collection of Deep Learning techniques, the same way Deep Learning was a rebranding of the modern incarnations of neural nets with more than two layers.

But the important point is that people are now building a new kind of software by assembling networks of parameterized functional blocks and by training them from examples using some form of gradient-based optimization.
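
As a concrete illustration of that point, here is a minimal PyTorch sketch (the data and architecture are illustrative, not from LeCun's post): a network assembled from parameterized functional blocks and trained from examples with gradient-based optimization.

```python
# A network of parameterized functional blocks, trained by gradient descent.
import torch
import torch.nn as nn

model = nn.Sequential(           # each block is a parameterized function
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(64, 4)           # toy examples
y = x.sum(dim=1, keepdim=True)   # toy target: the sum of the inputs

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()              # gradients flow through every block
    opt.step()
```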

An increasingly large number of people are defining the networks procedurally in a data-dependent way (with loops and conditionals), allowing them to change dynamically as a function of the input data fed to them. It's really very much like a regular program, except it's parameterized, automatically differentiated, and trainable/optimizable. Dynamic networks have become increasingly popular (particularly for NLP), thanks to deep learning frameworks such as PyTorch and Chainer that can handle them (note: our old deep learning framework Lush could handle a particular kind of dynamic nets called Graph Transformer Networks, back in 1994. It was needed for text recognition).
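
A sketch of such a dynamic network in PyTorch (the architecture is illustrative only): the loop count and a conditional both depend on the input, and define-by-run autograd differentiates through whichever control-flow path was actually taken.

```python
# A network defined procedurally, with input-dependent loops and conditionals.
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = nn.Linear(8, 8)   # one block, reused a data-dependent number of times
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        steps = int(x.abs().sum().item()) % 4 + 1   # loop depth depends on the data
        for _ in range(steps):
            x = torch.relu(self.cell(x))
        if x.mean() > 0:                            # data-dependent conditional
            x = -x
        return self.head(x)

net = DynamicNet()
out = net(torch.randn(8))
out.backward()   # the recorded graph matches this particular input's control flow
```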

People are now actively working on compilers for imperative differentiable programming languages. This is a very exciting avenue for the development of learning-based AI.
Important note: this won't be sufficient to take us to "true" AI. Other concepts will be needed for that, such as what I used to call predictive learning and now decided to call Imputative Learning. More on this later.
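
JAX is one existing step in this direction (the toy function below is mine, not from the post): ordinary imperative Python is differentiated with jax.grad and compiled with jax.jit, so the derivative of a program is itself a compiled program.

```python
# Differentiating and compiling an imperative Python function with JAX.
import jax
import jax.numpy as jnp

def program(w, x):
    # imperative-looking code with a loop, yet fully differentiable
    for _ in range(3):
        x = jnp.tanh(w * x)
    return jnp.sum(x ** 2)

grad_fn = jax.jit(jax.grad(program))   # compile the derivative program
print(grad_fn(0.5, jnp.arange(4.0)))   # d(program)/dw at w = 0.5
```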

“Differentiable programming”: This is the idea of viewing a program (or a circuit) as a graph of differentiable modules that can be trained with backprop. This points towards the possibility of not just learning to recognize patterns (as with feed-forward neural nets) but to produce algorithms (with loops, recursion, subroutines, etc.). There are a few papers on this from DeepMind, FAIR, and others, but it’s rather preliminary at the moment.
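
A small sketch of that idea (an illustrative toy, not from any of the cited papers): a "program" with a subroutine and recursion whose execution trace is still an end-to-end differentiable graph, so backprop can reach its parameter.

```python
# Backprop through a recursive program with a parameterized subroutine.
import torch

w = torch.tensor(0.5, requires_grad=True)

def subroutine(x):
    return torch.tanh(w * x)

def program(x, depth):
    if depth == 0:             # recursion with a base case
        return x
    return program(subroutine(x), depth - 1)

y = program(torch.tensor(2.0), depth=3)
y.backward()                   # backprop through the recursive call graph
print(w.grad)
```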

Papers

Slides

Material