Link: paper list

Total submissions: 4,854
Acceptance rate: 21%

Best papers (reference)

  • Non-Delusional Q-Learning and Value-Iteration from researchers at Google AI. The paper first identified a fundamental problem in Q-learning, “delusional bias”, and demonstrated its detrimental consequences, then proposed a new policy-consistent backup operator that fully resolves the delusion (a toy sketch of the standard backup at issue appears after this list).

  • Optimal Algorithms for Non-Smooth Distributed Optimization in Networks from researchers at Huawei Noah’s Ark Lab, Microsoft Research, MSR-INRIA Joint Centre, PSL Research University, and University of Washington. The paper studied the distributed optimization of non-smooth convex functions and proposed two optimal algorithms for the problem: multi-step primal-dual (MSPD) and distributed randomized smoothing (DRS).

  • Nearly Tight Sample Complexity Bounds for Learning Mixtures of Gaussians via Sample Compression Schemes from researchers at McMaster University, University of Waterloo, University of British Columbia, and McGill University. The paper proposed a general sample-compression technique for distribution learning, then applied it to the important setting of mixtures of Gaussians.

  • Neural Ordinary Differential Equations from researchers at the University of Toronto and the Vector Institute. The paper parameterized the continuous dynamics of hidden units with an ordinary differential equation (ODE) specified by a neural network, yielding a new family of deep neural network models for time-series modeling, supervised learning, and density estimation (a minimal sketch of the idea appears after this list).

  • The Test of Time Award goes to The Tradeoffs of Large-Scale Learning, a NIPS 2007 paper from researchers at NEC Laboratories America and Google Zurich. The paper developed a theoretical framework that accounts for the effect of approximate optimization on learning algorithms.
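As context for the first award, here is a minimal tabular sketch of the standard Q-learning backup whose max over actions is the source of the delusional bias the paper analyzes when Q is restricted to a function-approximation class. The toy MDP, sizes, and hyperparameters are illustrative assumptions only; this is not the paper's policy-consistent operator.

```python
import numpy as np

# Toy MDP and the standard Q-learning backup. All sizes, dynamics, and
# hyperparameters here are illustrative assumptions; this is NOT the
# paper's policy-consistent backup operator.
n_states, n_actions = 5, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = distribution over next states
R = rng.normal(size=(n_states, n_actions))                        # rewards r(s, a)
gamma, alpha = 0.9, 0.1

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(10_000):
    a = int(rng.integers(n_actions))            # exploratory behavior policy
    s_next = rng.choice(n_states, p=P[s, a])
    # Standard backup: bootstrap from max_a' Q(s', a'). Under a restricted
    # function class for Q, this max can back up values that no single
    # expressible greedy policy achieves -- the "delusion" the paper identifies.
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q)
```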
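And here is a minimal sketch of the Neural ODE idea from the fourth award: instead of stacking discrete residual layers h_{t+1} = h_t + f(h_t), the hidden state is treated as the solution of dh/dt = f(h, t) and computed with an ODE solver. The one-layer tanh dynamics, its weights, and the fixed-step Euler integrator are illustrative assumptions; the paper uses adaptive black-box solvers.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # hypothetical dynamics weights
b = np.zeros(4)

def f(h, t):
    """Learned dynamics dh/dt of the hidden state (a stand-in one-layer net)."""
    return np.tanh(h @ W + b)

def odeint_euler(f, h0, t0, t1, steps=100):
    """Fixed-step Euler integration of dh/dt = f(h, t) from t0 to t1."""
    h, t = h0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)
        t += dt
    return h

h0 = rng.normal(size=4)             # input embedding -> initial hidden state
h1 = odeint_euler(f, h0, 0.0, 1.0)  # "depth" becomes integration time
print(h1)
```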
