A collection of papers on topological methods in deep learning
Subsampling Methods For Persistent Homology (ICML 2015): https://arxiv.org/abs/1406.1901
Sliced Wasserstein Kernel for Persistence Diagrams (ICML 2017): https://arxiv.org/abs/1706.03358
Deep Learning with Topological Signatures (NIPS 2017): https://arxiv.org/abs/1707.04041
What Does It Mean to Learn in Deep Networks? (CVPR 2019): http://openaccess.thecvf.com/content_CVPR_2019/papers/Corneanu_What_Does_It_Mean_to_Learn_in_Deep_Networks_And_CVPR_2019_paper.pdf
Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology (ICLR 2019): https://arxiv.org/abs/1812.09764
Topological Data Analysis of Decision Boundaries with Application to Model Selection (ICML 2019): https://arxiv.org/abs/1805.09949
Connectivity-Optimized Representation Learning via Persistent Homology (ICML 2019): https://arxiv.org/abs/1906.09003
PersLay: A Neural Network Layer for Persistence Diagrams and New Graph Topological Signatures (AISTATS 2020): https://arxiv.org/abs/1904.09378
Topological Autoencoders (ICML 2020): https://arxiv.org/abs/1906.00722
Computing the Testing Error Without a Testing Set (CVPR 2020): https://arxiv.org/pdf/2005.00450.pdf
On Characterizing the Capacity of Neural Networks Using Algebraic Topology (Preprint 2018): https://arxiv.org/abs/1802.04443
The Power of Depth for Feedforward Neural Networks (COLT 2016): https://arxiv.org/abs/1512.03965
Loss Landscapes of Regularized Linear Autoencoders (ICML 2019): https://arxiv.org/abs/1901.08168
On the Need for Topology-Aware Generative Models for Manifold-Based Defenses (ICLR 2020): https://openreview.net/pdf?id=r1lF_CEYwS