ryanchankh
Interested in developing principled deep learning algorithms
University of Pennsylvania · Philadelphia, PA
Pinned Repositories
cifar100coarse
Build PyTorch CIFAR100 using coarse labels
L0L1L4NormSn
Analysis of the L0, L1, and L4 norms on S1
l4_dictionary_learning
Experiments on L4-based Dictionary Learning
mcr2
Official Implementation of Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction (2020)
power_iteration
Implementation of Different Power Iteration Methods
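As a reference point for the variants collected here, a minimal NumPy sketch of vanilla power iteration (an illustration of the basic method, not this repository's code):

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    """Estimate the dominant eigenpair of a symmetric matrix A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)   # renormalize each step
    lam = v @ A @ v                 # Rayleigh quotient estimate
    return lam, v

# Toy check on a diagonal matrix with dominant eigenvalue 3
A = np.diag([3.0, 1.0, 0.5])
lam, v = power_iteration(A)
```

Convergence is geometric in the ratio of the second-largest to largest eigenvalue magnitude, which is why shifted and deflated variants exist.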
redunet
Official PyTorch Implementation of Deep Networks from the Principle of Rate Reduction (2021)
redunet_demo
redunet_paper
Official NumPy Implementation of Deep Networks from the Principle of Rate Reduction (2021)
style_transfer
Implementation of Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. 2016. “Image Style Transfer Using Convolutional Neural Networks.”
VariationalInformationPursuit
Official Implementation for Variational Information Pursuit for Interpretable Predictions (ICLR 2023)
ryanchankh's Repositories
ryanchankh/mcr2
Official Implementation of Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction (2020)
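The objective in that paper is built from the rate-distortion coding rate R(Z, ε) = ½ log det(I + d/(nε²) ZZᵀ). A minimal NumPy sketch of that quantity (an illustration of the objective only, not the repository's training code):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z, eps) of a d x n feature matrix Z."""
    d, n = Z.shape
    I = np.eye(d)
    # slogdet is numerically safer than log(det(...))
    return 0.5 * np.linalg.slogdet(I + (d / (n * eps**2)) * Z @ Z.T)[1]

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 100))
Z /= np.linalg.norm(Z, axis=0)  # features on the unit sphere, as in the paper
R = coding_rate(Z)
```

MCR² maximizes the rate of the whole feature set minus the sum of per-class rates, pushing classes apart while keeping each class compact.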
ryanchankh/redunet_demo
ryanchankh/redunet_paper
Official NumPy Implementation of Deep Networks from the Principle of Rate Reduction (2021)
ryanchankh/cifar100coarse
Build PyTorch CIFAR100 using coarse labels
ryanchankh/redunet
Official PyTorch Implementation of Deep Networks from the Principle of Rate Reduction (2021)
ryanchankh/style_transfer
Implementation of Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. 2016. “Image Style Transfer Using Convolutional Neural Networks.”
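Gatys-style transfer matches Gram-matrix statistics of CNN feature maps between the generated and style images. A minimal NumPy sketch of one layer's style term (illustrative only; the actual method computes these on VGG features in PyTorch):

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a C x H x W feature map, normalized by its size."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_gen, feat_style):
    """One layer's style term: squared Frobenius distance between Grams."""
    return np.sum((gram_matrix(feat_gen) - gram_matrix(feat_style)) ** 2)

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 4, 4))
g = rng.standard_normal((3, 4, 4))
```

The full loss sums this over several layers and adds a content term that compares raw feature maps at one deeper layer.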
ryanchankh/VariationalInformationPursuit
Official Implementation for Variational Information Pursuit for Interpretable Predictions (ICLR 2023)
ryanchankh/LDR
The official PyTorch implementation of the paper: Xili Dai, Shengbang Tong, et al., "Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction."
ryanchankh/mae
PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377
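MAE's distinctive preprocessing step is randomly masking a high fraction (typically 75%) of patch tokens so the encoder sees only the visible remainder. A hedged NumPy sketch of that masking step (not this repository's implementation):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """Keep a random (1 - mask_ratio) subset of N patch tokens, MAE-style."""
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    rng = np.random.default_rng(seed)
    keep = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)
    mask[keep] = False          # False = visible, True = masked
    return patches[keep], keep, mask

# 16 patch tokens of dimension 4; with mask_ratio=0.75, 4 remain visible
patches = np.arange(16 * 4, dtype=float).reshape(16, 4)
visible, keep, mask = random_masking(patches)
```

The decoder later receives the visible tokens plus learned mask tokens, reordered by the saved `keep` indices, and reconstructs the masked pixels.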
ryanchankh/SimCLR
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
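SimCLR trains with the NT-Xent loss over two augmented views of each image: each view's positive is its counterpart, and all other views in the batch act as negatives. A minimal NumPy sketch of the loss (illustrative, not this repository's PyTorch code):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over paired embeddings z1, z2 (each N x d)."""
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau                               # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)  # cross-entropy per row
    return loss.mean()
```

Lower loss means each view ranks its own counterpart above the in-batch negatives; identical views should therefore score better than random ones.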
ryanchankh/vision-transformers-cifar10
Let's train vision transformers (ViT) on CIFAR-10!
ryanchankh/barks
A simple, minimalistic theme for Hugo.
ryanchankh/captum
Model interpretability and understanding for PyTorch
ryanchankh/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
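At inference time, CLIP ranks candidate text snippets by cosine similarity with the image embedding. A toy NumPy sketch of that retrieval step, assuming image and text embeddings are already computed (the trained encoders themselves are the substantive part of the repository):

```python
import numpy as np

def most_relevant_text(image_emb, text_embs):
    """Index of the text embedding most cosine-similar to the image."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# Toy example: three candidate texts; the image is closest to text 1
texts = np.eye(3)
image = np.array([0.1, 0.9, 0.2])
best = most_relevant_text(image, texts)
```

Zero-shot classification is this same step with one text embedding per class name.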
ryanchankh/dgm23_GPT
A minimal and efficient PyTorch implementation of OpenAI's GPT (Generative Pretrained Transformer).
ryanchankh/FT-CLIP
CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet
ryanchankh/GraphVQA
GraphVQA: Language-Guided Graph Neural Networks for Scene Graph Question Answering
ryanchankh/INVASE
Codebase for INVASE: Instance-wise Variable Selection (ICLR 2019)
ryanchankh/ip-omp
ryanchankh/ISONet
Deep Isometric Learning for Visual Recognition (ICML 2020)
ryanchankh/ISTA-Net-PyTorch
ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing, CVPR2018 (PyTorch Code)
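ISTA-Net unrolls the classical ISTA iteration x ← soft(x − (1/L)Aᵀ(Ax − b), λ/L) into a trainable network. The underlying iteration for the LASSO problem, sketched in NumPy (the repository's learned version replaces the fixed transforms with trained layers):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.05, num_iters=2000):
    """Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 with ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# Noiseless sparse recovery: 2-sparse signal in R^50 from 30 measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]
b = A @ x_true
x_hat = ista(A, b)
```

Each unrolled ISTA-Net phase corresponds to one of these iterations, with the sparsifying transform and threshold learned end to end.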
ryanchankh/latent_ode
Code for "Latent ODEs for Irregularly-Sampled Time Series" paper
ryanchankh/neuron-descriptions
Natural Language Descriptions of Deep Visual Features, ICLR 2022
ryanchankh/pytorch-cifar
95.47% on CIFAR10 with PyTorch
ryanchankh/ryanchankh.github.io
A beautiful, simple, clean, and responsive Jekyll theme for academics
ryanchankh/ryanchankh.github.io_archive
Personal Website
ryanchankh/SparseScatNet
Code implementation of paper: Deep Network Classification by Scattering and Homotopy Dictionary Learning
ryanchankh/STAM-Sequential-Transformers-Attention-Model
Official implementation of "Consistency driven Sequential Transformers Attention Model for Partially Observable Scenes" [CVPR'22]
ryanchankh/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
ryanchankh/ViLT
Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"