Pinned Repositories
CODA
CODA is a novel adversarial example generation technique for testing deep code models. Its key idea is to use code differences between the target input and reference inputs to guide the generation of adversarial examples.
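A minimal illustrative sketch of that idea follows; it is not CODA's actual implementation, and every name here (identifiers, guided_rename_attack, model_predict) is a hypothetical placeholder. The assumption is that identifiers appearing in reference inputs but not in the target serve as candidate rename substitutions.

```python
# Hypothetical sketch only: use token-level differences between a target
# snippet and reference snippets to guide an identifier-rename search.
import re

def identifiers(code):
    """Crude identifier extraction, for illustration only."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code))

def guided_rename_attack(target_code, target_label, reference_codes, model_predict):
    # Candidate tokens: identifiers present in the references but not the target.
    candidates = set().union(*(identifiers(r) for r in reference_codes)) - identifiers(target_code)
    for old in identifiers(target_code):
        for new in candidates:
            perturbed = re.sub(rf"\b{old}\b", new, target_code)
            if model_predict(perturbed) != target_label:  # prediction flipped
                return perturbed
    return None  # no adversarial example found by this simple search
```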
Adversarial-example-paper
automated-interpretability
deep-learning-uncertainty
Literature survey, paper reviews, experimental setups, and a collection of implementations of baseline methods for predictive uncertainty estimation in deep learning models.
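One baseline such a collection typically includes is Monte Carlo dropout. The sketch below is an assumption about what such a baseline looks like in PyTorch, not code taken from this repository.

```python
# Minimal MC-dropout sketch: keep dropout active at inference time and
# average several stochastic forward passes to estimate uncertainty.
import torch

def mc_dropout_predict(model, x, n_samples=30):
    model.train()  # enables dropout layers (simplified; also affects batchnorm)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)  # predictive distribution
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)  # total uncertainty
    return mean, entropy
```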
feature-entropy
Feature entropy quantifies the importance of individual units in CNNs.
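A minimal sketch of a feature-entropy style score is shown below: estimate the entropy of each channel's activation distribution over a batch of inputs. The binning and normalization here are illustrative assumptions, not necessarily the repository's exact definition.

```python
# Entropy of per-channel activation histograms as a unit-importance score.
import numpy as np

def channel_entropy(activations, n_bins=32):
    """activations: array of shape (N, C, H, W) from one convolutional layer."""
    c = activations.shape[1]
    scores = np.empty(c)
    for ch in range(c):
        hist, _ = np.histogram(activations[:, ch].ravel(), bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        scores[ch] = -(p * np.log(p)).sum()  # higher entropy -> more informative unit
    return scores
```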
neuralcollapse
Code reproducing the Neural Collapse phenomenon under MSE and cross-entropy losses.
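For context, the sketch below computes the commonly used NC1 "variability collapse" metric, Tr(Σ_W Σ_B⁺)/K, on last-layer features; it follows the standard definition and may differ from this repository's exact code.

```python
# NC1 metric: within-class scatter measured against between-class scatter.
import numpy as np

def nc1_metric(features, labels):
    """features: (N, d) last-layer features; labels: (N,) integer class ids."""
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    d = features.shape[1]
    sigma_w = np.zeros((d, d))
    sigma_b = np.zeros((d, d))
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        centered = fc - mu_c
        sigma_w += centered.T @ centered / len(features)
        diff = (mu_c - global_mean)[:, None]
        sigma_b += diff @ diff.T / len(classes)
    # Approaches 0 as within-class variability collapses to the class means.
    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / len(classes)
```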
NSGen
POT
POT: Python Optimal Transport
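A quick usage example of the POT library: computing the exact optimal transport plan and cost between two small empirical distributions (the sample data here is made up for illustration).

```python
import numpy as np
import ot  # pip install pot

xs = np.random.randn(50, 2)        # source samples
xt = np.random.randn(60, 2) + 3.0  # target samples
a = np.full(50, 1 / 50)            # uniform source weights
b = np.full(60, 1 / 60)            # uniform target weights

M = ot.dist(xs, xt)                # pairwise squared-Euclidean cost matrix
G = ot.emd(a, b, M)                # optimal transport plan (exact LP solver)
cost = np.sum(G * M)               # transport cost under the optimal plan
print(cost)
```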
Testing-Zoo