Pinned Repositories
AnalysisBySynthesis
Adversarially Robust Neural Network on MNIST.
foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
imagecorruptions
Python package to corrupt arbitrary images.
model-vs-human
Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral)
openimages2coco
Convert Open Images annotations into MS COCO format to make them a drop-in replacement
robust-detection-benchmark
Code, data and benchmark from the paper "Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming" (NeurIPS 2019 ML4AD)
robustness
Robustness and adaptation of ImageNet-scale models. Pre-release; stay tuned for updates.
siamese-mask-rcnn
Siamese Mask R-CNN model for one-shot instance segmentation
slow_disentanglement
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
stylize-datasets
A script that applies the AdaIN style transfer method to arbitrary datasets
Bethge Lab's Repositories
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
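To illustrate what an adversarial attack like those in foolbox does, here is a minimal self-contained sketch of a single FGSM-style perturbation step in plain NumPy. This is a toy re-implementation for illustration only, not foolbox's actual API (foolbox provides model wrappers and many ready-made attacks; see its documentation).

```python
import numpy as np

def fgsm_step(x, grad, eps=0.03):
    """One FGSM-style perturbation: move every pixel by eps along the
    sign of the loss gradient, then clip back to the valid [0, 1] range.
    Illustrative sketch only; foolbox implements many such attacks."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

x = np.full((2, 2), 0.5)                     # toy "image"
grad = np.array([[1.0, -2.0], [0.0, 3.0]])   # toy loss gradient
print(fgsm_step(x, grad))
```

The clipping step is what keeps the adversarial example a valid image; real attacks additionally constrain the perturbation norm per epsilon.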
bethgelab/imagecorruptions
Python package to corrupt arbitrary images.
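The kind of corruption this package applies can be sketched in a few lines of NumPy; the snippet below is a hypothetical additive-Gaussian-noise corruption in the same spirit, not the package's actual code (the severity-to-sigma mapping here is made up for illustration).

```python
import numpy as np

def gaussian_noise(image, severity=1, rng=None):
    """Corrupt a uint8 image with additive Gaussian noise whose strength
    grows with `severity` (1-5). Hypothetical sketch, not the
    imagecorruptions implementation."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]  # assumed mapping
    noisy = image / 255.0 + rng.normal(scale=sigma, size=image.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

img = np.full((4, 4, 3), 128, dtype=np.uint8)  # flat gray toy image
out = gaussian_noise(img, severity=3, rng=np.random.default_rng(0))
print(out.shape)  # → (4, 4, 3)
```

The package itself exposes many corruption types (noise, blur, weather, digital) at five severity levels each.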
bethgelab/siamese-mask-rcnn
Siamese Mask R-CNN model for one-shot instance segmentation
bethgelab/model-vs-human
Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral)
bethgelab/robust-detection-benchmark
Code, data and benchmark from the paper "Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming" (NeurIPS 2019 ML4AD)
bethgelab/stylize-datasets
A script that applies the AdaIN style transfer method to arbitrary datasets
bethgelab/robustness
Robustness and adaptation of ImageNet-scale models. Pre-release; stay tuned for updates.
bethgelab/openimages2coco
Convert Open Images annotations into MS COCO format to make them a drop-in replacement
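For context, the MS COCO annotation layout that such a conversion must produce looks roughly like the dictionary below (the field names follow the COCO data format; the concrete values are invented for illustration).

```python
import json

# Minimal example of the MS COCO detection annotation layout that
# converted Open Images labels must follow. Field names are from the
# COCO format; the values here are made up.
coco = {
    "images": [
        {"id": 1, "file_name": "0001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 3,
            "bbox": [100.0, 120.0, 50.0, 80.0],  # [x, y, width, height]
            "area": 4000.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 3, "name": "car", "supercategory": "vehicle"},
    ],
}
print(sorted(coco))  # → ['annotations', 'categories', 'images']
```

Open Images stores boxes as normalized corner coordinates, so a converter has to rescale them to absolute `[x, y, width, height]` pixels and remap class labels to integer category ids.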
bethgelab/slow_disentanglement
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
bethgelab/AnalysisBySynthesis
Adversarially Robust Neural Network on MNIST.
bethgelab/frequency_determines_performance
Code for the paper: "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance"
bethgelab/game-of-noise
Trained model weights, training and evaluation code from the paper "A simple way to make neural networks robust against diverse image corruptions"
bethgelab/InDomainGeneralizationBenchmark
bethgelab/slurm-monitoring-public
Monitor your high-performance computing infrastructure managed by Slurm using the TIG stack (Telegraf, InfluxDB, Grafana)
bethgelab/DataTypeIdentification
Code for the ICLR'24 paper: "Visual Data-Type Understanding does not emerge from Scaling Vision-Language Models"
bethgelab/magapi-wrapper
Wrapper around Microsoft Academic Knowledge API to retrieve MAG data
bethgelab/testing_visualizations
Code for the paper "Exemplary Natural Images Explain CNN Activations Better than Feature Visualizations"
bethgelab/docker-deeplearning
Development of new unified docker container (WIP)
bethgelab/notorious_difficulty_of_comparing_human_and_machine_perception
Code for the three case studies: Closed Contour Detection, Synthetic Visual Reasoning Test, Recognition Gap
bethgelab/lifelong-benchmarks
Benchmarks introduced in the paper: "Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress"
bethgelab/tools
bethgelab/mmdetection
Fork of the MMDetection Toolbox containing the Robustness Benchmark from the paper "Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming" (merged)
bethgelab/sort-and-search
Code for the paper: "Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress"
bethgelab/mnist_challenge
A challenge to explore adversarial robustness of neural networks on MNIST.
bethgelab/bwki-weekly-tasks
BWKI Task of the week
bethgelab/DeepLabCut
Markerless tracking of user-defined features with deep learning
bethgelab/defensive-distillation
bethgelab/foolbox-zoo-dummy
bethgelab/inference_results_v1.1
bethgelab/texture-vs-shape
Pre-trained models, data, code & materials from the paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" (ICLR 2019 Oral)