LukasStruppek
PhD student at the Artificial Intelligence and Machine Learning Lab at TU Darmstadt, working on the privacy and security of AI and deep learning systems.
Artificial Intelligence and Machine Learning Lab, Darmstadt, Germany
Pinned Repositories
Adversarial_LLMs
Source code for our paper "Exploring the Adversarial Capabilities of Large Language Models" (ICLR 2024 Workshop).
Class_Attribute_Inference_Attacks
Source code for our paper "Image Classifiers Leak Sensitive Attributes About Their Classes".
Exploiting-Cultural-Biases-via-Homoglyphs
[Journal of Artificial Intelligence Research] Source code for our paper "Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis".
LukasStruppek
lukasstruppek.github.io
My personal website
Moderne-Programmierkonzepte-Unterlagen
Materials for the lecture Moderne Programmierkonzepte (Modern Programming Concepts)
Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
Rickrolling-the-Artist
[ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models".
Robust_Training_on_Poisoned_Samples
Source code for our paper "Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data" (NeurIPS 2023 Workshop).
Learning-to-Break-Deep-Perceptual-Hashing
[FAccT 2022] Source code for our paper "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash".
LukasStruppek's Repositories
LukasStruppek/PyTorch-Project-Template
Basic template for PyTorch projects.