Hi there 👋
I recently finished my PhD ("DPhil", with the AIMS CDT) in Machine Learning at OATML at the University of Oxford. Here is my quick online CV 🤗
🎓 Education & 💼 Industry Experience
- DPhil Computer Science
  University of Oxford, supervised by Prof Yarin Gal, Oxford, UK, Oct 2018 -- Summer 2023
  Deep active learning and data subset selection using information theory and Bayesian neural networks (a minimal code sketch of the underlying BALD score follows this list).
- Research Engineer (Intern)
  Opal Camera, Remote, Oct 2022 -- Dec 2022
  Validation pipeline for a gesture-control system.
- Resident Fellow
  Newspeak House, London, UK, Jan 2018 -- Jul 2018
  "AI & Politics" event series; science communication.
- Performance Research Engineer
  DeepMind, London, UK, Oct 2016 -- Aug 2017
  TensorFlow performance improvements (custom CUDA kernels) and profiling, e.g. for “Neural Episodic Control”; automated agent regression testing.
- Software Engineer
  Google, Zürich, CH, Jul 2013 -- Sep 2016
  App & testing infrastructure; latency optimization; front-end development (Dart/GWT).
- MSc Computer Science
  Technische Universität München, München, DE, Sep 2009 -- Oct 2012
  Thesis: “Assisted Object Placement”.
- BSc Mathematics
  Technische Universität München, München, DE, Sep 2009 -- Mar 2012
  Thesis: “Discrete Elastic Rods”.
- BSc Computer Science
  Technische Universität München, München, DE, Sep 2007 -- Sep 2009
  Thesis: “Multi-Tile Terrain Rendering with OGL/Equalizer”.
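To give a flavor of the DPhil topic above, here is a minimal, illustrative sketch of the BALD acquisition score that deep Bayesian active learning builds on. The function name, tensor shapes, and usage are assumptions for illustration, not code from the thesis; BatchBALD [5] replaces this per-point score with a joint mutual-information estimate to avoid redundant batches.

```python
import torch

def bald_scores(probs: torch.Tensor) -> torch.Tensor:
    """BALD scores for classification (illustrative sketch).

    probs: [K, N, C] class probabilities from K stochastic forward
    passes (e.g. MC dropout) over N pool points with C classes.
    Returns: [N] mutual-information scores I(y; parameters | x).
    """
    eps = 1e-12
    mean_probs = probs.mean(dim=0)  # [N, C] posterior-predictive estimate
    # Entropy of the mean prediction: total predictive uncertainty.
    entropy_of_mean = -(mean_probs * (mean_probs + eps).log()).sum(dim=-1)
    # Mean entropy of the individual predictions: expected aleatoric part.
    mean_of_entropies = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)
    # The difference is the epistemic part that BALD acquires on.
    return entropy_of_mean - mean_of_entropies

# Hypothetical usage: score a pool and acquire the top-b points.
probs = torch.softmax(torch.randn(20, 1000, 10), dim=-1)  # K=20 MC samples
batch_indices = bald_scores(probs).topk(10).indices
```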
🧑‍🔬 Research
📚 Publications
Conference Proceedings
[1] J. Mukhoti*, A. Kirsch*, J. van Amersfoort, P. H. Torr, and Y. Gal, "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty," CVPR, 2023.
[2] F. Bickford Smith*, A. Kirsch*, S. Farquhar, Y. Gal, A. Foster, and T. Rainforth, "Prediction-Oriented Bayesian Active Learning," AISTATS, 2023.
[3] S. Mindermann*, J. M. Brauner*, M. T. Razzak*, A. Kirsch, et al., "Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt," ICML, 2022.
[4] A. Jesson*, P. Tigas*, J. van Amersfoort, A. Kirsch, U. Shalit, and Y. Gal, "Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data," NeurIPS, 2021.
[5] A. Kirsch*, J. van Amersfoort*, and Y. Gal, "BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning," NeurIPS, 2019.
Journal Articles
[6] A. Kirsch, "Black-Box Batch Active Learning for Regression," TMLR, 2023.
[7] A. Kirsch, "Does ‘Deep Learning on a Data Diet’ reproduce? Overall yes, but GraNd at Initialization does not," TMLR, 2023.
[8] A. Kirsch*, S. Farquhar*, P. Atighehchian, A. Jesson, F. Branchaud-Charron, and Y. Gal, "Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning," TMLR, 2023.
[9] A. Kirsch and Y. Gal, "A Note on ‘Assessing Generalization of SGD via Disagreement’," TMLR, 2022.
[10] A. Kirsch and Y. Gal, "Unifying Approaches in Data Subset Selection via Fisher Information and Information-Theoretic Quantities," TMLR, 2022.
Workshop Papers
[11] D. Tran, J. Liu, M. W. Dusenberry, et al., "Plex: Towards Reliability using Pretrained Large Model Extensions," Principles of Distribution Shifts & First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward, ICML, 2022.
[12] A. Kirsch, J. Kossen, and Y. Gal, "Marginal and Joint Cross-Entropies & Predictives for Online Bayesian Inference, Active Learning, and Active Sampling," Updatable Machine Learning, ICML, 2022.
[13] A. Kirsch, J. Mukhoti, J. van Amersfoort, P. H. Torr, and Y. Gal, "On Pitfalls in OoD Detection: Entropy Considered Harmful," Uncertainty in Deep Learning, 2021.
[14] A. Kirsch, T. Rainforth, and Y. Gal, "Active Learning under Pool Set Distribution Shift and Noisy Data," SubSetML, 2021.
[15] A. Kirsch*, S. Farquhar*, and Y. Gal, "A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions," SubSetML, 2021.
[16] A. Kirsch and Y. Gal, "A Practical & Unified Notation for Information-Theoretic Quantities in ML," SubSetML, 2021.
[17] A. Kirsch, C. Lyle, and Y. Gal, "Scalable Training with Information Bottleneck Objectives," Uncertainty in Deep Learning, 2020.
[18] A. Kirsch, C. Lyle, and Y. Gal, "Learning CIFAR-10 with a Simple Entropy Estimator Using Information Bottleneck Objectives," Uncertainty in Deep Learning, 2020.
📝 Reviewing
NeurIPS 2019 (Top Reviewer), AAAI 2020, AAAI 2021, ICLR 2021, NeurIPS 2021 (Outstanding Reviewer), NeurIPS 2022, TMLR, CVPR 2023.
🎯 Interests & Skills
Active Learning, Subset Selection, Information Theory, Information Bottlenecks, Uncertainty Quantification; Python, PyTorch, JAX, C++, CUDA, TensorFlow.