The Machine Learning Research Network is a forum for ML researchers, based at the University of Sheffield.
Coordinated by the Machine Learning group in the Department of Computer Science, with support from the Statistics group in the School of Mathematics and Statistics, we host a series of activities and events.
The aim of the network is to promote collaboration and to provide support for researchers and students who are working with, or have an interest in, machine learning topics. We also promote the Open Data Science Initiative and the philosophy of resource sharing in the research community, particularly of research software.
Neil Lawrence is coming to speak on the 17th June!
Title: Deep Gaussian Processes: A Motivation and Introduction
Where: Diamond, DIA - Lecture Theatre 3
Date: Friday, 17th June, 2022
Time: 16:30 (talk starts at 17:00).
Abstract: Modern machine learning methods have driven significant advances in artificial intelligence, with notable examples coming from deep learning, enabling superhuman performance in the game of Go and highly accurate prediction of protein folding (e.g. AlphaFold). In this talk we look at deep learning from the perspective of Gaussian processes. Deep Gaussian processes extend the notion of deep learning to propagate uncertainty alongside function values. We'll explain why this is important and show some simple examples.
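As a small taste of the topic ahead of the talk, a deep Gaussian process composes GPs so that the output of one layer becomes the input of the next. A minimal numpy sketch (all names here are our own, not from the talk) of sampling from a two-layer deep GP prior might look like:

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between two 1-D input vectors.
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 50)
jitter = 1e-6 * np.eye(len(x))  # numerical stabiliser for the Cholesky/sampling

# Layer 1: draw a function from a GP prior evaluated at the inputs x.
K1 = rbf_kernel(x, x) + jitter
h = rng.multivariate_normal(np.zeros(len(x)), K1)

# Layer 2: treat the layer-1 outputs h as the inputs to a second GP.
K2 = rbf_kernel(h, h) + jitter
f = rng.multivariate_normal(np.zeros(len(h)), K2)

print(f.shape)  # one sample of a two-layer deep GP prior at 50 inputs
```

This only draws prior samples; inference in a deep GP (propagating uncertainty through the layers given data) is the harder problem the talk addresses.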
Neil was previously Professor of Machine Learning here in Sheffield. He continued to develop applied probabilistic machine learning while Director of Machine Learning at Amazon in Cambridge. Then, in 2019, Neil was appointed the inaugural DeepMind Professor of Machine Learning at the University of Cambridge.
If you're working in the field of ML (in particular early-career researchers, or PhD and MSc students) we'd be very happy to have your posters up before/after the talk.
Pre-talk drinks/food 16:30-17:00; Talk 17:00-18:00; Drinks/food following talk 18:00-18:30.
To find out more about this event, email us.
Please email me if there are any interesting ML papers you'd like to chat about in a future meeting.
Below is the provisional schedule for the sessions and leaders:
Paper Title (link) | Leader | Time / Place | Notes |
---|---|---|---|
Spatio-Temporal Statistics with R; Chapter 5: Dynamic Spatio-Temporal Models (link) | Chris and Richard | | Note it's been moved back |
Collett, Thomas S., et al. "Coordinating compass-based and nest-based flight directions during bumblebee learning and return flights." Journal of Experimental Biology 216.6 (2013): 1105-1113. (link) | Mike Smith | 07/04/22 4pm-5pm, Ada Lovelace | |
Wilson, Andrew G., and Pavel Izmailov. "Bayesian deep learning and a probabilistic perspective of generalization." Advances in Neural Information Processing Systems 33 (2020): 4697-4708. (link) | Mike Smith | (delayed a week) | |
| | 24/02/22 | No JC this week: Wessel Bruinsma is visiting to speak about "Meta-Learning as Prediction Map Approximation" |
Liu, Qiang, and Dilin Wang. "Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm." Advances in Neural Information Processing Systems 29 (2016). (link) | Magnus Ross | 10/02/22 4pm-5pm, Ada Lovelace | |
A tutorial, "PDE-constrained optimization and the adjoint method" (link) | Chris Lanyon | 25/11/21 4pm-5pm, Ada Lovelace | |
"Bayesian computing with INLA: a review." (second attempt!) (link) | Mike Smith | 11/11/21 4pm-5pm, Ada Lovelace | We had a lot of questions and didn't get to the end of the paper. Hopefully at this meeting we can work through the questions from last time. |
"Bayesian computing with INLA: a review." (link) | Mike Smith | 14/10/21 4pm-5pm, Ada Lovelace | |
- 9th July, 2020. Multi-resolution Multi-task Gaussian Processes: London air pollution. Ollie Hamelijnck, The Alan Turing Institute, London. video
- 2nd July 2020. Learning Theory for Continual and Meta-Learning. Prof Christoph Lampert, Institute of Science and Technology Austria. video
- 18th June, 2020. Neural circuit redundancy, stability, and variability in developmental brain disorders. Dr Cian O'Donnell, University of Bristol. video
- 11th June, 2020. The geometry of abstraction in artificial and biological neural networks. Prof Stefano Fusi, Columbia University, USA. video
Some of those involved include: