Seminars

The Intelligent Systems Group organises bi-weekly research seminars, which take place in the Merchant Venturers Building on Thursdays. We also organise problem workshops with companies and other interested parties. These are talks by industrialists and organisations from finance, healthcare and many other areas who have an application involving machine learning or computational statistics and are keen to establish a collaborative link with ISL members; they have typically indicated that they wish to co-invest in support of this objective. Because these are not our regular academic seminars, they can be much shorter than the usual 50 minutes and typically consist of an informal presentation of the topic of interest followed by a discussion of the data available. Given the nature of these talks, no abstract is given and the title may be omitted. ISL members, affiliates and UoB academic staff from other faculties are welcome to attend, and we are always keen to facilitate developing contacts.

Note that the time and location of the seminars vary from week to week.

Manifolds of Shape via Gaussian Process Latent Variable Models, Dr. Neill Campbell, University of Bath, 2nd of February, 15:00-16:00, MVB 1.06

Abstract: In this talk we will look at Gaussian processes and latent variable models, in particular focusing on how they may be used to learn generative, probabilistic models of shape. As well as looking at some of the theory behind the models, I will show a number of real-world applications of such models within the domains of computer vision and graphics. I will also provide details of the challenges in this area and some early results of new work.
Bio: Neill Campbell is a Lecturer in Computer Vision, Graphics and Machine Learning in the Department of Computer Science at the University of Bath. He also holds an Honorary Lecturer position in the Virtual Environments and Computer Graphics Group in the Department of Computer Science at University College London, where he was formerly a Research Associate working with Jan Kautz and Simon Prince on synthesizing and editing photorealistic visual objects, funded by the EPSRC. Prior to this, Neill was a Research Associate in the Computer Vision Group of the Machine Intelligence Laboratory in the Department of Engineering at the University of Cambridge, working on the EU Hydrosys Project led by Ed Rosten. Neill completed his PhD in the Computer Vision Group at the University of Cambridge under the supervision of Roberto Cipolla and the guidance of George Vogiatzis and Carlos Hernández.
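
For readers new to the topic, the snippet below is a minimal, illustrative sketch of the generative picture behind a GP latent variable model: low-dimensional latent points are mapped to higher-dimensional "shape" vectors by independent Gaussian process draws with an RBF kernel. The latent dimension, kernel parameters and data sizes are assumptions for illustration, not details from the talk.

```python
# Illustrative sketch only (not the speaker's code): sampling "shapes" from a
# GP prior over functions of a low-dimensional latent variable, the generative
# model underlying GP latent variable models. All parameters are assumptions.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0, jitter=1e-6):
    """Squared-exponential kernel matrix for latent inputs X (N x Q)."""
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = variance * np.exp(-0.5 * sq_dists / lengthscale**2)
    return K + jitter * np.eye(len(X))

rng = np.random.default_rng(0)
N, Q, D = 50, 2, 10           # N shapes, Q latent dims, D observed dims
X = rng.normal(size=(N, Q))   # latent coordinates (learned in a real GP-LVM)
K = rbf_kernel(X)
# Each observed dimension is an independent GP draw over the latent space.
Y = rng.multivariate_normal(np.zeros(N), K, size=D).T   # N x D "shape" data
print(Y.shape)
```

In an actual GP-LVM the latent coordinates X and the kernel hyperparameters are optimised or integrated out given observed data Y, rather than sampled from the prior as above.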

Prof. Andrea Sgarro, University of Trieste, 9th of February, 14:00-15:00, MVB 1.06

Abstract: Back in 1967 the Croatian linguist Muljacic used a fuzzy generalization of the Hamming distance between binary strings to classify Romance languages. In 1956 Claude Shannon had introduced the notion of codeword distinguishability in zero-error information theory. Distance and distinguishability are subtly different notions, even if, with distances such as those usually met in coding theory (with the exception of zero-error information theory, which is definitely non-metric), the need for string distinguishabilities evaporates, since the distinguishability turns out to be an obvious and trivial function of the distance. Fuzzy Hamming distinguishabilities derived from Muljacic distances, instead, are not that trivial and must be considered explicitly. They are quite easy to compute, however, and we show how they could be applied in coding theory to channels with erasures and blurs. The new tool of fuzzy Hamming distinguishability appears quite promising for extending the Muljacic approach from linguistic classification to linguistic evolution.
Bio: Andrea Sgarro is full professor of Theoretical Computer Science at the University of Trieste. His research interests include information theory and codes, cryptography, bioinformatics, soft computing, management of incomplete knowledge, and computational linguistics. He is responsible for the scientific section of the Circolo della Cultura e delle Arti of Trieste and is quite active in scientific communication: his books Secret Codes (Mondadori) and Cryptography (Muzzio) introduced cryptology to an Italian-speaking audience for the first time. In his free time he enjoys languages, of which he speaks a dozen with varying degrees of competence, and plays the one-keyed transverse baroque flute.
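
The abstract does not spell out Muljacic's fuzzy distance or the fuzzy distinguishability itself, so the sketch below only illustrates the classical ingredients they build on: the Hamming distance between binary strings and a toy distinguishability check over a channel with erasures. The erasure criterion used here is an illustrative assumption, not Sgarro's definition.

```python
# Illustrative only: classical Hamming distance and a toy distinguishability
# check under erasures. This is NOT the fuzzy Muljacic distinguishability
# discussed in the talk, whose definition is not given in the abstract.
def hamming(u, v):
    """Number of positions at which two equal-length binary strings differ."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

def distinguishable_after_erasures(u, v, erased_positions):
    """Two codewords remain distinguishable if they differ somewhere outside
    the erased positions (toy erasure-channel criterion)."""
    return any(a != b for i, (a, b) in enumerate(zip(u, v))
               if i not in erased_positions)

print(hamming("10110", "10011"))                                  # 2
print(distinguishable_after_erasures("10110", "10011", {2, 4}))   # False: differences are erased
```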

CANCELLED Prof. Mark Girolami, University College London, 23rd of February, 14:00-15:00, MVB 1.06

Abstract: Ambitious mathematical models of highly complex natural phenomena are challenging to analyse, and more and more computationally expensive to evaluate. This is a particularly acute problem for many tasks of interest: numerical methods tend to be slow, due to the complexity of the models, and can lead to sub-optimal solutions with high levels of uncertainty, which need to be accounted for and subsequently propagated in the statistical reasoning process. This talk will introduce our contributions to an emerging area of research defining a nexus of applied mathematics, statistical science and computer science, called “probabilistic numerics”. The aim is to consider numerical problems from a statistical viewpoint, and as such provide numerical methods for which numerical error can be quantified and controlled in a probabilistic manner. This philosophy will be illustrated on problems ranging from predictive policing via crime modelling to computer vision, where probabilistic numerical methods provide a rich and essential quantification of the uncertainty associated with such models and their computation.
Bio: Mark Girolami is Professor of Statistics in the Department of Statistical Science at Imperial College London. Prior to joining Imperial College, Mark held Chairs in Computing and Inferential Science at the University of Glasgow, in Statistics at UCL and subsequently Warwick University. In 2011 he was elected to the Fellowship of the Royal Society of Edinburgh when he was also awarded a Royal Society Wolfson Research Merit Award. He was one of the founding Executive Directors of the Alan Turing Institute for Data Science from 2015 to 2016. He is an EPSRC Established Career Research Fellow and Director of the Lloyds Register Foundation-Turing Programme on Data Centric Engineering of The Alan Turing Institute. He is currently an Associate Editor for J. R. Statist. Soc. C, Journal of Computational and Graphical Statistics, Statistics & Computing, and Area Editor for Pattern Recognition Letters. He is a member of the Research Section of the Royal Statistical Society.

Problem workshop with Piccadilly Group, 23rd of March, 15:00-16:00, MVB 1.06

Abstract: In this session, we'll hear from the CEO of Piccadilly Group, Dan Hooper, and its CTO, Adam Smith, who will outline the underlying issues and challenges in the management of software testing and technology delivery within banking, and how they see AI addressing many of these challenges.

Problem Statement: The group discussion will focus on the practical challenges of developing artificial intelligence and machine learning for use in this space.

About Piccadilly Group: Piccadilly Group is the UK’s leading Test and Intelligence Agency dedicated to Financial Services, providing specialist skills, bespoke product development and expert consultancy knowledge across the entire test landscape.

Indian Buffet process for model selection in convolved multiple-output Gaussian processes, Dr Mauricio Álvarez, University of Sheffield, 4th of May, 15:00-16:00, MVB 1.06

Abstract: Multi-output Gaussian processes have received increasing attention during the last few years as a natural mechanism to extend the powerful flexibility of Gaussian processes to the setting of multiple output variables. The key point here is the ability to design kernel functions that allow exploiting the correlations between the outputs while fulfilling the positive-definiteness requirement for the covariance function. Alternatives for constructing these covariance functions include the linear model of coregionalization and process convolutions. Each of these methods demands the specification of the number of latent Gaussian processes used to build the covariance function for the outputs. We propose the use of an Indian Buffet process as a way to perform model selection over the number of latent Gaussian processes. This type of model is particularly important in the context of latent force models, where the latent forces are associated with physical quantities like protein profiles or latent forces in mechanical systems. We use variational inference to estimate posterior distributions over the variables involved and show examples of the model performance over artificial data and several real-world datasets.
Bio: Dr. Álvarez received a degree in Electronics Engineering (B.Eng.) with Honours from Universidad Nacional de Colombia in 2004, a master's degree in Electrical Engineering (M.Eng.) from Universidad Tecnológica de Pereira, Colombia, in 2006, and a Ph.D. degree in Computer Science from The University of Manchester, UK, in 2011. After finishing his Ph.D., Dr. Álvarez joined the Department of Electrical Engineering at Universidad Tecnológica de Pereira, Colombia, where he was a faculty member until December 2016. Since January 2017, Dr. Álvarez has been a Lecturer in Machine Learning at the Department of Computer Science of the University of Sheffield, UK.

Dr. Álvarez is interested in machine learning in general, its interplay with mathematics and statistics, and its applications. In particular, his research interests include probabilistic models, kernel methods, and stochastic processes. He works on the development of new approaches and the application of Machine Learning in areas that include applied neuroscience, systems biology, and humanoid robotics.
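
As background to the abstract, the sketch below builds a valid multi-output covariance with the simplest instance of the linear model of coregionalization (the intrinsic coregionalization model), where the joint covariance is the Kronecker product of a positive semi-definite coregionalization matrix B and a latent-process kernel k. The sizes, values and the use of a single latent process are illustrative assumptions; the talk concerns selecting the number of latent processes, which this toy example simply fixes at one.

```python
# Illustrative sketch: an intrinsic coregionalization model (ICM) covariance
# for two outputs, the simplest case of the linear model of coregionalization
# mentioned in the abstract. Sizes and values are assumptions.
import numpy as np

def rbf(X, lengthscale=1.0):
    d2 = (X[:, None] - X[None, :])**2
    return np.exp(-0.5 * d2 / lengthscale**2)

X = np.linspace(0, 5, 20)            # shared inputs for both outputs
k = rbf(X)                           # latent-process covariance, 20 x 20
W = np.array([[1.0], [0.6]])         # one latent process, two outputs
B = W @ W.T + np.diag([0.05, 0.05])  # PSD coregionalization matrix, 2 x 2
K_multi = np.kron(B, k)              # 40 x 40 joint covariance over both outputs
# Positive semi-definiteness of B and k guarantees K_multi is a valid kernel.
print(np.all(np.linalg.eigvalsh(K_multi) > -1e-9))
```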

Probabilistic and Bayesian deep learning, Dr Andreas Damianou, Amazon Research, 15th of May, 14:00-15:00, MVB 1.06

Abstract: In this talk I will first motivate the need for introducing a probabilistic and Bayesian flavour to “traditional” deep learning approaches. For example, a Bayesian treatment of neural network parameters is an elegant way of avoiding overfitting and “heuristics” in optimization, while providing a solid mathematical grounding. Moreover, introducing ideas from Bayesian uncertainty treatment and probabilistic graphical models allows for a higher level of reasoning, which is needed for solving non-perceptual tasks such as transfer/unsupervised learning and decision making. In the talk I will highlight the deep Gaussian process family of approaches, which can be seen as non-parametric Bayesian neural networks. Unfortunately, combining deep nets with probabilistic reasoning is challenging, because uncertainty needs to be propagated across the neural network during inference. This comes in addition to the (easier) propagation of gradients (e.g. back-propagation). Therefore, as part of my talk I will discuss approximation methods to tackle this computational issue, such as variational, amortized and black-box inference.
Bio: Andreas Damianou completed his PhD studies under Neil Lawrence in Sheffield, and subsequently pursued a post-doc at the intersection of machine learning and bio-inspired robotics. He has now moved to industry as a machine learning scientist, based in Cambridge, UK. His area of interest is machine learning, and more specifically: Bayesian non-parametrics (focusing on both data efficiency and scalability), representation learning, uncertainty quantification, and big data. In recent work he seeks to bridge the gap between representation learning and decision-making, with applications in robotics and data science pipelines.
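
To make the uncertainty-propagation point in the abstract concrete, here is a minimal Monte Carlo sketch of a Bayesian treatment of a single network layer: instead of point-estimated weights, each weight has a (here, fixed Gaussian) distribution, and predictions are obtained by averaging over sampled weights. The architecture, the fixed weight distributions and the sample count are illustrative assumptions; this is not the deep Gaussian process machinery of the talk.

```python
# Minimal illustration (not the speaker's method): predicting with a one-layer
# "Bayesian" network by Monte Carlo averaging over sampled weights, so that
# weight uncertainty is propagated to the output. The weight distributions are
# fixed here; variational inference would instead learn their parameters.
import numpy as np

rng = np.random.default_rng(1)
D_in, D_out, S = 3, 1, 200            # input dim, output dim, MC samples
x = np.array([0.5, -1.2, 2.0])        # a single test input

w_mean = rng.normal(size=(D_in, D_out))   # assumed posterior mean
w_std = 0.3 * np.ones((D_in, D_out))      # assumed posterior std

preds = []
for _ in range(S):
    w = w_mean + w_std * rng.normal(size=w_mean.shape)  # sample weights
    preds.append(np.tanh(x @ w))                        # forward pass
preds = np.array(preds)

# Predictive mean and a simple spread reflecting the weight uncertainty.
print(preds.mean(axis=0), preds.std(axis=0))
```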

Deep probabilistic models for weakly supervised structured prediction, Diane Bouchacourt, University of Oxford, 8th of June, 15:00-16:00, MVB 1.06

Abstract: Structured prediction refers to the prediction of a structured, complex output given an input value. This task is challenging as there is often uncertainty on the output. In this setting, deep probabilistic networks are powerful tools to learn the distribution of the structure to predict. Such models parametrise the distribution of the data with a neural network. This allows reasoning under uncertainty and decision making, according to the task at hand. However, while we can easily gather a large amount of data observations, retrieving ground-truth values of the output to predict is costly, if not infeasible. In this talk, I will present how to employ deep probabilistic models to perform structured prediction for computer vision tasks, both in the supervised and weakly supervised settings, when only part of the ground-truth labelling is available.

Bio: Diane Bouchacourt is a PhD student in the Optimization for Vision and Learning (OVAL) group at the Department of Engineering Science at the University of Oxford. She works under the co-supervision of M. Pawan Kumar at the University of Oxford and Sebastian Nowozin at Microsoft Research Cambridge. Her research focuses on developing novel optimization algorithms and deep probabilistic models for structured output prediction. She is currently focusing on unsupervised and supervised learning of generative models based on neural networks.
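
As a toy illustration of the phrase "parametrise the distribution of the data with a neural network" from the abstract, the sketch below uses a tiny fixed-weight network to map an input to the probabilities of a categorical distribution over a handful of output labels, from which uncertain predictions can be sampled. The weights, sizes and labels are assumptions for illustration and do not come from the speaker's models.

```python
# Toy illustration of a neural network parametrising an output distribution:
# a small fixed-weight network maps an input to categorical probabilities over
# three possible output labels, which can then be sampled under uncertainty.
# All weights and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))

def predictive_distribution(x):
    h = np.tanh(x @ W1)                # hidden layer
    logits = h @ W2
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

x = rng.normal(size=4)
p = predictive_distribution(x)
samples = rng.choice(3, size=5, p=p)   # sample outputs under uncertainty
print(p, samples)
```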