The Intelligent Systems Group periodically organises academic research seminars, which take place in the Merchant Venturers Building, normally on Mondays. We also organise problem workshops with companies and other interested parties. These are talks by industrialists and organisations in areas such as finance and healthcare who have an application involving machine learning or computational statistics, who are keen to establish a collaborative link with ISL members, and who have typically indicated that they wish to co-invest in support of this objective. Because the latter are not our regular academic seminars, they can be much shorter than the usual 50 minutes and typically consist of a presentation of the topic of interest followed by an informal discussion of the data available. Given the nature of these talks, no abstract is given and the title may be omitted. ISL members, affiliates and UoB academic staff from other faculties are welcome to attend, and we are always keen to facilitate developing contacts.

Note that the time and location of the seminars vary from week to week.


“Event reasoning for transport video surveillance”. Huiyu Zhou, University of Leicester. 23rd January 2018, 13:00 – 14:00, QB 1.11 (Queen’s Building).

Abstract: The aim of transport video surveillance is to provide robust security camera solutions for mass transit systems, ports, subways, city buses and train stations. As is well known, numerous security threats exist within the transportation sector, including crime, harassment, liability suits and vandalism. Possible solutions have been directed at insulating transportation systems from security threats and making them safer for passengers. In this talk, I will introduce our solution to several challenges in transport, in particular on city buses. I will structure this talk into the following four sections: (1) The techniques that we have developed for automatically extracting and selecting features from face images for robust age recognition, (2) An effective combination of facial and full-body measurements for gender classification, (3) Human tracking and trajectory clustering approaches to handle challenging circumstances such as occlusions and pose variations, and (4) Event reasoning in smart transport video surveillance.

Bio: Dr. Huiyu Zhou obtained a Bachelor of Engineering degree in Radio Technology from Huazhong University of Science and Technology, China, and a Master of Science degree in Biomedical Engineering from the University of Dundee, United Kingdom. He was then awarded a Doctor of Philosophy degree in Computer Vision from Heriot-Watt University, Edinburgh, United Kingdom. Dr. Zhou is presently a Reader in the Department of Informatics, University of Leicester, United Kingdom. He has published widely in the field. He was the recipient of the “CVIU 2012 Most Cited Paper Award” and the “ICPRAM 2016 Best Paper Award”, and was shortlisted for the “ICPRAM 2017 Best Student Paper Award” and the “MBEC 2006 Nightingale Prize”. Dr. Zhou serves as the Editor-in-Chief of “Recent Advances in Electrical & Electronic Engineering” and Associate Editor of “IEEE Transactions on Human-Machine Systems”, and is on the Editorial Boards of several refereed journals. He is a member of the Technical Committee on “Information Assurance & Intelligent Multimedia-Mobile Communication” in the IEEE SMC Society, and of the “Robotics Task Force” and “Biometrics Task Force” of the Intelligent Systems Applications Technical Committee, IEEE Computational Intelligence Society. He has given over 50 invited talks at international conferences, industry and universities, and has served as a chair for 30 international conferences and workshops. His research work has been or is being supported by UK EPSRC, EU ICT, MRC, Innovate UK, Leverhulme Trust, Invest NI and industry.

“Modeling disease propagation in networks: source-finding and influence maximization”. Po-Ling Loh, University of Wisconsin-Madison. 29th January 2018, 12:00 – 13:00, QB F101c (Queen’s Building).

Abstract: We present several recent results concerning stochastic modeling of disease propagation over a network. In the first setting, nodes are infected one at a time, starting from a single infected individual, and the goal is to infer the source of the infection based on a snapshot of infected individuals. We show that if the underlying graph is a tree and possesses a certain regular structure, it is possible to construct confidence sets for the diffusion source with size independent of the number of infected nodes. Furthermore, the confidence sets we construct possess an attractive property of “persistence,” meaning they eventually settle down as the disease spreads over the network. In the second setting, nodes are infected in waves according to linear threshold or independent cascade models. We establish upper and lower bounds for the influence of a subset of nodes in the network, where the influence is defined as the expected number of infected nodes at the conclusion of the epidemic. We quantify the gap between our upper and lower bounds in the case of the linear threshold model and illustrate the gains of our upper bounds for independent cascade models in relation to existing results. Importantly, our lower bounds are monotonic and submodular, implying that a greedy algorithm for influence maximization is guaranteed to produce a maximizer within a 1-1/e factor of the truth. This is joint work with Justin Khim and Varun Jog.
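The greedy guarantee mentioned in the abstract follows from the spread estimate being monotone and submodular. As a rough illustration of that idea only (the toy graph, edge probability and Monte Carlo estimator below are our own assumptions, not the speakers’ construction), a greedy seed-selection sketch under the independent cascade model might look like:

```python
import random

def simulate_ic(graph, seeds, p=0.1, trials=200, rng=None):
    """Monte Carlo estimate of the expected spread under the independent
    cascade model: each newly infected node infects each of its
    neighbours once, independently, with probability p."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.1):
    """Greedily maximise the (monotone, submodular) estimated spread;
    greedy on such a function is within a (1 - 1/e) factor of optimal."""
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: simulate_ic(graph, seeds + [v], p))
        seeds.append(best)
    return seeds
```

The (1 − 1/e) bound applies to maximising the estimated influence function; the talk’s contribution concerns provable upper and lower bounds on the true influence itself.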



Manifolds of Shape via Gaussian Process Latent Variable Models, Dr. Neill Campbell, University of Bath, 2nd of February, 15:00-16:00, MVB 1.06

Abstract: In this talk we will look at Gaussian Processes and Latent Variable Models, in particular focusing on how they may be used to learn generative, probabilistic models of shape. As well as looking at some of the theory behind the models, I will show a number of real-world applications of such models within the domains of computer vision and graphics. I will also provide details of the challenges in this area and some early results of new work.
Bio: Neill Campbell is a lecturer in Computer Vision, Graphics and Machine Learning in the Department of Computer Science at the University of Bath. He also holds an Honorary Lecturer position in the Virtual Environments and Computer Graphics Group in the Department of Computer Science at University College London, where he was formerly a Research Associate working with Jan Kautz and Simon Prince on synthesizing and editing photorealistic visual objects, funded by the EPSRC. Prior to this, Neill was a Research Associate in the Computer Vision Group of the Machine Intelligence Laboratory, in the Department of Engineering at the University of Cambridge, working on the EU Hydrosys Project led by Ed Rosten. Neill completed his PhD in the Computer Vision Group at the University of Cambridge under the supervision of Roberto Cipolla and the guidance of George Vogiatzis and Carlos Hernández.

Prof. Andrea Sgarro, University of Trieste, 9th of February, 14:00-15:00, MVB 1.06

Abstract: Back in 1967 the Croat linguist Muljacic used a fuzzy generalization of the Hamming distance between binary strings to classify Romance languages. In 1956 Claude Shannon had introduced the notion of codeword distinguishability in zero-error information theory. Distance and distinguishability are subtly different notions, even if, with distances such as those usually met in coding theory (with the exception of zero-error information theory, which is definitely non-metric), the need for string distinguishabilities evaporates, since the distinguishability turns out to be an obvious and trivial function of the distance. Fuzzy Hamming distinguishabilities derived from Muljacic distances, instead, are not that trivial, and must be considered explicitly. They are quite easy to compute, however, and we show how they could be applied in coding theory to channels with erasures and blurs. The new tool of fuzzy Hamming distinguishability appears to be quite promising for extending Muljacic’s approach from linguistic classification to linguistic evolution.
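To give a flavour of the fuzzy Hamming distance the abstract refers to, here is a minimal sketch under one common reading (strings as vectors of membership grades in [0, 1], each position contributing the absolute difference); the distinguishability discussed in the talk is a separate, non-trivial notion not reproduced here:

```python
def muljacic_distance(x, y):
    """Fuzzy generalisation of the Hamming distance: the strings are
    vectors of membership grades in [0, 1] rather than crisp bits, and
    each position contributes |x_i - y_i| to the total distance."""
    assert len(x) == len(y), "strings must have equal length"
    return sum(abs(a - b) for a, b in zip(x, y))

# On crisp binary strings this recovers the usual Hamming distance:
# muljacic_distance([1, 0, 1, 1], [1, 1, 0, 1]) == 2
```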
Bio: Andrea Sgarro is full professor of Theoretical Computer Science at the University of Trieste. His research interests include information theory and codes, cryptography, bioinformatics, soft computing, management of incomplete knowledge and computational linguistics. He is responsible for the scientific section of the Circolo della Cultura e delle Arti of Trieste, and is quite active in scientific communication: his books Secret Codes, Mondadori, and Cryptography, Muzzio, for the first time have introduced cryptology to an Italian-speaking audience. In his free time he enjoys languages, of which he speaks a dozen with varying degrees of competence, and plays the one-keyed transverse baroque flute.

CANCELLED Prof. Mark Girolami, University College London, 23rd of February, 14:00-15:00, MVB 1.06

Abstract: Ambitious mathematical models of highly complex natural phenomena are challenging to analyse, and more and more computationally expensive to evaluate. This is a particularly acute problem for many tasks of interest and numerical methods will tend to be slow, due to the complexity of the models, and potentially lead to sub-optimal solutions with high levels of uncertainty which needs to be accounted for and subsequently propagated in the statistical reasoning process. This talk will introduce our contributions to an emerging area of research defining a nexus of applied mathematics, statistical science and computer science, called “probabilistic numerics”. The aim is to consider numerical problems from a statistical viewpoint, and as such provide numerical methods for which numerical error can be quantified and controlled in a probabilistic manner. This philosophy will be illustrated on problems ranging from predictive policing via crime modelling to computer vision, where probabilistic numerical methods provide a rich and essential quantification of the uncertainty associated with such models and their computation.
Bio: Mark Girolami is Professor of Statistics in the Department of Statistical Science at Imperial College London. Prior to joining Imperial College, Mark held Chairs in Computing and Inferential Science at the University of Glasgow, and in Statistics at UCL and subsequently Warwick University. In 2011 he was elected to the Fellowship of the Royal Society of Edinburgh, when he was also awarded a Royal Society Wolfson Research Merit Award. He was one of the founding Executive Directors of the Alan Turing Institute for Data Science from 2015 to 2016. He is an EPSRC Established Career Research Fellow and Director of the Lloyd’s Register Foundation-Turing Programme on Data Centric Engineering of The Alan Turing Institute. He is currently an Associate Editor for J. R. Statist. Soc. C, Journal of Computational and Graphical Statistics, and Statistics & Computing, and Area Editor for Pattern Recognition Letters. He is a member of the Research Section of the Royal Statistical Society.

Problem workshop with Piccadilly Group, 23rd of March, 15:00-16:00, MVB 1.06

Abstract: In this session, we’ll hear from Piccadilly Group’s CEO, Dan Hooper, and CTO, Adam Smith, who will outline the underlying issues and challenges in the management of software testing and technology delivery within banking, and how they see AI addressing many of these challenges.

Problem Statement: The group discussion will focus on the practical challenges of developing artificial intelligence and machine learning for use in this domain.

About Piccadilly Group: Piccadilly Group is the UK’s leading Test and Intelligence Agency dedicated to Financial Services, providing specialist skills, bespoke product development and expert consultancy knowledge across the entire test landscape.

Indian Buffet process for model selection in convolved multiple-output Gaussian processes, Dr Mauricio Álvarez, University of Sheffield, 4th of May, 15:00-16:00, MVB 1.06

Abstract: Multi-output Gaussian processes have received increasing attention during the last few years as a natural mechanism to extend the powerful flexibility of Gaussian processes to the setup of multiple output variables. The key point here is the ability to design kernel functions that allow exploiting the correlations between the outputs while fulfilling the positive definiteness requisite for the covariance function. Alternatives to construct these covariance functions are the linear model of coregionalization and process convolutions. Each of these methods demands the specification of the number of latent Gaussian processes used to build the covariance function for the outputs. We propose the use of an Indian Buffet process as a way to perform model selection over the number of latent Gaussian processes. This type of model is particularly important in the context of latent force models, where the latent forces are associated with physical quantities like protein profiles or latent forces in mechanical systems. We use variational inference to estimate posterior distributions over the variables involved and show examples of the model performance over artificial data and several real-world datasets.
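As background to the covariance constructions the abstract names, here is a minimal numpy sketch of the linear model of coregionalization (the kernel choice, number of latent processes and parameter values are illustrative assumptions; the talk’s model additionally places an Indian Buffet process prior over the latent processes):

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def lmc_covariance(X, coreg, lengthscales):
    """Linear model of coregionalization for D outputs observed at
    shared inputs X: K = sum_q kron(B_q, k_q(X, X)), where each
    coregionalization matrix B_q = a_q a_q^T is rank one and positive
    semi-definite, so the full multi-output covariance is too."""
    n = X.shape[0]
    D = len(coreg[0])
    K = np.zeros((D * n, D * n))
    for a_q, ls in zip(coreg, lengthscales):
        B_q = np.outer(a_q, a_q)           # rank-one coregionalization matrix
        K += np.kron(B_q, rbf(X, X, ls))   # output-by-output block structure
    return K
```

Model selection here amounts to choosing how many terms appear in the sum, which is exactly the quantity the Indian Buffet process prior addresses.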
Bio: Dr. Álvarez received a degree in Electronics Engineering (B. Eng.) with Honours from Universidad Nacional de Colombia in 2004, a master’s degree in Electrical Engineering (M. Eng.) from Universidad Tecnológica de Pereira, Colombia, in 2006, and a Ph.D. degree in Computer Science from The University of Manchester, UK, in 2011. After finishing his Ph.D., Dr. Álvarez joined the Department of Electrical Engineering at Universidad Tecnológica de Pereira, Colombia, where he served as a faculty member until December 2016. In January 2017, Dr. Álvarez was appointed Lecturer in Machine Learning at the Department of Computer Science of the University of Sheffield, UK.

Dr. Álvarez is interested in machine learning in general, its interplay with mathematics and statistics, and its applications. In particular, his research interests include probabilistic models, kernel methods, and stochastic processes. He works on the development of new approaches and the application of Machine Learning in areas that include applied neuroscience, systems biology, and humanoid robotics.

Probabilistic and Bayesian deep learning, Dr Andreas Damianou, Amazon Research, 15th of May, 14:00-15:00, MVB 1.06

Abstract: In this talk I will first motivate the need for introducing a probabilistic and Bayesian flavour into “traditional” deep learning approaches. For example, Bayesian treatment of neural network parameters is an elegant way of avoiding overfitting and “heuristics” in optimization, while providing a solid mathematical grounding. Moreover, introducing ideas from Bayesian uncertainty treatment and probabilistic graphical models allows for a higher level of reasoning, which is needed for solving non-perceptual tasks such as transfer/unsupervised learning and decision making. In the talk I will highlight the deep Gaussian process family of approaches, which can be seen as non-parametric Bayesian neural networks. Unfortunately, combining deep nets with probabilistic reasoning is challenging, because uncertainty needs to be propagated across the neural network during inference. This comes in addition to the (easier) propagation of gradients (e.g. back-propagation). Therefore, as part of my talk I will discuss approximation methods that tackle this computational issue, such as variational, amortized and black-box inference.
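To illustrate what “propagating uncertainty across the network” can mean in the simplest case, here is a toy Monte Carlo sketch (the one-hidden-layer architecture, shapes and independent Gaussian weight posterior are our own assumptions for illustration, not the inference schemes covered in the talk):

```python
import numpy as np

def mc_forward(x, w_mean, w_std, n_samples=500, rng=None):
    """Propagate input x through a one-hidden-layer network whose
    weights carry an (assumed Gaussian) posterior, by Monte Carlo:
    sample weights, run the deterministic forward pass, and summarise
    the spread of the resulting outputs."""
    rng = rng or np.random.default_rng(0)
    W1_m, W2_m = w_mean
    W1_s, W2_s = w_std
    outputs = []
    for _ in range(n_samples):
        W1 = rng.normal(W1_m, W1_s)        # sample first-layer weights
        W2 = rng.normal(W2_m, W2_s)        # sample second-layer weights
        h = np.tanh(x @ W1)                # deterministic forward pass
        outputs.append(h @ W2)
    outputs = np.stack(outputs)
    return outputs.mean(0), outputs.std(0)  # predictive mean and spread
```

Variational and amortized methods replace this brute-force sampling with cheaper, structured approximations, which is precisely the computational issue the abstract highlights.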
Bio: Andreas Damianou completed his PhD studies under Neil Lawrence in Sheffield, and subsequently pursued a post-doc at the intersection of machine learning and bio-inspired robotics. He has now moved to industry as a machine learning scientist, based in Cambridge, UK. His area of interest is machine learning, and more specifically: Bayesian non-parametrics (focusing on both data efficiency and scalability), representation learning, uncertainty quantification and big data. In recent work he seeks to bridge the gap between representation learning and decision-making, with applications in robotics and data science pipelines.

Deep probabilistic models for weakly supervised structured prediction, Diane Bouchacourt, University of Oxford, 8th of June, 15:00-16:00, MVB 1.06

Abstract: Structured prediction refers to the prediction of a structured, complex output given an input value. This task is challenging as there is often uncertainty on the output. In this setting, deep probabilistic networks are powerful tools to learn the distribution of the structure to predict. Such models parametrise the distribution of the data with a neural network. This allows reasoning under uncertainty and decision making, according to the task at hand. However, while we can easily gather a large amount of data observations, retrieving ground-truth values of the output to predict is costly, if not infeasible. In this talk, I will present how to employ deep probabilistic models to perform structured prediction for computer vision tasks, both in the supervised setting and in the weakly supervised setting when only part of the ground-truth labeling is available.

Bio: Diane Bouchacourt is a PhD student in the Optimization for Vision and Learning (OVAL Group) at the Department of Engineering Science at University of Oxford. She works under the co-supervision of M Pawan Kumar at the University of Oxford and Sebastian Nowozin at Microsoft Research Cambridge. Her research focuses on developing novel optimization algorithms and deep probabilistic models for structured output prediction. She is currently focusing on unsupervised and supervised learning of generative models based on neural networks.