The Free Energy Principle attempts to model the emergence of intelligent behaviour as a self-organising process of reducing the free energy of an agent. We are interested in coupling this theory with advances in Deep Learning and Variational Inference. We would like to investigate how new developments in generative models can lead to implementations of cognitive architectures that are trained and guided by the Free Energy Principle.
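Concretely, in the variational formulation this coupling relies on, an agent holding an approximate posterior q(z) over the hidden causes z of its sensory data x minimises the variational free energy (standard notation, shown here for orientation):

```latex
F[q] = \mathbb{E}_{q(z)}\!\left[\ln q(z) - \ln p(x, z)\right]
     = D_{\mathrm{KL}}\!\left(q(z)\,\|\,p(z \mid x)\right) - \ln p(x)
     \;\ge\; -\ln p(x)
```

Minimising F both improves the agent's model of its sensations (tightening the bound on surprise, −ln p(x)) and coincides with maximising the evidence lower bound used to train modern deep generative models, which is what makes the coupling with Deep Learning and Variational Inference natural.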
Verification is the domain of computer science that aims at checking and certifying computer systems. Computer systems are used at all levels of society and in people's lives, and it is paramount to verify that they behave the way they were designed to and the way we expect (think, e.g., of a plane's autopilot or a self-driving car).
Unfortunately, the verification of complex systems runs into limits: there is no universal, fully automated way to verify every system, and one needs to find a good trade-off between the constraints of time, memory space and accuracy, which are often difficult to reconcile.
The aim of this project is to apply learning techniques in verification to improve the efficiency of algorithms which certify computer systems and to compute fast accurate models for real-life systems.
More precisely, the project will focus on automata. Automata are one of the mathematical tools used in verification to model computer or real-life systems. Certifying these systems often boils down to running algorithms on the corresponding automata, and the efficiency of such algorithms usually depends on the size of the automaton considered. Minimizing automata is thus a paramount problem in verification, allowing large computer or real-life systems to be verified faster.
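For reference, classical DFA minimization already works by merging states that no input word can distinguish; a minimal partition-refinement sketch (Moore's algorithm; the function names and encoding are ours, for illustration only):

```python
def minimize(states, alphabet, delta, accepting):
    """Moore-style partition refinement: merge states that cannot be
    distinguished by any input word.  delta[s][a] is the successor of s on a."""
    # start from the accepting / non-accepting partition
    block = {s: (s in accepting) for s in states}
    while True:
        # a state's signature: its own block plus its successors' blocks
        sig = {s: (block[s],) + tuple(block[delta[s][a]] for a in alphabet)
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=repr))}
        new = {s: ids[sig[s]] for s in states}
        if len(set(new.values())) == len(set(block.values())):
            return new   # stable: states sharing a class are equivalent
        block = new
```

Since refinement only ever splits blocks, the partition is stable as soon as the number of classes stops growing; states mapped to the same class can be merged.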
Our objective is to study the minimization of some streaming models of quantitative automata using machine learning techniques. More precisely, we are interested in cost register automata over the tropical semiring. These automata are streaming models, in the sense that the input is not stored but received as a stream of data and dealt with on the fly, making them particularly suitable for the treatment of big data. They are also suited to optimisation problems: minimizing the resource consumption of a system or computing the worst-case running time of a program, for example.
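To give a flavour of the model (a toy sketch, not a formal CRA definition): registers are updated on the fly using only the tropical operations, min and +, with one update per streamed symbol. The two-register example below maintains the total cost of the stream and the minimum cost over its non-empty suffixes:

```python
INF = float("inf")

def run_cra(stream, cost):
    """Toy two-register machine in the spirit of a cost register
    automaton over the tropical semiring (min, +): registers are
    updated per streamed symbol using only min and +, and the input
    itself is never stored."""
    total = 0        # tropical product (= sum) of all costs seen
    best = INF       # min total cost over non-empty suffixes seen
    for symbol in stream:
        c = cost[symbol]
        total = total + c
        best = min(best + c, c)   # extend the best suffix or restart
    return total, best
```

Memory use is constant in the length of the stream, which is exactly what makes such models attractive for big-data settings.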
This project aims at adapting Angluin's classic L* algorithm, which learns automata and has been extended to weighted automata, to learn such optimisation models of automata.
You will be developing associative/reinforcement learning models (AL) of the brain and exploring how such models can be applied to build AI technology and solve AI problems.
AL is a fundamental cognitive process by which animals acquire causal information about events in their environment, enabling them to predict relevant cues and direct their actions towards goals. These mechanisms are core to bottom-up approaches to natural intelligence and thus key to the development of AI algorithms and architectures intended to emulate intelligent behaviour.
Scaling up to human cognitive performance is the major challenge for AL. Higher-order cognition is often assumed to entail the ability to manipulate symbolic representations whose semantics are untied to the physical world. We contend that incorporating associative mechanisms within a hierarchical representational process using CNNs may be a step in that direction.
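As a minimal illustration of the associative mechanisms in question, the Rescorla-Wagner model updates cue-outcome association strengths by a shared prediction error (parameter values here are arbitrary):

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner learning: every cue present on a trial is
    updated by the same prediction error.
    trials: iterable of (cues_present, outcome_present)."""
    V = {}                                    # associative strengths
    for cues, outcome in trials:
        predicted = sum(V.get(c, 0.0) for c in cues)
        error = (lam if outcome else 0.0) - predicted
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V
```

The shared-error update reproduces classic phenomena such as blocking: a cue pretrained to predict the outcome prevents a newly added cue from acquiring strength, which is the kind of behaviour a hierarchical, CNN-based extension would need to preserve.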
Anomaly detection in computer vision is defined as an unsupervised learning task where the goal is to identify abnormal patterns in images or videos. These patterns are by definition infrequent, and anomalies are rarely annotated. In recent years, AI-based techniques such as deep generative models have started to gain popularity in this field [Kiran et al., J. Imaging 2018, 4(2), 36]. One approach consists in using Variational Autoencoders (VAE) to learn how to reconstruct a dataset of normal images and in assuming that the model will not properly reconstruct an abnormal test image.
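The reconstruction-error logic can be sketched end to end with a linear autoencoder (PCA) standing in for the VAE; the data below is synthetic and the threshold rule is only one of several options:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" data: points near a 2-D subspace of a 16-D space
# (a stand-in for a dataset of artefact-free images).
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 16)) \
         + 0.05 * rng.normal(size=(500, 16))

# "Train" a linear autoencoder: project onto the top 2 principal components.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
W = Vt[:2]

def recon_error(x):
    z = (x - mean) @ W.T          # encode
    x_hat = z @ W + mean          # decode
    return float(np.linalg.norm(x - x_hat))

# Threshold chosen on the normal data; higher errors are flagged anomalous.
threshold = np.quantile([recon_error(x) for x in normal], 0.99)
```

A test image lying off the learned manifold reconstructs poorly and exceeds the threshold; a trained VAE plays the same role with a nonlinear manifold and a probabilistic reconstruction score.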
The aim of this project is to use these approaches to develop fast and robust quality control and artefact detection techniques for medical images, with a special focus on cardiac MR images. For this application, the training set for the deep generative model would consist of uncorrupted cardiac images, and the testing set would be designed to include a variety of artefacts (e.g. motion, wrap-around and susceptibility artefacts). Automated techniques for artefact detection will have an important and tangible impact, both in the clinic (to identify corrupted scans in real time and trigger new acquisitions) and in medical research (to exclude problematic images from dataset-wide analyses).
Starting with the Turing Test, several AI challenges have been proposed to establish whether machines can simulate (or replicate) intelligence. Often, such challenges refer to reasoning tasks (deductive and common-sense reasoning, reasoning under uncertainty, planning …).
Recently, a new approach has been proposed that targets cognitive processes typically considered orthogonal to problem solving and decision making. It is argued that, independently of rational thought, creativity is what makes us human and an essential component of so-called Artificial General Intelligence, and that we regard creative people as gifted with (a special kind of) intelligence. Ergo, one of AI’s objectives should be to build (abstract) machines that are able to show creative skills. Work on replicating creativity has so far focused on painting and music.
In this project, we are interested in replicating and generating poetry. In particular, Deep Learning architectures and algorithms have been successfully applied to Natural Language Processing (NLP) tasks such as automated translation, sentiment analysis and text mining. We are interested in developing Deep Learning techniques to recreate poetry and to create poetry that would pass a “poetry competition” following the “Ada Lovelace” test by Marcus du Sautoy.
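As a trivial, non-deep baseline for the generative side of the task, a bigram Markov chain already captures the "learn from a corpus, then sample" loop that neural language models refine (the corpus below is made up for illustration):

```python
import random

# Tiny corpus, invented for illustration only.
corpus = ("the rose is red the violet blue "
          "the night is dark the morning new").split()

# Learn bigram transitions: which words may follow which.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(seed, n_words=8, rng_seed=0):
    """Sample a line by walking the bigram chain from `seed`."""
    rng = random.Random(rng_seed)
    words = [seed]
    while len(words) < n_words:
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(rng.choice(nxt))
    return " ".join(words)
```

Deep architectures replace the bigram table with learned distributed representations and long-range context, which is what makes metre, rhyme and semantic coherence attainable.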
Currently, state-of-the-art models such as Mask R-CNN perform well on challenging benchmark datasets such as MS COCO, but a great deal of transfer learning and ad-hoc engineering is required to solve specialised problems such as early tumour detection, lesion segmentation, or the detection of anomalies in X-ray and MRI images. Challenges include a high degree of similarity between objects and background, coarse object contours, cluttered backgrounds, overlapping objects (partial occlusion), objects of various sizes, etc.
Image-to-image translation consists in translating an image from one input domain to another (e.g. from grayscale to colour). Many AI-based approaches have been published in the literature, mostly based on generative adversarial networks (GANs). In medical imaging, one of the investigated applications is modality translation (e.g. from MR to CT images [Nie et al., IEEE Trans Biomed Eng. 2018, 65(12): 2720-2730]). Other studies have focused on multi-input image translation, where for instance a set of brain MR images acquired with different modalities is translated into a different one [Joyce et al., MICCAI 2017, Part III, LNCS 10435, 347-355]. However, it is still unclear whether these techniques are reliable enough for clinical applications.
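For reference, such translation GANs are usually trained with a conditional adversarial loss plus a pixel-wise reconstruction term; schematically (a pix2pix-style objective, with details varying across papers):

```latex
\min_G \max_D \;
\mathbb{E}_{x,y}\!\left[\log D(x, y)\right]
+ \mathbb{E}_{x}\!\left[\log\!\big(1 - D(x, G(x))\big)\right]
+ \lambda \, \mathbb{E}_{x,y}\!\left[\lVert y - G(x) \rVert_1\right]
```

where x is the source-domain image, y the target-domain image, G the generator, D the conditional discriminator, and λ trades off adversarial realism against pixel-level fidelity.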
In this project, the application of GAN-based image translation to cardiac MR perfusion imaging will be explored. Perfusion MR techniques, specifically first-pass perfusion and late gadolinium enhancement (LGE), are currently the reference methods to assess the presence of cardiac perfusion abnormalities (i.e. ischemic and infarcted myocardial regions). However, they require the injection of an external contrast medium, which has been associated with rare but life-threatening side effects. Recent studies have suggested that T1 mapping (a contrast-free MR modality) can produce images sensitive to perfusion defects [Liu et al., JACC Cardiovasc Imaging. 2016, 9(1): 27-36]. The aim of this project is to use deep generative models to perform image translation from T1 mapping to LGE images (potentially in a multi-input setting taking into account other modalities such as cardiac tagging or cine) for contrast-free detection of cardiac perfusion defects. A successful model would have a high impact within the field of cardiac imaging, and could potentially lead to a change in clinical perfusion imaging.
Artificial agents learn best when the task they are trying to solve is adapted to their current capabilities. We have developed a simulator that makes it easy to generate Reinforcement Learning environments and tasks for an agent. We would like to study whether the characteristics of an environment can be captured in a vector representation that can then be used to generate different variations of that environment. If so, we would be able to generate a curriculum for an agent, tailored to its capabilities and progression.
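One simple instantiation of this idea, with every name and design choice ours: score candidate environment vectors by a smoothed estimate of the agent's recent success rate, and propose the candidate closest to an intermediate difficulty:

```python
import numpy as np

def sample_curriculum(history, n_candidates=64, target=0.5, seed=0):
    """Hypothetical sketch: pick the candidate environment vector whose
    estimated success rate is closest to `target`, i.e. a task that is
    neither trivial nor impossible for the current agent.
    history: list of (env_vector, success in {0.0, 1.0})."""
    rng = np.random.default_rng(seed)
    X = np.array([h[0] for h in history])
    y = np.array([h[1] for h in history], dtype=float)
    cands = rng.uniform(0.0, 1.0, size=(n_candidates, X.shape[1]))
    # Kernel-smoothed success estimate: a stand-in for a learned
    # difficulty model over the environment representation.
    d = np.linalg.norm(cands[:, None, :] - X[None, :, :], axis=-1)
    w = np.exp(-d / 0.1)
    pred = (w * y).sum(axis=1) / w.sum(axis=1)
    return cands[np.argmin(np.abs(pred - target))]
```

Resampling after each batch of episodes makes the proposed environments track the agent's progression, which is the curriculum effect we want to study.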
Deep Learning has been labelled a “black box”: given the complexity of the multi-layered architectures in which it is embedded and its dependence on the inter-relation of a large number of hyperparameters, it is difficult to establish which parts of a Deep Learning model are responsible for a given output. This raises questions of interpretability and trustworthiness, and is the focus of research in the area of Explainable AI. A plethora of approaches have been proposed to make Deep Learning explainable, from data visualization to knowledge distillation. This project looks into the mathematical foundations of Deep Learning in an attempt to understand how such complex systems operate.
Our assumption is that Deep Learning filters information according to variational principles akin to those present in physical systems, such as the Principle of Least Action. As such, these systems reflect symmetries and underlying conservative and dissipative quantities, whose analysis may shed light on their workings. We propose to use abstract algebra (groups, groupoids, categories) to analyse Deep Learning architectures.
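A small, checkable instance of the symmetry viewpoint: a convolutional layer's linear map is equivariant to translations, which can be verified directly (toy 1-D example with circular boundary conditions):

```python
import numpy as np

def circ_conv(x, w):
    """Circular cross-correlation: the core linear map of a conv layer."""
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
w = np.array([0.5, -1.0, 0.25])

# Equivariance: shifting the input shifts the output by the same amount.
shifted_then_conv = circ_conv(np.roll(x, 2), w)
conv_then_shifted = np.roll(circ_conv(x, w), 2)
```

Group-theoretically, the convolution commutes with the action of the cyclic group on inputs; the algebraic programme of this project aims to generalise and exploit statements of this kind.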
Deep Learning (DL) has been labelled a “black box” approach to AI because, given the inherent complexity of deep networks, it is difficult to ascertain how they function and which parts of the web of interconnections that forms a deep network should be credited for any given output. Explainable AI (XAI) techniques have been proposed to make DL understandable, accountable and trustworthy, yet most approaches rely either on surrogate models that do not guarantee global interpretability or on specific visual analytics based on reverse engineering. In this project we aim at developing a fundamental understanding of how AI developers can use a psychologically grounded theory such as associative learning to build DL architectures and processes that are interpretable by design. To achieve and validate this goal, the project will realise this conceptual scheme as an end-to-end software prototype implementing the envisioned framework.
The impressive success of new Deep Learning technologies in classification (e.g., image processing in health, wellbeing and defence), prediction (e.g., time series in market and weather forecasting) and optimization tasks (e.g., game playing, energy and transport) comes with great responsibilities. AI is now able to identify terrorist and fraudulent activity, pilot military drones, drive vehicles autonomously, and classify types of brain tumours with great accuracy. In such critical applications, it is paramount that ethical, legal and social factors are taken into account, so as to guarantee that no harm is caused in the gathering, generation and preparation of data (GDPR compliance, bias avoidance, etc.), in the processing of the information, or in the application, dissemination and storage of results. In this project we aim at investigating formal methods that guarantee that the implementation of Deep Learning methods abides by ethical and legal standards and that the associated risks (e.g., bias and discrimination) are minimised.