CitAI Seminars


Organisers: Giacomo Tarroni and Alex Ter-Sarkisov


If you wish to attend via Zoom, please contact the Seminar Organisers


  • Wednesday, 23-02-2022 (Zoom; 16:30): Seminar by Melanie Mitchell (Santa Fe Institute).
    Why AI is Harder Than We Think
    Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.

  • Wednesday, 09-03-2022 (Zoom; 17:30): Seminar by Stefano Ghirlanda (Brooklyn College, City University of New York).
    To be announced

  • Wednesday, 06-04-2022 (Zoom; 16:30): Seminar by Carles Sierra (Artificial Intelligence Research Institute, CSIC).
    To be announced

Term 2 2021-22

  • Wednesday, 26-01-2022 (Zoom; 16:30): Seminar by David Filliat (ENSTA Paris).
    Improving data efficiency for machine learning in robotics
    Machine learning is a key technology for robotics and autonomous systems, both for sensory processing and for control. While learning on large datasets has led to impressive applications in perception, applications in control still present many challenges, as learning to control a physical system remains slow and brittle. Reducing the required number of interactions with the system is therefore a key aspect of the progress of machine learning applied to robotics. We will present different approaches that can be used for this purpose, in particular approaches for State Representation Learning that can help improve data efficiency in Reinforcement Learning.

    To watch this seminar video recording, click the thumbnail below


Term 1 2021-22

  • Wednesday, 08-12-2021 (Zoom; 16:30): Seminar by Ira Ktena (DeepMind).
    Graph representations and algorithmic transparency in life sciences
    Graphs are omnipresent in life sciences. From the molecular structures of drug compounds to the cell signaling networks and brain connectomes, leveraging graph structural information can unlock performance improvements and provide insights for various tasks, where data inherently lies in irregular domains. In this talk I'm going to cover the immense potential of graph representation learning in life sciences and some of the state-of-the-art approaches that have shown great progress in this domain. The talk will further cover aspects of algorithmic transparency and how these considerations might arise in the context of life sciences.

    We are sorry to announce that this seminar's video will not be available, as DeepMind has not authorised its release.

  • Wednesday, 24-11-2021 (Zoom; 16:30): Seminar by Shuhui Li (Department of Electrical and Computer Engineering at The University of Alabama).
    Neural-Intelligence for IPM Motor Control and Drives in Electric Vehicles
    Due to the limited space within an electric vehicle (EV), high performance and efficiency of EV electric and electronic components are critical to accelerating the growth of the EV market. One of the most important components within an EV is the electric motor, particularly the interior-mounted permanent magnet (IPM) motor widely used by the automobile industry. This seminar focuses on the development of multiple Neuro-Intelligence (NI) systems that overcome the technological limitations of existing IPM motor drives and control systems to improve motor efficiency and reliability. The design of the NI systems employs a closed-loop interaction between data-driven NI methods and physics-based models and principles, as far as possible, to enhance the learning ability of the NI systems so that they can meet real-life requirements and conditions. To handle the part-to-part variation of individual motors and ensure the lifetime adaptivity and learning capabilities of the offline-trained NI systems, a 5G-based cloud computing platform is employed for routine offline NI learning after an EV is put into use. This guarantees the reliability, convergence, and performance of the routine offline NI learning over the cloud computing platform, for the most efficient and reliable drive of an IPM motor over its lifetime.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 20-10-2021 (Zoom; 16:30): Seminar by Haitham Bou Ammar (Huawei Research & Innovation and Computer Science, UCL).
    High-Dimensional Black-Box Optimisation in Small Data Regimes
    Many problems in science and engineering can be viewed as instances of black-box optimisation over high-dimensional (structured) input spaces. Applications are ubiquitous, including arithmetic expression formation from formal grammars and property-guided molecule generation, to name a few. Machine learning (ML) has shown promising results in many such problems, (sometimes) leading to state-of-the-art results. Despite those successes, modern ML techniques are data-hungry, requiring hundreds of thousands, if not millions, of labelled examples. Unfortunately, many real-world applications do not enjoy such a luxury: it is challenging to acquire millions of wet-lab experiments when designing new molecules. This talk will elaborate on novel techniques we developed for high-dimensional Bayesian optimisation (BO), capable of efficiently resolving such data bottlenecks. Our methods combine ideas from deep metric learning with BO to enable sample-efficient low-dimensional surrogate optimisation. We provide theoretical guarantees demonstrating vanishing regrets with respect to the true high-dimensional optimisation problem. Furthermore, in a set of experiments, we confirm the effectiveness of our techniques in reducing sample sizes by achieving state-of-the-art logP molecule values utilising only 1% of the labels used by the previous SOTA.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 06-10-2021 (Zoom; 16:30): Seminar by Greg Slabaugh (Digital Environment Research Institute - DERI at Queen Mary, University of London and The Alan Turing Institute).
    Capturing the Moment: Deep Learning for Computational Photography
    Smartphones have recently begun to include onboard processors capable of running deep neural networks. This has enabled the use of on-device deep learning for computational photography to produce high-quality photographs, and has spurred research interest in further advancing photo quality with advanced techniques. This talk will present some recent research using convolutional neural networks for image enhancement, including color and brightness transformations and bokeh, as well as a new method for adaptively training fully convolutional networks.

    To watch this seminar video recording, click the thumbnail below


Term 2 2020-21

  • Wednesday, 05-05-2021 (Zoom; 16:30): Seminar by Alex Ter-Sarkisov (CitAI at City, University of London).
    Detection and Segmentation of Lesions in Chest CT Scans for The Prediction of COVID-19
    We introduce a lightweight model based on Mask R-CNN with ResNet18 and ResNet34 backbones that segments lesions and predicts COVID-19 from chest CT scans in a single shot. The model requires only a small training dataset, achieving an average precision of 42.45 (the main MS COCO criterion) on the segmentation test split, and 93.00% COVID-19 sensitivity and an F1-score of 96.76% on the classification test split across three classes: COVID-19, Common Pneumonia and Control/Negative.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 21-04-2021 (Zoom; 16:30): Seminar by Khurram Javed (RLAI Lab at University of Alberta).
    Towards Scalable Real-time Learning for making Robust Intelligent Systems
    In this seminar, I will talk about the necessity of real-time learning for building robust learning systems. I will contrast two directions for making robust systems — (1) zero-shot out-of-distribution generalization and (2) real-time online adaptation — and argue that the latter is a more promising strategy. I will then talk about three open problems that have to be solved for real-time learning, namely (1) catastrophic forgetting, (2) online agent-state construction, and (3) discovery, and share some of the research being done by me and my colleagues to address these problems.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 31-03-2021 (Zoom; 16:30): Seminar by Emmanuel Pothos (Department of Psychology at City, University of London).
    Why should we care about 'quantum' in cognition?
    A predominant approach to modelling cognition is based on classical (Bayesian) probability theory. So-called Bayesian cognitive models have been consistently successful, but equally there have been instances of persistent divergence between such models and human behaviour. For example, the influential research programme from Tversky and Kahneman has produced several apparent fallacies. Probabilistic theory is not restricted to Bayesian theory, however. We explore the applicability and promise of quantum probability theory in understanding behaviour. Quantum theory is the probability rules from quantum mechanics, without any of the physics; it is in principle applicable in any situation where there is a need to quantify uncertainty. Quantum theory incorporates features, such as interference and contextuality, which appear to align well with intuition concerning human behaviour, at least in certain cases. We consider some notable quantum cognitive models.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 17-03-2021 (Zoom; 16:30): Seminar by Sergio Naval Marimont (Department of Computer Science at City, University of London).
    Unsupervised approaches for medical anomaly detection
    Deep learning methods have been proposed to automatically localize anomalies in medical images. Supervised methods achieve high segmentation accuracy; however, they rely on large and diverse annotated datasets and are specific to the anomalies previously annotated. Unsupervised methods are not affected by these limitations, but their performance is comparatively poorer. Most commonly, unsupervised anomaly detection methods are based on two steps: 1) use a generative model to learn the distribution of normal / healthy anatomies in images; 2) localize anomalies in test images as differences from the learnt normal distribution. Anomaly scores provide alternative ways to measure and identify these differences. During the seminar we will review the unsupervised models most commonly evaluated in the literature (namely, Variational Auto-Encoders and Generative Adversarial Networks) and the strengths and weaknesses of these models for the anomaly detection task. We will also review several methods proposed to overcome these models' limitations and how they define anomaly scores. We will finish with a review of several studies comparing the performance of these methods on brain MR images.
    Slides (.pdf) available here

    To watch this seminar video recording, click the thumbnail below

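    The two-step recipe described in the abstract above can be sketched in a few lines. This is a toy illustration only: as a hypothetical stand-in for the VAE/GAN generative models discussed in the talk, it fits a per-pixel Gaussian to "healthy" images and uses the deviation from that learnt distribution as the anomaly score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: learn the distribution of normal / healthy images
# (here: a per-pixel Gaussian fitted to 100 synthetic 8x8 "healthy" images).
healthy = rng.normal(loc=0.5, scale=0.05, size=(100, 8, 8))
mu = healthy.mean(axis=0)
sigma = healthy.std(axis=0) + 1e-8

# Step 2: localize anomalies in a test image as differences
# from the learnt normal distribution.
test = healthy[0].copy()
test[2:4, 2:4] += 1.0                    # inject a synthetic "lesion"
anomaly_map = np.abs(test - mu) / sigma  # per-pixel z-score as anomaly score

detected = anomaly_map > 5.0             # threshold chosen for this toy example
print(detected.sum(), "anomalous pixels flagged")
```

    In the real methods reviewed in the seminar, step 1 is a deep generative model and the anomaly score is typically a reconstruction error or likelihood-based quantity, but the structure of the pipeline is the same.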

  • Wednesday, 03-03-2021 (Zoom; 16:30): Seminar by Indira L. Lanza Cruz (Departament de Llenguatges i Sistemes Informàtics at Universitat Jaume I).
    Author Profiling for Social Business Intelligence
    This research presents a novel and simple author profiling method for classifying social media users according to analysis perspectives for Social Business Intelligence (SBI). To demonstrate the approach, we use data from the Twitter social network in the automotive domain. Unlike most author profiling methods for social media, which mainly rely on metrics provided by Twitter such as followers, retweets, etc., the technique developed uses only the textual information of user profiles. One of the greatest difficulties analysts face when addressing machine learning problems is quickly obtaining a large, high-quality dataset. We therefore propose a semi-automatic technique, based on the identification of key bi-grams, for obtaining language models for the classification of social media profiles oriented to SBI. This process needs to be scalable, fast, and dynamic, since a company's analysis needs and objectives change very frequently. We evaluate three families of classifiers, namely SVM, MLP and FastText, using FastText pre-trained embeddings. The semi-supervised and unsupervised approaches yielded very good results, demonstrating the efficiency of the methodology for SBI with minimal expert involvement.

    To watch this seminar video recording, click the thumbnail below
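    The key bi-gram identification step mentioned in the abstract above can be sketched as simple frequency counting over profile texts. This is a hypothetical toy example (invented profile strings, plain whitespace tokenisation); the actual pipeline in the talk is semi-automatic and domain-driven.

```python
from collections import Counter

# Toy user-profile descriptions (hypothetical data).
profiles = [
    "i love my electric car",
    "new electric car review every week",
    "cooking recipes and food photos",
]

def bigrams(text):
    """Return the list of adjacent word pairs in a lower-cased text."""
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

# Count bi-grams across all profiles; the most frequent ones are
# candidates for the "key bi-grams" used to build a language model.
counts = Counter(bg for p in profiles for bg in bigrams(p))
print(counts.most_common(1))
```

    Here ("electric", "car") appears in two profiles and would surface as a candidate key bi-gram for an automotive-oriented class.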


  • Wednesday, 17-02-2021 (Zoom; 16:30): Seminar by Xingang Fu (Department of Electrical Engineering and Computer Science at Texas A&M University Kingsville).
    Deep Recurrent Neural Network Control Applied in Solar Inverters for Grid Integration
    Single-phase and three-phase grid-tied converters/inverters are widely used to integrate small-scale renewable energy sources and distributed generation into the utility grid. A novel Recurrent Neural Network (RNN) current controller is proposed to approximate the optimal control and overcome the problems associated with conventional controllers under practical conditions. The neural controllers were trained efficiently using the Levenberg-Marquardt Backpropagation (LMBP) and Forward Accumulation Through Time (FATT) algorithms. Local stability and local convergence were investigated for the neural controllers to guarantee stable operation. The RNN controller was validated on a Texas Instruments (TI) LCL-filter-based solar microinverter kit containing a C2000 TI microcontroller. Both simulation and hardware-in-the-loop experiments demonstrated the excellent performance of the RNN vector controller for better solar integration.
    Slides (.pdf) available here

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 03-02-2021 (Zoom; 16:30): Seminar by Alex Taylor (Department of Computer Science at City, University of London).
    Lessons from a boy and his use of an adapted HoloLens system
    In this talk, I’ll come at AI/ML from a perspective that I imagine is quite unusual for CitAI presentations—from the perspective of use. Through short video clips, I’ll provide some examples of a young boy using an adapted HoloLens system. We’ll learn how the boy, TH, who is blind, has come to work with others - including the system - to compose a world he is able to orientate to and interact with in new ways. The remarkable achievements that TH is able to accomplish will be shown to be a product not of the system so much as of the capabilities that emerge through its use. The point I’ll aim to emphasise is that we limit the potential of actors (human or otherwise) when we define them by their individual abilities, such as whether or not they have sight. Far more generative possibilities arise when we consider how actors enable one another through their unfolding relations. This, I’ll suggest, has wider implications for designing AI systems.

  • Wednesday, 20-01-2021 (Zoom; 16:30): Seminar by Andrea Zugarini (SAILab, Siena Artificial Intelligence Lab, at Università di Siena).
    Language Modeling for Understanding and Generation
    In the last decade there have been incredible advances in Natural Language Processing (NLP), where deep learning models have reached astonishing understanding and generation capabilities. Language Modeling played a major role in these developments, and nowadays Language Models (LMs) are an essential ingredient for any NLP problem. In this seminar we present how LMs are involved in text understanding and generation problems. In particular, we first describe a character-aware model that learns general-purpose word and context representations exploitable for many language understanding tasks. Then, we outline Language Models in the context of text generation, focusing on poems. Finally, we show how language models can be a valid tool to study diachronic and dialectal language varieties.
    Slides (.pdf) available here

    To watch this seminar video recording, click the thumbnail below


Term 1 2020-21

  • Wednesday, 09-12-2020 (Zoom; 16:30): Seminar by Daniel Yon (Department of Psychology, Goldsmiths, University of London).
    Prediction, action and awareness
    To interact with the world around us, we must anticipate how our actions shape our environments. However, existing theories disagree about how we should use such predictions to optimise representations of our actions and their consequences. Cancellation models in action control suggest that we ‘filter out’ expected sensory signals, dedicating more resources to unexpected events that surprise us most and signal the need for further learning. In direct contrast, Bayesian models from sensory cognition suggest that perceptual inference should be biased towards our prior predictions – allowing us to generate more reliable representations from noisy and ambiguous inputs. In this talk I will present a combination of psychophysical, neuroimaging (fMRI) and computational modelling work that compares these Cancellation and Bayesian models – asking how predictions generated by our actions shape perception and awareness. In light of these results, I will discuss a new hypothesis about mechanisms of prediction and surprise that may solve the ‘perceptual prediction paradox’ presented by incompatible Cancellation and Bayesian accounts.

  • Wednesday, 25-11-2020 (Zoom; 16:30): Seminar by Sebastian Bobadilla-Suarez (Love Lab, Psychology and Language Sciences, UCL).
    Neural similarity in BOLD response and multi-unit recordings
    One fundamental question is what makes two brain states similar. For example, what makes the activity in visual cortex elicited by viewing a robin similar to that elicited by a sparrow? There are a number of possible ways to measure similarity, each of which makes certain conceptual commitments. In terms of information processing in the brain, interesting questions include whether notions of similarity are common across brain regions and tasks, as well as how attention can alter similarity representations. With multi-unit recordings, additional questions, such as the importance of spike timing, can be considered. We evaluated which of several competing similarity measures best captured neural similarity. One technique uses a decoding approach to assess the information present in a brain region; the similarity measures that best correspond to the classifier’s confusion matrix are preferred. Across two published fMRI datasets, we found the preferred neural similarity measures were common across brain regions, but differed across tasks. In considering similarity spaces derived from monkey multi-unit recordings, we found that similarity measures that took spike timing information into account best recovered the representational spaces. In both fMRI and multi-unit data, we found that top-down attention, which highlighted task-relevant stimulus attributes, had the effect of stretching neural representations along those axes to make stimuli differing on relevant attributes less similar. These effects were captured by a deep convolutional network front-end to a Long Short-Term Memory (LSTM) network that tracked changes in task context and whose representations stretched in a task-driven manner, paralleling how patterns of neural similarity changed with task context.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 18-11-2020 (Zoom; 16:30): Seminar by Nathanael Fijalkow (LaBRI, CNRS, University of Bordeaux and The Alan Turing Institute).
    Program synthesis in the machine learning era
    Program synthesis is one of the oldest dreams of artificial intelligence: synthesising a program from its specification, avoiding the hurdle of writing it, the pain of debugging it, and the cost of testing it. In this talk I will discuss recent progress in the field of program synthesis through the use of machine learning techniques. I will highlight the key challenges faced by the "machine learning guided search" approach, and recent solutions.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 11-11-2020 (Zoom; 16:30): Seminar by Rodrigo Agerri (IXA Group, Euskal Herriko Unibertsitatea).
    Word Representations for Named Entity Recognition
    After a brief introduction to Information Extraction in Natural Language Processing (NLP), this talk will provide an overview of the most important methods for representing words in NLP and their use for the Named Entity Recognition (NER) task (automatically extracting proper names from written text). In addition to new deep learning algorithms and architectures, new techniques for word representation have helped to greatly improve results in many NLP tasks, including NER. After introducing some of the most successful vector-based contextual representations, we will also study their impact on multilingual approaches as well as on low-resource languages, such as Basque.
    Slides (.pdf) available here

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 28-10-2020 (Zoom; 16:30): Seminar by Alessandro Betti (SAILab, Siena Artificial Intelligence Lab, at Università di Siena).
    A Variational Framework for Laws of Learning
    Many problems in learning naturally present themselves as a coherent stream of information with its own dynamics and temporal scales; one emblematic example is visual information. Nowadays, however, most approaches to learning disregard, entirely or to a first approximation, this property of the information on which learning is performed. As a result, the problem is typically formulated as a “static” optimization problem over the parameters that define a learning model. Formulating a learning theory in terms of evolution laws instead shifts the attention to the dynamical behaviour of the learner. This gives agents that live in streams of data the opportunity to couple their dynamics with the information that flows from the environment, and to incorporate into the temporal laws of learning dynamical constraints that we know will enhance the quality of learning. We will discuss how learning processes can be consistently framed using variational principles.
    Slides (.pdf) available here

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 21-10-2020 (Zoom; 16:30): Seminar by Mehdi Keramati (Department of Psychology at City, University of London).
    Optimal Planning under Cognitive Constraints
    When deciding their next move (e.g. in a chess game, or a cheese maze), a superhuman or a super-mouse would think infinitely deep into the future and consider all possible sequences of actions and their outcomes. A terrestrial human or mouse, however, has limited and slow computational resources and is thus compelled to restrict its contemplation. A key theoretical question is how an agent can make the best of her limited time and cognitive resources in order to make up her mind. I will review several strategies, some borrowed from the artificial intelligence literature, that we and others have demonstrated animals and humans use in the face of different cognitive limitations. These strategies include: acting based on habits, limiting the planning horizon, forward/backward planning, hierarchical planning, and successor-representation learning.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 14-10-2020 (Zoom; 16:00): Seminar by Jonathan Passerat-Palmbach (ConsenSys, and BioMedIA at Imperial College London).
    Convergence of Blockchain and Secure Computing for Healthcare solutions
    Web3 provides us with the bricks to build decentralised AI marketplaces where data and models could be monetised. However, this stack does not provide the privacy guarantees required to engage the actors of this decentralised AI economy. Once data or a model has been exposed in plaintext, any mechanism controlling access to that piece of information becomes irrelevant, since it cannot guarantee that the data has not leaked. In this talk, we'll explore the state of the art in Secure/Blind Computing, which will guarantee the privacy of data or models and enable a decentralised AI vision. In particular, we will describe an Ethereum-orchestrated architecture for a technique known as Federated Learning, which enables training AI models on sensitive data while respecting their owners' privacy.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 07-10-2020 (Zoom; 16:30): Seminar by Yang-Hui He (Department of Mathematics at City, University of London).
    Universes as Bigdata: Superstrings, Calabi-Yau Manifolds and Machine-Learning
    We review how the problem of string phenomenology historically led theoretical physics first to algebraic/differential geometry/topology, then to computational geometry, and now to data science and machine-learning. With the concrete playground of the so-called Calabi-Yau landscape, accumulated through the collaboration of physicists, mathematicians and computer scientists over the last four decades, we show how the latest techniques in machine-learning can help explore problems of physical and mathematical interest.

    To watch this seminar video recording, click the thumbnail below


  • Wednesday, 30-09-2020 (Zoom; 16:30): Seminar by Michaël Garcia-Ortiz (CitAI and Department of Computer Science at City, University of London).
    The illusion of space
    Humans naturally experience the notion of space, which is integral to how we perceive and act in the world. In this seminar, we will ask where this notion comes from, and how it can emerge in biological and artificial agents. We will draw relations between space, objects, and actions, following ideas that Poincaré expressed more than 100 years ago. Finally, we will see how predictive learning can be used as a mechanism to acquire the notions of displacement, space, and objects.

Term 3 2019-20

  • Wednesday, 15-07-2020 (Zoom; 16:30): Seminar by Eduardo Alonso (CitAI and Department of Computer Science at City, University of London).
    On representations, symmetries, groups, and variational principles
    Given the complexity of the world, one of the main problems in Artificial General Intelligence is how to learn tractable representations. One potential solution is to assume that the world shows structure-preserving symmetries that our representations should reflect. Symmetries have been traditionally formalised as groups and, through the conservation of certain quantities, embed variational principles that dynamic laws must follow. In this talk we will try to bridge the gap between recent advances in representational learning that use group theory to express symmetries and the Free Energy Principle, which hypothesizes that the brain processes information so as to minimize surprise. Interestingly, the latter presumes that organisms execute actions intended to transform the environment in such a way that it matches their preferred representation of the world; on the other hand, it has been proposed that for agents to learn such representations they must execute operations, as dictated by the actions of the underlying symmetry group. Once the relation between symmetries and variational principles has been established in the context of representational learning, we will introduce the idea that groupoids, rather than groups, are the appropriate mathematical tool for formalising the partial symmetries that, we claim, are the symmetries of the real world.
    Slides (.pdf) available here

Term 2 2019-20

  • Week 7, 04-03-2020 (AG03; 16:30): Seminar by Hugo Caselles-Dupré (ENSTA ParisTech and Softbank Robotics Europe; ObviousArt).
    Re-defining disentanglement in Representation Learning for artificial agents
    Finding a generally accepted formal definition of a disentangled representation, in the context of an agent behaving in an environment, is an important challenge towards the construction of data-efficient autonomous agents. The idea of disentanglement is often associated with the idea that sensory data is generated by a few explanatory factors of variation. Higgins et al. recently proposed Symmetry-Based Disentangled Representation Learning, an alternative definition of disentanglement based on a characterization of symmetries in the environment using group theory. In our latest NeurIPS paper we build on their work and make theoretical and empirical observations that lead us to argue that Symmetry-Based Disentangled Representation Learning cannot be based only on static observations: agents should interact with the environment to discover its symmetries.
    Slides (.pdf) available here

  • Week 5, 19-02-2020 (C300; 16:30): Seminar by Lee Harris (Computational Intelligence Group, University of Kent).
    Comparing Explanations Between Random Forests And Artificial Neural Networks
    The decisions made by machines are increasingly comparable in predictive performance to those made by humans, but the underlying decision-making processes are often concealed as black boxes. Additional techniques are required to extract understanding; one such category is explanation methods. This research compares the explanations of two popular forms of artificial intelligence: neural networks and random forests. Researchers in either field often have divided opinions on transparency, and similarity can help to encourage trust in predictive accuracy alongside transparent structure. This research explores a variety of simulated and real-world datasets that ensure fair applicability to both learning algorithms. A new heuristic explanation method that extends an existing technique is introduced, and our results show that it is somewhat similar to the other methods examined, whilst also offering an alternative perspective on least-important features.
    Slides (.pdf) available here

  • Week 4, 12-02-2020 (ELG14; 16:30): CitAI planning events 2020 (Core members).

  • Week 3, 05-02-2020 (C300; 16:30): Seminar by Vincenzo Cutrona (INSID&S Lab, Università di Milano-Bicocca).
    Semantic Data Enrichment meets Neural-Symbolic Reasoning
    Data enrichment is a critical task in the data preparation process of many data science projects, where a data set has to be extended with additional information from different sources in order to perform insightful analyses. The most crucial pipeline step is table reconciliation, where values in cells are mapped to objects described in the external data sources. State-of-the-art approaches to table reconciliation perform well, but they do not scale to huge datasets and are mostly focused on a single external source (e.g., a specific Knowledge Graph). Thus, the problem of scalable table enrichment has recently gained attention. The focus of this talk will be an experimental approach for reconciling values in tables, which relies on the neural-symbolic reasoning paradigm and is potentially able both to scale and to adapt itself to new sources of information. Preliminary results will be discussed in the last part of the talk.
    Slides (.pdf) available here

  • Week 2, 29-01-2020 (C321; 15:00): Worktribe session, by Claudia Kalay (Research & Enterprise Office) (Core members).

  • Week 1, 22-01-2020 (ELG14; 16:10): CitAI funding strategy 2020 (Core members).

Term 1 2019-20

  • Week 11, 04-12-2019 (E214): EIT and other R&E opportunities, by Brigita Jurisic (Research & Enterprise Office) (Core members).

  • Week 10, 27-11-2019 (AG04): Lecture on Deep Learning II, by Alex Ter-Sarkisov (Core members).

  • Week 9, 20-11-2019 (A227): Seminar, by Kizito Salako, Sarah Scott, Johann Bauer and Nathan Olliverre.

  • Week 8, 13-11-2019 (E214): Lecture on Deep Learning I, by Alex Ter-Sarkisov (Core members).

  • Week 7, 06-11-2019 (AG11): Seminar, by Fatima Najibi, Tom Chen, Alex Ter-Sarkisov, and Atif Riaz.

  • Week 5, 23-10-2019 (ELG14): Knowledge Transfer Partnerships (KTPs), by Ian Gibbs (Research & Enterprise Office) (Core members).

  • Week 4, 16-10-2019 (E214): Webpage development session, by Esther Mondragón (Core members).

  • Week 3, 09-10-2019 (A227): Seminar, by Ernesto Jiménez-Ruiz, Michael Garcia Ortiz, Mark Broom, Laure Daviaud and Essy Mulwa.

  • Week 2, 02-10-2019 (A227): Seminar, by Ed Alonso, Esther Mondragón, Constantino Carlos Reyes-Aldasoro and Giacomo Tarroni.

  • Week 1, 25-09-2019 (AG02): UKRI Funding procedures and opportunities, by Peter Aggar (Research & Enterprise Office) (Core members).