Organisers: Laure Daviaud and Giacomo Tarroni
If you wish to attend via Zoom, please contact the Seminar Organisers.
- Wednesday, 30-09-2020 (Zoom; 16:30): Seminar by Michaël Garcia-Ortiz (CitAI and Department of Computer Science at City, University of London).
The illusion of space
Humans naturally experience the notion of space, which is integral to how we perceive and act in the world. In this seminar, we will ask ourselves where this notion comes from, and how it can emerge in biological and artificial agents. We will draw relations between space, objects, and actions, following ideas that Poincaré expressed more than 100 years ago. Finally, we will see how predictive learning can be used as a mechanism to acquire the notions of displacement, space, and objects.
- Wednesday, 07-10-2020 (Zoom; 16:30): Seminar by Yang-Hui He (Department of Mathematics at City, University of London).
Universes as Bigdata: Superstrings, Calabi-Yau Manifolds and Machine-Learning
We review how historically the problem of string phenomenology led theoretical physics first to algebraic/differential geometry/topology, then to computational geometry, and now to data science and machine learning. With the concrete playground of the so-called Calabi-Yau landscape, accumulated by the collaboration of physicists, mathematicians and computer scientists over the last four decades, we show how the latest techniques in machine learning can help explore problems of physical and mathematical interest.
- Wednesday, 14-10-2020 (Zoom; 16:00): Seminar by Jonathan Passerat-Palmbach (ConsenSys, and BioMedIA at Imperial College London).
Convergence of Blockchain and Secure Computing for Healthcare solutions
Web3 provides us with the bricks to build decentralised AI marketplaces where data and models could be monetised. However, this stack does not provide the privacy guarantees required to engage the actors of this decentralised AI economy. Once data or a model has been exposed in plaintext, any mechanism controlling access to this piece of information becomes irrelevant, since it cannot guarantee that the data has not leaked. In this talk, we'll explore the state of the art in Secure/Blind Computing that will guarantee the privacy of data and models and enable a decentralised AI vision. In particular, we will describe an Ethereum-orchestrated architecture for a technique known as Federated Learning, which enables training AI models on sensitive data while respecting their owners' privacy.
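The federated averaging idea behind this setup can be illustrated with a minimal sketch. Everything below (function names, toy gradients, client sizes) is invented for illustration; a real deployment would add the secure aggregation and orchestration layers the talk discusses.

```python
# Minimal sketch of Federated Averaging: clients train locally on private
# data and share only model weights, which the server averages.

def local_update(weights, gradient, lr=0.1):
    """One gradient step of local training on a client's private data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights, client_sizes):
    """Aggregate client models, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients start from the same global model and train locally;
# only their updated weights (never their raw data) are shared.
global_model = [0.0, 0.0]
client_a = local_update(global_model, [1.0, -2.0])  # 100 local samples
client_b = local_update(global_model, [3.0, 0.0])   # 300 local samples
global_model = federated_average([client_a, client_b], [100, 300])
print(global_model)  # ≈ [-0.25, 0.05]
```

The weighting by dataset size is what makes the aggregate behave like training on the pooled data, without that data ever leaving the clients.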
- Wednesday, 21-10-2020 (Zoom; 16:30): Seminar, by Mehdi Keramati (Department of Psychology at City, University of London).
Optimal Planning under Cognitive Constraints
When deciding their next move (e.g. in a chess game, or a cheese maze), a superhuman or a super-mouse would think infinitely deep into the future and consider all the possible sequences of actions and their outcomes. A terrestrial human or mouse, however, has limited and time-consuming computational resources and is thus compelled to restrict its contemplation. A key theoretical question is how an agent can make the best of her limited time and cognitive resources in order to make up her mind. I will review several strategies, some borrowed from the artificial intelligence literature, that, as we and others have demonstrated, animals and humans use in the face of different cognitive limitations. These strategies include: acting based on habits, limiting the planning horizon, forward/backward planning, hierarchical planning, and successor-representation learning.
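One of the strategies listed, limiting the planning horizon, can be sketched in a few lines. The tiny "cheese maze" below is invented for the example; it only illustrates how a shallow horizon trades optimality for less computation.

```python
# A tiny deterministic world: state -> {action: (next_state, reward)}.
WORLD = {
    "start":  {"left": ("a", 0), "right": ("b", 1)},
    "a":      {"left": ("cheese", 10), "right": ("b", 0)},
    "b":      {"left": ("a", 0), "right": ("trap", -5)},
    "cheese": {},
    "trap":   {},
}

def plan(state, horizon):
    """Best total reward reachable from `state`, looking `horizon` moves ahead."""
    if horizon == 0 or not WORLD[state]:
        return 0
    return max(r + plan(nxt, horizon - 1) for nxt, r in WORLD[state].values())

# A horizon-1 planner is lured by the immediate reward on the right;
# a deeper horizon reveals the cheese two moves away on the left.
print(plan("start", horizon=1))  # 1   (greedy: go right)
print(plan("start", horizon=2))  # 10  (left, then left to the cheese)
```

The number of action sequences grows exponentially with the horizon, which is exactly why a resource-limited agent must cap it or prune it.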
- Wednesday, 28-10-2020 (Zoom; 16:30): Seminar, by Alessandro Betti (SAILab, Siena Artificial Intelligence Lab, at Università di Siena).
A Variational Framework for Laws of Learning
Many problems in learning naturally present themselves as a coherent stream of information with its own dynamics and temporal scales; one emblematic example is visual information. Nowadays, however, most approaches to learning disregard, entirely or to a first approximation, this property of the information on which learning is performed. As a result, the problem is typically formulated as a “static” optimization problem on the parameters that define a learning model. Formulating a learning theory in terms of evolution laws instead shifts the attention to the dynamical behaviour of the learner. This gives us the opportunity, for those agents that live in streams of data, to couple their dynamics with the information that flows from the environment, and to incorporate into the temporal laws of learning dynamical constraints that we know will enhance the quality of learning. We will discuss how we can consistently frame learning processes using variational principles.
- Wednesday, 11-11-2020 (Zoom; 16:30): Seminar, by Rodrigo Agerri (IXA Group, Euskal Herriko Unibertsitatea).
Word Representations for Named Entity Recognition
After a brief introduction to Information Extraction in Natural Language Processing (NLP), this talk will provide an overview of the most important methods for representing words in NLP and their use in the Named Entity Recognition (NER) task (automatically extracting proper names from written text). In addition to new deep learning algorithms and architectures, new techniques for word representation have helped to greatly improve results in many NLP tasks, including NER. After introducing some of the most successful vector-based contextual representations, we will also study their impact on multilingual approaches as well as on low-resourced languages, such as Basque.
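The classic starting point for the word representations surveyed here is the distributional one: words occurring in similar contexts get similar vectors. A minimal sketch, with a toy corpus invented for illustration (real systems use large corpora and learned, contextual embeddings):

```python
# Count-based distributional word vectors and cosine similarity.
from collections import Counter
from math import sqrt

corpus = [
    "paris is a city", "london is a city",
    "basque is a language", "english is a language",
]

def context_vector(word, window=3):
    """Count words co-occurring with `word` within +/- `window` positions."""
    counts = Counter()
    for sentence in corpus:
        toks = sentence.split()
        for i, t in enumerate(toks):
            if t == word:
                for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                    if j != i:
                        counts[toks[j]] += 1
    return counts

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

# "paris" and "london" share contexts, so their vectors are close;
# "paris" and "basque" share fewer.
assert cosine(context_vector("paris"), context_vector("london")) > \
       cosine(context_vector("paris"), context_vector("basque"))
```

An NER system then feeds such vectors (static or, in recent work, context-dependent) into a sequence labeller that tags each token as part of an entity or not.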
- Wednesday, 18-11-2020 (Zoom; 16:30): Seminar, by Nathanael Fijalkow (LaBRI, CNRS, University of Bordeaux and The Alan Turing Institute).
- Wednesday, 25-11-2020 (Zoom; 16:30): Seminar, by Sebastian Bobadilla-Suarez (Love Lab, Psychology and Language Sciences, UCL).
Term 3 2019-20
- Wednesday, 15-07-2020 (Zoom; 16:30): Seminar, by Eduardo Alonso, (CitAI and Department of Computer Science at City, University of London).
On representations, symmetries, groups, and variational principles
Given the complexity of the world, one of the main problems in Artificial General Intelligence is how to learn tractable representations. One potential solution is to assume that the world shows structure-preserving symmetries that our representations should reflect. Symmetries have traditionally been formalised as groups and, through the conservation of certain quantities, embed variational principles that dynamical laws must follow. In this talk we will try to bridge the gap between recent advances in representation learning that use group theory to express symmetries and the Free Energy Principle, which hypothesizes that the brain processes information so as to minimize surprise. Interestingly, the latter presumes that organisms execute actions intended to transform the environment in such a way that it matches their preferred representation of the world; on the other hand, it has been proposed that for agents to learn such representations they must execute operations, as dictated by the actions of the underlying symmetry group. Once the relation between symmetries and variational principles, in the context of representation learning, has been established, we will introduce the idea that groupoids, rather than groups, are the appropriate mathematical tool to formalise the partial symmetries that, we claim, are the symmetries of the real world.
Slides (.pdf) available here
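The group-theoretic notion of symmetry used in this talk can be made concrete with a minimal sketch: the cyclic group Z4 of quarter-turn rotations acting on points in the plane. The choice of Z4 and of integer points is purely illustrative.

```python
# A group action: Z4 (quarter-turn rotations) acting on 2D points.

def rotate(point, k):
    """Act on a 2D point with the k-th quarter-turn rotation (k in Z4)."""
    x, y = point
    for _ in range(k % 4):
        x, y = -y, x  # one 90-degree rotation
    return (x, y)

p = (1, 0)
# Group-action axioms: the identity acts trivially, and acting with k
# then m equals acting with the composition k + m (mod 4).
assert rotate(p, 0) == p
assert rotate(rotate(p, 1), 2) == rotate(p, 3)

# The orbit of p: all states related to it by some symmetry of the group.
orbit = {rotate(p, k) for k in range(4)}
print(orbit)  # the four unit axis points
```

A representation that "reflects" this symmetry, in the sense of the abstract, is one on which the same group operations act in a structure-preserving way.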
Term 2 2019-20
- Week 7, 04-03-2020 (AG03; 16:30): Seminar, by Hugo Caselles-Dupré (ENSTA ParisTech and Softbank Robotics Europe; ObviousArt).
Re-defining disentanglement in Representation Learning for artificial agents
Finding a generally accepted formal definition of a disentangled representation in the context of an agent behaving in an environment is an important challenge towards the construction of data-efficient autonomous agents. The idea of disentanglement is often associated with the idea that sensory data is generated by a few explanatory factors of variation. Higgins et al. recently proposed Symmetry-Based Disentangled Representation Learning, an alternative definition of disentanglement based on a characterization of symmetries in the environment using group theory. In our latest NeurIPS paper we build on their work and make theoretical and empirical observations that lead us to argue that Symmetry-Based Disentangled Representation Learning cannot be based on static observations alone: agents should interact with the environment to discover its symmetries.
Slides (.pdf) available here
- Week 5, 19-02-2020 (C300; 16:30): Seminar, by Lee Harris (Computational Intelligence Group, University of Kent).
Comparing Explanations Between Random Forests And Artificial Neural Networks
The decisions made by machines are increasingly comparable in predictive performance to those made by humans, but these decision-making processes are often concealed as black boxes. Additional techniques are required to extract understanding, and one such category is explanation methods. This research compares the explanations of two popular forms of artificial intelligence: neural networks and random forests. Researchers in either field often have divided opinions on transparency, and similarity between explanations can help to encourage trust in predictive accuracy alongside transparent structure. This research explores a variety of simulated and real-world datasets that ensure fair applicability to both learning algorithms. A new heuristic explanation method that extends an existing technique is introduced, and our results show that it is somewhat similar to the other methods examined, whilst also offering an alternative perspective on least-important features.
Slides (.pdf) available here
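A simple example of the kind of model-agnostic explanation method this line of work compares is permutation importance (this is a generic technique, not the specific heuristic introduced in the talk). The stand-in "model" and data below are invented for illustration:

```python
# Permutation importance: measure the accuracy drop when one feature's
# column is shuffled, breaking its relationship with the target.
import random

def model(x):
    """Stand-in black box: the prediction depends only on feature 0."""
    return 1 if x[0] > 0.5 else 0

def accuracy(m, X, y):
    return sum(m(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(m, X, y, feature, seed=0):
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return accuracy(m, X, y) - accuracy(m, X_perm, y)

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [model(x) for x in X]
# Shuffling the feature the model actually uses hurts accuracy;
# shuffling the irrelevant one does not.
print(permutation_importance(model, X, y, feature=0))  # > 0
print(permutation_importance(model, X, y, feature=1))  # 0.0
```

Because it only queries predictions, the same procedure applies unchanged to a random forest or a neural network, which is what makes cross-model comparisons of explanations possible.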
- Week 4, 12-02-2020 (ELG14; 16:30): CitAI planning events 2020 (Core members).
- Week 3, 05-02-2020 (C300; 16:30): Seminar, by Vincenzo Cutrona (INSID&S Lab, Università di Milano-Bicocca).
Semantic Data Enrichment meets Neural-Symbolic Reasoning
Data enrichment is a critical task in the data preparation process of many data science projects, where a data set has to be extended with additional information from different sources in order to perform insightful analyses. The most crucial pipeline step is table reconciliation, where values in cells are mapped to objects described in the external data sources. State-of-the-art approaches for table reconciliation perform well, but they do not scale to huge datasets and are mostly focused on a single external source (e.g., a specific Knowledge Graph). Thus, the problem of scalable table enrichment has recently gained attention. The focus of this talk will be an experimental approach for reconciling values in tables, which relies on the neural-symbolic reasoning paradigm and is potentially able both to scale and to adapt itself to new sources of information. Preliminary results will be discussed in the last part of the talk.
Slides (.pdf) available here
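The reconciliation step described above, in its simplest symbolic form, amounts to matching cell values against entity labels in an external source. A minimal sketch, with a tiny invented "knowledge graph" (real reconcilers add candidate ranking, context from neighbouring columns, and learned matchers):

```python
# Map table cells to knowledge-graph entities by normalised label matching.

KG = {
    "Q60": {"label": "New York City", "aliases": ["NYC", "New York"]},
    "Q90": {"label": "Paris", "aliases": ["City of Light"]},
}

def normalise(s):
    return " ".join(s.lower().split())

def reconcile(cell):
    """Return the id of the entity whose label or alias matches the cell."""
    target = normalise(cell)
    for entity_id, entity in KG.items():
        names = [entity["label"], *entity["aliases"]]
        if any(normalise(n) == target for n in names):
            return entity_id
    return None  # unmatched cells would go to a fallback, e.g. fuzzy matching

column = ["NYC", "paris", "Atlantis"]
print([reconcile(c) for c in column])  # ['Q60', 'Q90', None]
```

Exact matching like this is fast but brittle; the scalability and adaptability challenges mentioned in the abstract arise precisely when the matcher must generalise beyond it.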
- Week 2, 29-01-2020 (C321; 15:00): Worktribe session, by Claudia Kalay (Research & Enterprise Office) (Core members).
- Week 1, 22-01-2020 (ELG14; 16:10): CitAI funding strategy 2020 (Core members).
Term 1 2019-20
- Week 11, 04-12-2019 (E214): EIT and other R&E opportunities, by Brigita Jurisic (Research & Enterprise Office) (Core members).
- Week 10, 27-11-2019 (AG04): Lecture on Deep Learning II, by Alex Ter-Sarkisov (Core members).
- Week 9, 20-11-2019 (A227): Seminar, by Kizito Salako, Sarah Scott, Johann Bauer and Nathan Olliverre.
- Week 8, 13-11-2019 (E214): Lecture on Deep Learning I, by Alex Ter-Sarkisov (Core members).
- Week 7, 06-11-2019 (AG11): Seminar, by Fatima Najibi, Tom Chen, Alex Ter-Sarkisov, and Atif Riaz.
- Week 5, 23-10-2019 (ELG14): Knowledge Transfer Partnerships (KTPs), by Ian Gibbs (Research & Enterprise Office) (Core members).
- Week 4, 16-10-2019 (E214): Webpage development session, by Esther Mondragón (Core members).
- Week 3, 09-10-2019 (A227): Seminar, by Ernesto Jiménez-Ruiz, Michael Garcia Ortiz, Mark Broom, Laure Daviaud and Essy Mulwa.
- Week 2, 02-10-2019 (A227): Seminar, by Ed Alonso, Esther Mondragón, Constantino Carlos Reyes-Aldasoro and Giacomo Tarroni.
- Week 1, 25-09-2019 (AG02): UKRI Funding procedures and opportunities, by Peter Aggar (Research & Enterprise Office) (Core members).