
Research topics

Out-of-distribution generalisation in artificial agents

Generalisation from previous experiences is pervasive in the natural world; it is a given rather than a target. By generalising learning across domains, animals and humans transfer knowledge and behaviour between comparable conditions and adapt to their environments efficiently. Despite advances in AI, artificial agents still largely lack this ability.

Generalisation is thought to be driven by environmental commonalities: shared cues bridge the information acquired in one situation to another. In simple scenarios, sensory attributes alone carry the necessary cues (stimulus generalisation). However, sensory features may not suffice and can lead to inadequate or dysfunctional use of information. Consequently, extracting complex, human-pertinent information and relational patterns capable of carrying structured resemblance across environments is cardinal to achieving real AGI. Effective adaptation demands more than the passive reuse of data; it calls for higher-level representations that abstract conceptual, common-sense knowledge, upon which agents can reuse information and generate creative solutions.
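As a minimal illustration of stimulus generalisation, the sketch below assumes a Shepard-style exponential gradient: a response trained to one stimulus transfers to a probe stimulus in proportion to their proximity in sensory feature space. The feature encodings and the decay constant `lam` are hypothetical, and the point is precisely the limitation noted above: purely sensory distance is blind to relational structure.

```python
import numpy as np

def generalisation(trained, probe, lam=1.0):
    """Shepard-style stimulus generalisation: response transfer decays
    exponentially with distance in the sensory feature space."""
    distance = np.linalg.norm(np.asarray(trained) - np.asarray(probe))
    return np.exp(-distance / lam)

# Hypothetical sensory encodings: [hue, size]
trained_stimulus = [0.9, 0.5]   # stimulus paired with reward
similar_probe    = [0.8, 0.5]   # sensorially close -> strong transfer
distant_probe    = [0.1, 0.5]   # sensorially far  -> weak transfer

print(generalisation(trained_stimulus, similar_probe))   # ~0.90
print(generalisation(trained_stimulus, distant_probe))   # ~0.45
```

Two stimuli that are close in this feature space but play different relational roles would be treated alike, which is why sensory gradients alone fall short of the structured generalisation described above.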

We are interested in developing computational models that would allow artificial agents to build such knowledge representations.


Featured publication:

Cătărău-Cotuțiu, C., Mondragón, E. and Alonso, E. (2022). AIGenC: AI Generalisation via Creativity. arXiv preprint arXiv:2205.09738.

Fully connected associative learning networks

We are interested in investigating fundamental principles of associative learning and how these may be expressed computationally. We are particularly keen on those phenomena that so far seem to resist an associative analysis, and on the conceptual and formal modifications of the theory that would allow it to accommodate them.

With this aim in mind, we develop fully connected architectures as general computational models of associative learning, such as the Double Error Dynamic Asymptote (DDA) model, which explicitly incorporates into a Pavlovian structure interactions between so-called neutral stimuli, whether physically present or associatively retrieved from memory. We envisage that deep learning architectures may be instrumental in building higher-order cognitive structures, given their capacity to learn representations as hierarchies of abstractions, and to do so in a serial manner.
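The DDA model itself is considerably richer, but as a minimal sketch of the error-correction core that fully connected architectures build on, the following applies a plain Rescorla-Wagner-style update symmetrically, so that every present stimulus predicts every other and associations between neutral stimuli are learned too. The parameters and trial structure are illustrative, not those of the published model.

```python
import numpy as np

# Minimal fully connected associative network: every stimulus serves as
# both predictor and outcome, so associations form between "neutral"
# stimuli as well as between stimuli and reinforcers. This is a plain
# error-correction core, not the full DDA model (which adds a second
# error term and a dynamic asymptote).

n = 3                       # stimuli: A, B, and the unconditioned stimulus (US)
W = np.zeros((n, n))        # W[i, j]: strength of the i -> j association
alpha, beta = 0.3, 0.9      # illustrative salience / learning-rate parameters

def trial(present):
    """One trial: each stimulus is predicted by all the others, and the
    prediction error drives learning on the active connections."""
    x = np.asarray(present, dtype=float)
    for j in range(n):
        prediction = sum(x[i] * W[i, j] for i in range(n) if i != j)
        error = x[j] - prediction            # asymptote = 1 when j is present
        for i in range(n):
            if i != j:
                W[i, j] += alpha * beta * x[i] * error

# Sensory preconditioning: pair A with B (both neutral), then B with the US.
for _ in range(50):
    trial([1, 1, 0])        # phase 1: A-B pairings
for _ in range(50):
    trial([0, 1, 1])        # phase 2: B-US pairings

print(W[0, 1] * W[1, 2])    # mediated A -> US strength, ~1.0
```

Running the two phases reproduces, in toy form, sensory preconditioning: A acquires behavioural significance through the chained A -> B -> US associations even though A was never paired with the US.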


Featured publications:

Kokkola, N., Mondragón, E. and Alonso, E. (2019). A Double Error Dynamic Asymptote Model of Associative Learning. Psychological Review, 126(4), pp. 506–549. doi: 10.1037/rev0000147.

Mondragón, E., Alonso, E. and Kokkola, N. (2017). Associative Learning Should Go Deep. Trends in Cognitive Sciences, 21(11), pp. 822–825. doi: 10.1016/j.tics.2017.06.001.

Variational principles for Artificial General Intelligence

Whether animals behave optimally is an open question of great importance, both theoretically and in practice. Attempts to answer it focus on two aspects of the optimisation problem: the quantity to be optimised and the optimisation process itself. Taking the abstract concept of cost as the quantity to be minimised, we have proposed a reinforcement learning algorithm, Value-Gradient Learning (VGL), that is guaranteed to converge to optimality under certain conditions. The core of the proof is the mathematical equivalence of VGL and Pontryagin's Minimum Principle, a well-known optimisation technique in systems and control theory. Given the similarity between VGL's formulation and regulatory models of behaviour, we argue that the algorithm may provide AI with a variational technique in pursuit of Artificial General Intelligence.
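A minimal sketch of the idea, under strong simplifying assumptions (1-D linear dynamics, quadratic cost, a fixed linear policy, and a linear critic; the cited paper develops VGL in full generality): the critic fits the value gradient dV/dx directly, and its bootstrapped target, pulled back through the model Jacobian, is the discrete-time analogue of Pontryagin's costate recursion.

```python
import numpy as np

# Value-gradient learning sketch: rather than fitting state values V(x),
# fit their gradient G(x) ~ dV/dx. The dynamics (a, b), policy gain k,
# cost weights, and linear critic G(x) = w * x are illustrative choices.

a, b, k, gamma = 0.9, 0.5, 0.4, 0.95   # x' = a*x + b*u,  policy u = -k*x
c = a - b * k                          # closed-loop dynamics x' = c*x
w, lr = 0.0, 0.1                       # critic weight and learning rate

rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)         # sample a state
    x_next = c * x                     # one model-based step
    # Costate/VGL target: gradient of reward r = -(x^2 + u^2) along the
    # policy, plus the discounted next gradient pulled back by dx'/dx = c.
    grad_r = -2.0 * x - 2.0 * k**2 * x
    target = grad_r + gamma * c * (w * x_next)
    w += lr * (target - w * x) * x     # step on the squared gradient error

# Analytic fixed point of the costate recursion for this policy:
print(w, -2.0 * (1 + k**2) / (1 - gamma * c**2))
```

The learned weight converges to the analytic costate solution, which is the equivalence the convergence proof exploits.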


Featured publication:

Alonso, E., Fairbank, M. and Mondragón, E. (2015). Back to Optimality: A Formal Framework to Express the Dynamics of Learning Optimal Behavior. Adaptive Behavior, 23(4), pp. 206–215. doi: 10.1177/1059712315589355.