15/07/2022, 1:30 pm (GMT+2/Warsaw time) – please note that we’re starting earlier than usual!
Hybrid meeting (via Zoom): click here to join the meeting
If you want to join the meeting in person, please contact the organizers.
Facebook event: Link
Although the grounded approach to cognition has marshalled substantial evidence in its support, it remains unclear how concepts form when both sensory and motor patterns are initially meaningless. Here we propose a computationally specified process model in which concepts form through learning to perceive and act. We focus on the formation of multimodal representations of objects while learning to manipulate them. In this model, sensory patterns from each of several sensory modalities (vision, touch, proprioception) are mapped onto a lower-dimensional, two-dimensional space of internal representations that is common to all sensory modalities. A topological mapping between the sensory space and this internal two-dimensional space is acquired. The agent’s actions are represented in the same space via a topological mapping of their own. Because these learning processes can mutually constrain each other, the sensory and motor mappings align in the internal space, so that each point conveys a multimodal sensorimotor representation. Simulation results show that this learning can be achieved by maximising the convergence between the sensory and motor representations activated in the internal space by a single event. We show that this convergence can be interpreted as a measure of competence for a goal, which thus acts as an intrinsic motivation for competence acquisition.
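The abstract does not specify the learning algorithm, but the "topological mapping" from a high-dimensional sensory space onto a two-dimensional internal space is reminiscent of a self-organizing map. As a minimal sketch under that assumption (the function names `train_som` and `project` are hypothetical, not from the talk), the following maps input patterns onto a 2-D grid while preserving neighbourhood structure:

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal self-organizing map: a topological mapping from
    high-dimensional input patterns onto a 2-D grid of units."""
    rng = np.random.default_rng(seed)
    h, w = grid_shape
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates of each unit, used to compute neighbourhoods.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the grid cell whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (h, w))
        # Learning rate and neighbourhood radius decay over training.
        frac = t / n_iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        # Pull the BMU and its grid neighbours toward the input pattern.
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        nbh = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
        weights += lr * nbh * (x - weights)
    return weights

def project(weights, x):
    """Map an input pattern to its best-matching unit on the 2-D grid."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)
```

In the model described above, one such mapping would be learned per sensory modality (plus one for actions) into a shared grid, with the alignment between them driven by maximising sensorimotor convergence; this sketch shows only the single-modality topological mapping.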
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 952324.