The way we humans perceive the external world appears to us as a natural, immediate and effortless task, yet it relies on a number of “low-level” sensory-motor processes that underpin our perceptual skills. For example, while walking through an indoor environment, we memorize relevant objects and environment configurations that will later be used for planning future actions. The goal of the PACOM project is to implement some of these capabilities in a robotic system in order to participate in the ANR “CArtographie par RoboT d'un TErritoire” competition (défi CAROTTE). The project therefore addresses how an autonomous embodied system can build and extract such information from sensory and sensory-motor data and generate plans and actions to explore and navigate typical indoor environments.
We will develop a robot that can achieve autonomous semantic 3D mapping of a large unknown area. This objective will be achieved by integrating two major sub-systems. The first sub-system will provide 3D simultaneous localization and mapping (SLAM) relying on multiple laser sensors and panoramic vision. The map will be based on a hierarchical representation that integrates a textured 3D structure at the lowest level and higher-level semantic concepts such as rooms, corridors and objects detected in the environment. The second sub-system will provide visual object search using a panoramic camera and an active camera. It will combine a panoramic visual saliency computation, used to rapidly select areas where objects might be present, with a higher-resolution appearance-based object detection and recognition method using a pan-tilt-zoom camera directed toward the salient regions, in order to detect objects robustly. Mapping and object detection will be conducted autonomously through a multi-objective exploration strategy that balances the need to map the 3D structure against the need to search for the objects and semantic elements present in the environment.
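The multi-objective exploration strategy can be pictured as scored frontier selection: each candidate viewpoint is ranked by a combination of expected mapping gain and expected object-search gain. The sketch below is purely illustrative; the class and function names, the weights, and the linear combination are assumptions for exposition, not the project's actual method.

```python
# Illustrative sketch of a multi-objective exploration score: candidate
# viewpoints on the frontier are ranked by a weighted sum of expected
# mapping gain and expected object-search gain, minus travel cost.
# All names and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate viewpoint on the exploration frontier."""
    x: float
    y: float
    unmapped_cells: int   # unknown grid cells visible from here (mapping gain)
    saliency: float       # panoramic saliency of the area, in [0, 1]
    travel_cost: float    # path length from the robot's current pose

def score(c: Candidate, w_map: float = 1.0, w_obj: float = 1.0,
          w_cost: float = 0.1) -> float:
    """Trade off mapping the 3D structure against searching for objects."""
    return (w_map * c.unmapped_cells
            + w_obj * c.saliency * 100.0
            - w_cost * c.travel_cost)

def select_next_goal(candidates: list[Candidate]) -> Candidate:
    """Greedy selection of the best viewpoint under the combined objective."""
    return max(candidates, key=score)

if __name__ == "__main__":
    frontier = [
        Candidate(2.0, 1.0, unmapped_cells=40, saliency=0.1, travel_cost=3.0),
        Candidate(5.0, 4.0, unmapped_cells=10, saliency=0.9, travel_cost=6.0),
    ]
    best = select_next_goal(frontier)
    print(best.x, best.y)  # the highly salient viewpoint wins here
```

In a real system the weights would be tuned (or the objectives handled by a proper multi-objective planner), but this shows how a single utility can arbitrate between completing the map and inspecting salient regions.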
This fundamental research project will be conducted by three partners specialized in navigation for mobile robots (ENSTA ParisTech), panoramic vision and computer vision (ISIR), and software integration for mobile robotics (Gostai). The Cognitive Robotics team of ENSTA ParisTech is the leader of this project.