
Localisation par vision monoculaire pour la navigation autonome

Localization with monocular vision for autonomous navigation

Eric Royer, Maxime Lhuillier, Michel Dhome, Jean-Marc Lavest

LASMEA, UMR 6602, CNRS and Université Blaise Pascal, 24 avenue des Landais, 63177 Aubière, France

Corresponding author: Eric.ROYER@lasmea.univ-bpclermont.fr
Pages: 95-107 | Received: 13 July 2005 | Published: 28 February 2006

OPEN ACCESS

Abstract: 

We present a method for computing the localization of a mobile robot with reference to a learning video sequence. The robot is first guided on a path by a human, while the camera records a monocular learning sequence. A 3D reconstruction of the path and the environment is then computed offline from the learning sequence. This 3D reconstruction is used to compute the pose of the robot in real time during autonomous navigation. Results from our method are compared to the ground truth measured with a differential GPS or a rotating platform.
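The real-time step amounts to estimating the camera pose from matches between 2D image features and 3D points of the reconstruction. The sketch below is not the authors' solver (their reference list points to minimal three-point pose algorithms [9] used with RANSAC [8]); it is a minimal linear alternative, camera resectioning by DLT with NumPy, assuming noise-free 2D-3D correspondences and a known calibration matrix K.

```python
import numpy as np

def dlt_pose(X3d, x2d, K):
    """Estimate camera rotation R and translation t from n >= 6
    noise-free 2D-3D correspondences by linear resectioning (DLT).
    X3d: (n, 3) world points, x2d: (n, 2) pixel coords, K: 3x3 intrinsics."""
    A = []
    for Xw, (u, v) in zip(X3d, x2d):
        Xh = np.append(Xw, 1.0)                      # homogeneous world point
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # The projection matrix P (up to scale) is the null vector of A.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    Rt = np.linalg.inv(K) @ P                        # = lambda * [R | t]
    U, S, Vt2 = np.linalg.svd(Rt[:, :3])
    R = U @ Vt2                                      # nearest rotation matrix
    scale = S.mean()
    if np.linalg.det(R) < 0:                         # fix the sign of lambda
        R, scale = -R, -scale
    t = Rt[:, 3] / scale
    return R, t
```

In practice the correspondences contain outliers, so a solver of this kind would be wrapped in a RANSAC loop [8] and refined by non-linear minimization of the reprojection error.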

Résumé

We present a method for determining the localization of a mobile robot with respect to a learning video sequence. The robot is first driven manually along a trajectory while a reference video sequence is recorded by a single camera. An offline computation then yields a 3D reconstruction of the path followed and of the environment. This reconstruction is then used to compute the pose of the robot in real time during an autonomous navigation phase. The results obtained are compared to the ground truth measured with a differential GPS or a graduated rotating platform.
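The offline reconstruction relies on triangulating scene points from matched features in the images of the learning sequence. As an illustrative sketch only (two views, linear method, not the paper's full pipeline), a scene point can be triangulated from its projections in two calibrated images:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point whose projections
    through the 3x4 camera matrices P1 and P2 are the pixels x1 and x2.
    Each pixel gives two equations of the form x * (P[2] @ X) - P[i] @ X = 0."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector = homogeneous 3D point
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]
```

In a full reconstruction these initial estimates would be refined jointly with the camera poses by bundle adjustment [11].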

Keywords: 

Localization, real-time, mobile robot, 3D reconstruction


1. Introduction
2. 3D Reconstruction of the Reference Sequence
3. Real-Time Localization
4. Results
5. Conclusion
References

[1] H. ARAÚJO, R.J. CARCERONI and C.M. BROWN, A fully projective formulation to improve the accuracy of Lowe’s pose estimation algorithm. Computer Vision and Image Understanding, 70(2):227-238, 1998.

[2] T. BAILEY, Constrained initialisation for bearing-only SLAM. In International Conference on Robotics and Automation, 2003.

[3] P. BEARDSLEY, P. TORR and A. ZISSERMAN, 3D model acquisition from extended image sequences. In European Conference on Computer Vision, pp. 683-695, 1996.

[4] G. BLANC, Y. MEZOUAR and P. MARTINET, Indoor navigation of a wheeled mobile robot along visual routes. In IEEE International Conference on Robotics and Automation, ICRA'05, Barcelona, Spain, 18-22 April 2005.

[5] D. COBZAS, H. ZHANG and M. JAGERSAND, Image-based localization with depth-enhanced image map. In International Conference on Robotics and Automation, 2003.

[6] A.J. DAVISON, Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the 9th International Conference on Computer Vision, Nice, 2003.

[7] O. FAUGERAS and M. HÉBERT, The representation, recognition, and locating of 3-D objects. International Journal of Robotic Research, 5(3):27-52, 1986.

[8] M.A. FISCHLER and R.C. BOLLES, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the Association for Computing Machinery, 24:381-395, 1981.

[9] R. HARALICK, C. LEE, K. OTTENBERG and M. NOLLE, Review and analysis of solutions of the three point perspective pose estimation problem. International Journal of Computer Vision, 13(3):331-356, 1994.

[10] C. HARRIS and M. STEPHENS, A combined corner and edge detector. In Alvey Vision Conference, pp. 147-151, 1988.

[11] R. HARTLEY and A. ZISSERMAN, Multiple view geometry in computer vision. Cambridge University Press, 2000.

[12] K. KIDONO, J. MIURA and Y. SHIRAI, Autonomous visual navigation of a mobile robot using a human-guided experience. Robotics and Autonomous Systems, 40(2-3):124-133, 2002.

[13] J.M. LAVEST, M. VIALA and M. DHOME, Do we need an accurate calibration pattern to achieve a reliable camera calibration? In European Conference on Computer Vision, pp. 158-174, 1998.

[14] T. LEMAIRE, S. LACROIX and J. SOLÀ, A practical 3D bearing-only SLAM algorithm. In International Conference on Intelligent Robots and Systems, pp. 2757-2762, 2005.

[15] Y. MATSUMOTO, M. INABA and H. INOUE, Visual navigation using view-sequenced route representation. In International Conference on Robotics and Automation, pp. 83-88, 1996.

[16] D. NISTÉR, An efficient solution to the five-point relative pose problem. In Conference on Computer Vision and Pattern Recognition, pp. 147-151, 2003.

[17] D. NISTÉR, O. NARODITSKY and J. BERGEN, Visual odometry. In Conference on Computer Vision and Pattern Recognition, pp. 652-659, 2004.

[18] M. POLLEFEYS, R. KOCH and L. VAN GOOL, Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters. In International Conference on Computer Vision, pp. 90-95, 1998.

[19] A. REMAZEILLES, F. CHAUMETTE and P. GROS, Robot motion control from a visual memory. In International Conference on Robotics and Automation, volume 4, pp. 4695-4700, 2004.

[20] E. ROYER, J. BOM, M. DHOME, B. THUILOT, M. LHUILLIER and F. MARMOITON, Outdoor autonomous navigation using monocular vision. In International Conference on Intelligent Robots and Systems, pp. 3395-3400, 2005.

[21] S. SE, D. LOWE and J. LITTLE, Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks. International Journal of Robotic Research, 21(8):735-760, 2002.

[22] B. THUILOT, J. BOM, F. MARMOITON and P. MARTINET, Accurate automatic guidance of an urban vehicle relying on a kinematic GPS sensor. In Symposium on Intelligent Autonomous Vehicles, IAV04, 2004.

[23] L. VACCHETTI, V. LEPETIT and P. FUA, Stable 3-D tracking in real-time using integrated context information. In Conference on Computer Vision and Pattern Recognition, Madison, WI, June 2003.