ABSTRACT
Architectural heritage is a historic and artistic property that must be protected, preserved, restored, and shown to the public. Modern tools such as 3D laser scanners are increasingly used in heritage documentation. These systems allow fast generation of accurate point clouds of historical monuments. From these data, we can create a polygon mesh that defines the shape of the edifice for virtual visualization. Most of the time, the 3D laser scanner is complemented by a digital camera used to colorize the point cloud. However, the photometric quality of point clouds is generally rather low, mainly due to color and resolution problems. Intensity uniformization methods exist to improve the colorimetry, but they neither yield a photo-realistic rendering nor globally improve the resolution. This paper therefore proposes a solution for colorizing point clouds using high-resolution digital images acquired with a camera from any viewpoint. To this end, we have developed a new, accurate method for registering the photographs on the point cloud, which is a crucial step for good colorization by color projection. Results on datasets of the cathedral of Amiens in France, acquired both inside and outside, highlight the success of our approach, leading to point clouds with better photometric quality and resolution.
Extended Abstract
Architectural heritage contains important remnants of our past: it is a historic and artistic property that must be protected, preserved, restored, and shown to the public. Cultural heritage includes monuments, works of art, and archaeological landscapes. These sites are constantly exposed to environmental conditions and human deterioration, and suffer the effects of their age. It is in this spirit that E-Cathedrale, a research and development program, aims to construct a complete digital model of the Amiens Cathedral in France.
Modern tools such as 3D laser scanners are increasingly used in heritage documentation. These systems can scan the spatial structure of such sites quickly and accurately. The scan result is a dense, panoramic 3D point cloud of the scene. These measurements may be useful for future restoration, analysis, preservation, reconstruction, or virtual display. From these data, we can create a polygon mesh that defines the shape of the scene for its virtual visualization. Most of the time, the 3D laser scanner is complemented by a digital camera used to enrich the geometric information with the color of the scanned objects. However, the photometric quality of point clouds is generally rather low because of two main problems. The first problem is due to the resolution of the digital camera used for point colorization. When the point sampling rate exceeds the digital camera resolution, a 3D point and its nearest neighbors are colorized with the same RGB value. This phenomenon creates a blur effect on the colored point clouds, which are then not representative of the real appearance of the object. The second problem comes from the fact that, to cover a scene entirely, the laser scanner must be placed at different positions, yielding one point cloud per scanner position. Despite fast acquisitions, these point clouds are captured at different times of the day, and thus under different sun exposures. As a result, two neighboring points from two separate clouds may have completely different colors.
Intensity uniformization methods exist to improve the colorimetry, but they neither yield a photo-realistic rendering nor globally improve the resolution. This paper therefore proposes a solution for colorizing point clouds using high-resolution digital images acquired with a camera from any viewpoint. To this end, we have developed a new, accurate method for registering the photographs on the point cloud, which is a crucial step for good colorization by color projection.
First, the point cloud colors from the different scans are homogenized. This makes it possible to detect and match point features between a virtual image of the 3D scene and a digital image. A camera pose is computed from these sparse sets of corresponding points to register the images with the 3D point cloud. This initial camera pose is then refined: the pose optimization problem is formulated as the minimization of the difference between the digital image and a virtual one rendered from the current pose estimate. Finally, we obtain the pose of the real camera that took the digital image. We can then project the visible points of the cloud into the digital image frame and assign them new colors. Results on datasets of the cathedral of Amiens highlight the success of our approach, leading to point clouds with better photometric quality and resolution.
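Once the camera pose is known, the final color-projection step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an undistorted pinhole camera with intrinsic matrix `K` and a pose `(R, t)` mapping world coordinates to camera coordinates, and it omits the visibility test (occluded points would have to be filtered out beforehand, as the restriction to "visible points" above implies).

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project Nx3 world points into the image plane of a pinhole camera."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    in_front = cam[:, 2] > 0           # keep only points in front of the camera
    pix = cam[in_front] @ K.T          # apply the intrinsic matrix
    pix = pix[:, :2] / pix[:, 2:3]     # perspective division -> pixel coordinates
    return pix, in_front

def colorize(points_3d, image, R, t, K):
    """Assign each projectable 3D point the RGB value of the pixel it falls on.

    Points behind the camera or outside the image keep a default black color.
    NOTE: no occlusion handling; a real pipeline must first select visible points.
    """
    h, w = image.shape[:2]
    pix, in_front = project_points(points_3d, R, t, K)
    u = np.round(pix[:, 0]).astype(int)   # column index
    v = np.round(pix[:, 1]).astype(int)   # row index
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points_3d.shape[0], 3), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[inside]  # original indices of colorizable points
    colors[idx] = image[v[inside], u[inside]]
    return colors
```

With an accurately refined pose, this nearest-pixel lookup is what transfers the high-resolution image colors onto the cloud; in practice, bilinear interpolation of the four surrounding pixels gives smoother results than rounding.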
KEYWORDS
point clouds, colorization, visual and virtual optimization.