The Situated Vision: a Concept to Facilitate the Autonomy of the Systems

La Vision Située: un Concept pour Faciliter L’Autonomie des Systèmes

Bertrand Zavidovique, Roger Reynaud

IEF, Bât. 220, DIGITEO-labs, Université Paris-Sud, F-91405 Orsay cedex

Pages: 309-322 | Received: 8 July 2006 | Accepted: N/A | Published: 30 October 2007

OPEN ACCESS

Abstract: 

This paper is about vision as a means of making systems more autonomous. We parallel two aspects of the current evolution of system perception: more ambitious yet coherent tasks rely tightly on more abstract description and control. As vision is likely to be a major and complex sensory modality for machines, as it is for most animals, we concentrate our development on it. In the first part we show how thinking in terms of systems has helped to pose vision problems better and solve them in a useful manner. That is the "active vision" trend, which we explain and illustrate. Along the same line, the necessity for anticipation appears, leading to a first definition of "situated vision". The second part deals with how to design systems able to achieve such vision. We show from a few examples how architectural descriptions evolve and better fit the important features to grasp, that is, a model, in view of more efficient control towards intelligence. Inner communication flows are better controlled than local tasks, which should be assumed to complete efficiently enough in all cases. We conclude with a plausible sketch of a system to be experimented on in situations that require some autonomy.
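The "active vision" trend mentioned above amounts to choosing the next sensing action in view of the task instead of passively processing whatever image arrives. As a purely illustrative sketch (the hypotheses, gaze directions, and observation model below are invented, not taken from the paper), one greedy next-best-view step can select the gaze that most reduces expected uncertainty about the scene:

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief over hypotheses."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def expected_entropy(belief, view, sensor):
    """Expected posterior entropy after sensing `view`.
    sensor[view][obs][h] = P(obs | h, view) -- a toy observation model."""
    total = 0.0
    for per_h in sensor[view].values():
        p_obs = sum(per_h[h] * belief[h] for h in belief)  # predictive prob.
        if p_obs > 0:
            posterior = {h: per_h[h] * belief[h] / p_obs for h in belief}
            total += p_obs * entropy(posterior)
    return total

def next_best_view(belief, sensor):
    """Active-vision step: pick the view that most reduces expected uncertainty."""
    return min(sensor, key=lambda v: expected_entropy(belief, v, sensor))

# Toy scenario: gazing "left" discriminates hypotheses A and B, "right" does not.
sensor = {
    "left":  {"hit": {"A": 0.9, "B": 0.1}, "miss": {"A": 0.1, "B": 0.9}},
    "right": {"hit": {"A": 0.5, "B": 0.5}, "miss": {"A": 0.5, "B": 0.5}},
}
belief = {"A": 0.5, "B": 0.5}
```

Here `next_best_view(belief, sensor)` returns `"left"`, the informative gaze: sensing is driven by the decision it must serve rather than by the incoming data flow.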

Résumé

This article presents our approach to the techniques by which vision should contribute to a system's autonomy. It draws a parallel between two aspects of the evolution of perception systems. The first part explains how a system-level view of vision problems (termed active vision) helps solve them efficiently. The necessity of anticipation appears there, leading to a first definition of situated vision and to a catalogue of situations that constitutes the functional description of the exo-system. The second part shows how the architectural descriptions of fusion systems adapt to the features to be extracted, and thus how a model progressively emerges in the service of opportunistic control towards more intelligence (in the sense of the ability to jointly exploit information from several modalities). We believe that most short- and medium-term developments will come from advances around this dual approach and the concept of "autonomous systems", in which the internal communication flows then appear more useful to control than the local tasks, assumed to be carried out satisfactorily. A flow-control system scheme is finally proposed to implement, at the highest level of the model, a controller of switches between different situations. Aiming at more ambitious tasks rests on more abstract descriptions and control.
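The flow-control scheme sketched in the résumé, a top-level controller that switches between situations and thereby decides which internal communication flows are enabled, can be illustrated as follows. The situation names, events, and flows below are hypothetical examples, not from the paper:

```python
# Hypothetical catalogue of situations: each entry lists the internal
# data flows it enables and the events that trigger a switch elsewhere.
SITUATIONS = {
    "explore": {"flows": {"camera->mapper"},
                "on": {"target_seen": "track"}},
    "track":   {"flows": {"camera->tracker", "tracker->gaze"},
                "on": {"target_lost": "explore", "obstacle": "avoid"}},
    "avoid":   {"flows": {"range->planner"},
                "on": {"path_clear": "track"}},
}

class SituationController:
    """Top-level controller: switches situations, hence which flows run.
    Local tasks (mapper, tracker, planner) are assumed to do their job;
    only the routing of data between them is supervised."""

    def __init__(self, start="explore"):
        self.situation = start

    @property
    def active_flows(self):
        return SITUATIONS[self.situation]["flows"]

    def on_event(self, event):
        """Switch to another situation if `event` is relevant here."""
        self.situation = SITUATIONS[self.situation]["on"].get(event, self.situation)
        return self.situation
```

In this sketch the controller never inspects images or tracks; it only commutes situations on events (e.g. `on_event("target_seen")` moves from "explore" to "track" and reroutes the flows), which matches the idea that communication flows, not local tasks, are what the highest level controls.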

Keywords: 

Active vision, autonomous systems, opportunist control, fusion architecture.

Mots clés

Vision active, système autonome, contrôle opportuniste, architecture de fusion.

1. Autonomous Systems: the Orchestra Metaphor
2. Accessible Functionalities and Definition of the Exo-System
3. Design of So-Called Intelligent Systems
4. A Model for Autonomous Vision Systems
5. Conclusion
  References

[1] B. MYSLIWETZ, E.D. DICKMANNS, Recursive 3D road and relative ego-state recognition, IEEE Trans. PAMI, spec. issue on Interpretation of 3D scenes, Feb. 1992.

[2] E.D. DICKMANNS, V. GRAEFE, Dynamic monocular machine vision and applications, Jour. of Machine Vision and Applications, Springer Int., Nov. 1988, pp. 223-261.

[3] E.D. DICKMANNS, 4-D dynamic vision for intelligent motion control, Int. Jour. for Engineering Applications of A.I., spec. issue on Autonomous Intelligent Vehicles, C. Harris (ed.), 1991.

[4] E.D. DICKMANNS, Expectation-based dynamic scene understanding, Active Vision, MIT Press, 1992, pp. 303-335.

[5] E.D. DICKMANNS, B. MYSLIWETZ, and T. CHRISTIANS, An integrated spatio-temporal approach to automatic visual guidance of autonomous vehicles, IEEE Transactions on Systems, Man and Cybernetics, 20(6), Nov. 1990, pp. 1273-1284.

[6] M. HEBERT, 3-D Landmark Recognition from Range Images, IEEE Conference on Computer Vision and Pattern Recognition, Champaign, June 1992, pp. 360-365. 

[7] I.S. KWEON, T. KANADE. High Resolution Terrain Map from Multiple Data, IEEE International Workshop on Intelligent Robots and Systems, Tsuchiura, Jul. 1990. 

[8] I. S. KWEON, Modeling Rugged Terrain by Mobile Robots with Multiple Sensors, Ph.D. thesis, Robotics Institute, Carnegie Mellon University, Feb. 1991.  

[9] M. DAILY, J. HARRIS, D. KEIRSEY, K. OLIN, D. PAYTON, K. REISER, J. ROSENBLATT, D. TSENG, V. WONG, Autonomous Cross-Country Navigation with the ALV, IEEE Int. Conf. on Robotics and Automation, Philadelphia, pp. 718-726, Apr. 1988.

[10] R.A. BROOKS, R. GREINER, T.O. BINFORD, The ACRONYM model-based vision system, 6th International Joint Conference on Artificial Intelligence, Tokyo, Aug. 1979, pp. 105-113.

[11] R.A. BROOKS, Symbolic Reasoning among 3D Models and 2D Images, Artificial Intelligence, vol. 17, pp. 285-348, 1981.

[12] T.O. BINFORD, Visual perception by a computer, IEEE Conf. on Systems and Control, Miami, Dec. 1971.

[13] R.A. BROOKS, T.O. BINFORD, Representing and reasoning about specified scenes, Proc. DARPA IU Workshop, Apr. 1980, pp. 95-103.

[14] G. GIRALT, R. CHATILA, R. ALAMI, Remote Intervention, Robot Autonomy, and Teleprogramming: Generic Concepts and Real-World Application Cases, IEEE Int. Workshop on Intelligent Robots and Systems, Yokohama (Japan), July 1993, pp. 314-320.

[15] G. GIRALT, L. BOISSIER, The French Planetary Rover Vap: Concept and Current Developments, IEEE International Workshop on Intelligent Robots and Systems, Raleigh, Jul. 1992, pp. 1391-1398.

[16] R. CHATILA, L’expérience du robot mobile HILAIRE, Tech. Rep. 3188, Laboratoire d’Automatique et d’Analyse des Systèmes, Toulouse, 1984.  

[17] P. FILLATREAU, M. DEVY, Localization of an Autonomous Mobile Robot from 3D Depth Images using Heterogeneous Features, IEEE International Workshop on Intelligent Robots and Systems, Yokohama (Japan), Jul. 1993.

[18] P. GRANDJEAN, A. ROBERT DE SAINT-VINCENT, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, IEEE International Conference on Robotics and Automation, Scottsdale, May 1989, pp. 681-687.

[19] M. DEVY, J. COLLY, P. GRANDJEAN, T. BARON, Environment Modelling from a Laser/Camera Multisensor System, IARP 2nd Workshop on Multi-Sensor Fusion and Environment Modelling, Oxford (U.K.), Sept. 1991.

[20] F. NASHASHIBI, Ph. FILLATREAU, B. DACRE-WRIGHT, T. SIMEON. 3D Autonomous Navigation in a Natural Environment, IEEE International Conference on Robotics and Automation, San Diego, May 1994.  

[21] P. MOUTARLIER, P. GRANDJEAN, R. CHATILA, Multisensory Data Fusion for Mobile Robot Location and 3D Modeling, IARP 1st Workshop on Multi-Sensor Fusion and Environment Modeling, Toulouse (France), Oct. 1989.  

[22] S. LACROIX, P. FILLATREAU, F. NASHASHIBI, R. CHATILA, M. DEVY, Perception for Autonomous Navigation in a Natural Environment, Workshop on Computer Vision for Space Applications, Antibes, Sept. 1993. 

[23] B. ZAVIDOVIQUE, First Steps of Robotic Perception: The Turning Point of the 1990s, Proceedings of the IEEE, vol. 90, no. 7, 2002.

[24] J. ALOIMONOS, I. WEISS, A. BANDOPADHAY, Active vision, International Journal of Computer Vision, 1(4), pp. 333-356, 1987.

[25] R. BAJCSY, Active perception vs. passive perception, Proceedings of the IEEE, 1985, pp. 55-59.

[26] V. ANANTHARAM, P. VARAIYA, Optimal strategy for a conflict resolution problem, System and Control Letters, 1986. 

[27] B. ZAVIDOVIQUE, A. LANUSSE, P. GARDA, Robot perception systems: some design issues, NATO Adv. Res. Work. MARATEA, A.K. Jain Ed., Springer Verlag, Aug. 1987.

[28] X. MERLO, Techniques probabilistes d’intégration et de contrôle de la perception en vue de son exploitation par le système de décision d’un robot, PhD thesis, Institut National Polytechnique de Lorraine, France, 1988.

[29] Y. ALOIMONOS, Visual shape computation, Proc. of the IEEE, 76(8), Jan. 1988, pp. 899-916.

[30] C. FERMÜLLER, Y. ALOIMONOS, Vision and action, Image and Vision Computing, 13(10), 1993, pp. 725-744.

[31] P. BOUKIR, Reconstruction 3D d’un environnement statique par vision active, PhD. thesis, Oct. 1993. 

[32] F. CHAUMETTE, S. BOUKIR, P. BOUTHEMY, D. JUVIN, Structure from controlled motion, IEEE Transactions on PAMI, 4(11), Feb. 1996, pp. 372-389.

[33] X. MERLO, A. LANUSSE, B. ZAVIDOVIQUE, Optimal control of a robot perception system, IASTED Int’l Symp. GENEVE, 1987.  

[34] B. ZAVIDOVIQUE, Perception for decision or Decision for perception? Human and Machine Perception, V. Cantoni Ed. Plenum Press, 1997, pp. 155-178. 

[35] R. BAJCSY, Active perception, Proc. of the IEEE, 76(8), Aug. 1988, pp. 996-1005.

[36] J. MAVER and R. BAJCSY, Occlusion as a guide for planning the next view, IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(5), May 1993, pp. 417-433.

[37] R. PITO, A sensor based solution to the next best view problem, IAPR Int. Conf. on Pattern Recognition, Aug. 1996, pp. 941-945.

[38] J. KOSECKA, H. CHRISTENSEN, and R. BAJCSY, Discrete event modeling of visually guided behaviors, International Journal of Computer Vision, 14(2), May 1995, pp. 179-191.

[39] P. WHAITE and F. FERRIE, Autonomous exploration: Driven by uncertainty, IEEE International conference on Computer Vision and Pattern Recognition, CVPR ‘94, Seattle, pp. 339-346, June 1994.  

[40] D.H. BALLARD, Animate vision, Artificial Intelligence, 48(1), Aug. 1991, pp. 57-86.

[41] D. NOTON, L. STARK, Eye movement and visual perception, Scientific American, 224(6), June 1971, pp. 34-43.

[42] A.L. YARBUS, Eye movements and vision, Plenum Press, 1967.

[43] E. MILIOS, M. JENKIN, and J. TSOTSOS, Design and performance of TRISH, a binocular robot head with torsional eye movements, International Journal of Pattern Recognition and Artificial Intelligence, 7(1), Feb. 1993, pp. 51-68.

[44] D. MURRAY, K. BRADSHAW, P. MCLAUCHLAN, P. SHARKEY, Driving saccade to pursuit using image motion, International Journal of Computer Vision, 16(3), Mar. 1995, pp. 205-228.

[45] J. TSOTSOS, A complexity level analysis of vision, IEEE Int. Conf. on Computer Vision, ICCV’87, London, June 1987.

[46] K. BRUNNSTROM, J.O. EKLUNDH, and T. UHLIN, Active fixation for scene exploration, International Journal of Computer Vision, 17(2), Feb. 1996, pp. 137-162.

[47] Y. ALOIMONOS, Purposive and qualitative active vision, IAPR Conf. on Pattern Recognition ICPR’90, Atlantic City, pp. 346-360.

[48] M.I. TARR, M.J. BLACK, A computational and evolutionary perspective on the role of representation in vision, Computer Vision, Graphics, and Image Processing: Image Understanding, Jul. 1994, pp. 65-73.

[49] C. BROWN,Towards general vision, Computer Vision, Graphics, and Image Processing: Image Understanding, 60(1), 1994, pp.89-91. 

[50] G. SANDINI, E. GROSSO, Why purposive vision?, Computer Vision, Graphics, and Image Processing: Image Understanding, 60(1), 1994, pp. 109-112.

[51] A.L. ABBOTT and N. AHUJA, Active surface reconstruction by integrating focus, vergence, stereo and camera calibration, Third International Conference on Computer Vision, 1990, pp. 489-492.

[52] B.B. BEDERSON, R.S. WALLACE, E.L. SCHWARTZ, Two miniature pan-tilt devices, International Conference on Robotics and Automation, 1992, pp. 658-663.

[53] C. BROWN, Gaze controls cooperating through prediction, Image and Vision Computing, 8(1), 1990, pp. 10-17.

[54] C.M. BROWN, Kinematic and 3D motion prediction for gaze control, Workshop on Interpretation of 3D Scenes, 1989, pp. 145-151.

[55] W.S. CHING, P.S. TOH, K.L. CHAN, M.H. ER, Robust vergence with concurrent detection of occlusion and specular highlights, Fourth Int. Conference on Computer Vision, 1993, pp. 384-394. 

[56] J. J. CLARK, N. J. FERRIER, Modal control of an attentive vision system, 2nd Int. Conf. on Computer Vision, 1988, pp. 514-523.  

[57] D. COOMBS, C. BROWN, Real-time smooth pursuit tracking for a moving binocular robot, Computer Vision and Pattern Recognition, 1992, pp. 23-28. 

[58] J. L. CROWLEY, P. BOBET, M. MESRABI, Gaze control for a binocular camera head, 2nd European Conf. on Computer Vision, 1992, pp. 588-596.  

[59] J. C. FIALA, R. LUMIA, K. J. ROBERTS, A. J. WAVERING, Triclops: a tool for studying active vision, International Journal of Computer Vision, 12(2, 3), 1994, pp. 231-250. 

[60] E. GROSSO, D. H. BALLARD, Head-centred orientation strategies in animate vision, 4th Int. Conf. on Computer Vision, pp. 395-402. 

[61] I. HORSWILL and M. YAMAMOTO, A $1000 active stereo vision system, CVPR ’94.

[62] E. KROTKOV, F. FUMA, J. SUMMERS, An agile stereo camera system for flexible image acquisition, IEEE Journal on Robotics and Automation, 4(1), 1988, pp. 108-113.

[63] B. MARSH, C. BROWN, T. LEBLANC, M. SCOTT, T. BECKER, P. DAS, J. KARLSSON, C. QUIROZ, Operating system support for animate vision, Jour. of Parallel and Distributed Computing, 15, 1992, pp. 103-117.

[64] D.W. MURRAY, P.F. MCLAUCHLAN, I.D. REID, and P.M. SHARKEY, Reactions to peripheral image motion using a head/eye platform, 4th International Conference on Computer Vision, 1993, pp. 403-411.

[65] K. PAHLAVAN, J.O. EKLUNDH, A head-eye system-analysis and design, Computer Vision, Graphics, and Image Processing: Image Understanding, 56(1), Jul. 1992, pp. 41-56.

[66] K. PAHLAVAN, T. UHLIN, and J.O. EKLUNDH, Integrating primary ocular processes, 2nd European Conference on Computer Vision, May 1992, pp. 526-541.

[67] K. PAHLAVAN, T. UHLIN, and J.O. EKLUNDH, Dynamic fixation, 4th Int. Conf. on Computer Vision, May 1993, pp. 412-419.

[68] K.N. KUTULAKOS, C.R. DYER, Recovering shape by purposive view-point adjustment, Computer Vision and Pattern Recognition, June 1992, pp. 16-22.

[69] K.N. KUTULAKOS, C.R. DYER, Recovering shape by purposive viewpoint adjustment, International Journal of Computer Vision, 12(2/3), 1992, pp. 113-136.

[70] R. PISSARD-GIBOLLET, P. RIVES, Asservissement visuel appliqué à un robot mobile: état de l’art et modélisation cinématique, Research Report RR1577, INRIA-Sophia Antipolis, France, Dec. 19

[71] L. E. WIXSON, Exploiting world structure to efficiently search for objects, Technical Report 434, University of Rochester, C. S. Department, Rochester, New York , Jul. 1992.  

[72] L. E. WIXSON, Gaze selection for visual search, PhD thesis, University of Rochester, Rochester, New York, 1994. 

[73] L. BIRNBAUM, M. BRAND, and P. COOPER, Looking for trouble: using causal semantics to direct focus of attention, 4th International Conference on Computer Vision, May 1993, pp. 49-56. 

[74] R.D. RIMEY, P.A. VON KAENEL, C.M. BROWN, Goal-oriented dynamic vision, Technical report, University of Rochester, New York, Aug. 1993.

[75] R.D. RIMEY, Control of selective perception using Bayes nets and decision theory, PhD thesis, University of Rochester, New York, Dec. 1993.

[76] M. ISARD, A. BLAKE, Contour tracking by stochastic propagation of conditional density, European Conf. on Computer Vision, Cambridge 1996, pp. 343-356. 

[77] M. ISARD, A. BLAKE, Condensation: unifying low-level and high-level tracking in a stochastic framework, 5th European Conference on Computer Vision, 1998.

[78] O. DESSOUDE, Contrôle Perceptif en milieu hostile: allocation de ressources automatique pour un système multicapteur, PhD Thesis, University Paris-Sud, 1993. 

[79] C. OLIVIER, Stratégies d’acquisition, de traitement et de prise en compte d’informations pour contrôle de robot en environnement non structuré, PhD. thesis, University Paris-Sud, 1993. 

[80] A. STEINBERG, C. BOWMAN, F. WHITE, Revision to the JDL Data Fusion Model, Proc. of AeroSense Conference, SPIE vol. 3719, pp. 430-441, 1999.

[81] B. DASARATHY, «Optimal Features-In Feature-Out (FEIFEO) Fusion for Decisions in Multisensor Environments», Proc. SPIE 3376, Sensor Fusion: Architectures, Algorithms and Applications II, 1998. 

[82] M. BEDWORTH, J. O’BRIEN, The Omnibus Model: A New Model for Data Fusion, Proc. of FUSION 99, USA, 1999. 

[83] M. ENDSLEY, «Towards a Theory of Situation Awareness in Dynamic Systems», Human Factors Journal,vol. 37,pp. 32-64,1995. 

[84] J. SALERNO, «Information Fusion: a High-Level Architecture Overview», Proc. of FUSION’00, Paris, 2000. 

[85] C.B. FRANKEL, M. BEDWORTH, «Control, Estimation and Abstraction in Fusion Architectures: Lessons from Human Information Processing», Proc. of FUSION’00, Paris, 2000. 

[86] J. GAINEY, E. BLASCH, Development of Emergent Processing Loops as a System of Systems Concept, Proc. of AeroSense Conference, SPIE vol 3179, pp. 186-195, 1999. 

[87] J. LLINAS, C. BOWMAN, G. ROGOVA, A. STEINBERG, E. WALTZ, F. WHITE, «Revisions and Extensions to the JDL Data Fusion Model II», Proc. of FUSION’04, Stockholm, 2004.

[88] P.K. VARSHNEY, Distributed detection and data fusion, Springer, New York, 1997.

[89] R. REYNAUD, S. BOUAZIZ, «Architecture de systèmes multicapteurs», Revue Traitement du Signal, Méthodologie de la gestion intelligente des senseurs, vol. 22, no. 4, pp. 393-405, 2005.