Design and Implementation of an Entertainment and Competition Humanoid Robot

Qiubo Zhong, Jie Zhao, Chunya Tong

State Key laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001, China

School of Electronic and Information Engineering, Ningbo University of Technology, 315016, China

Corresponding Author Email: chunya.tong01@gmail.com

Pages: 17-24 | DOI: http://dx.doi.org/10.18280/rces.020103

Abstract: 

According to the features of the competitions for humanoid robots held by FIRA (Federation of International Robot-soccer Association), control methods for humanoid robots can be divided into telecontrol, semi-autonomous and autonomous. The structure and features of these three control methods are introduced. A fast color recognition algorithm and a motion planning method based on a real-time expert system are proposed. Simulations and experiments on real humanoid robots verify the proposed structures and algorithms.

Keywords: 

humanoid robot, image recognition, motion planning, control structure.

1. Introduction

A humanoid robot is a robot that has the features of human beings and can work in real environments using the tools used by humans [1]. Karl J. Muecke developed a sophisticated hardware platform called DARwIn for studying bipedal gaits that has evolved over time [2]. Sven Behnke described how he augmented multiple RoboSapien robots to obtain a team of soccer-playing humanoid robots [3]; he added a Pocket PC and a color camera to the robot base to make it autonomous. Martin Buss discussed the development process for achieving walking motion with a recently constructed humanoid robot and developed on-line control strategies to track precalculated trajectories [4]; footstep planning and balancing compensation are used for adaptive walking. The humanoid robot Johnnie (1800 mm, 40 kg, 17 DOF) can walk with a maximum speed of 2.0 km/h [5]; control and computation run onboard, whereas the power supply is outside the robot. In the Humanoid Robot Project, the robot HRP-3 (1600 mm, 65 kg, 36 DOF), with special water and dust resistance, is the successor of the HRP-2 model; this robot can walk at 2.5 km/h [6]. H. Dong presented the hardware design and gait generation of the humanoid soccer robot Stepper-3D; virtual slope walking, inspired by passive dynamic walking, is introduced for gait generation [7].

While research on large humanoid robots attracts attention all over the world, research on small humanoid robots has also started. Small humanoid robots aim at competitive entertainment: through this platform they demonstrate the application value of the technology and raise new technical requirements at the same time. In this field, the undisputed masterpiece is the SDR-3X (Sony Dream Robot-3X) developed by Sony in 2000 [42]. This robot is 50 cm tall and weighs only 5 kg; it can walk forward at 15 m per minute and can also perform complex movements such as standing up from the ground, standing on one leg, and dancing to the tempo of music. In September 2003, Sony further introduced the humanoid robot QRIO [43], which can interact with people through action and language and was the world's first robot that can run. Its hang time while running is 6 ms, and its hang time when jumping with both feet is 10 ms.

The French robotics company Aldebaran Robotics developed an entertainment humanoid robot named Nao in 2005. It is 58 cm tall and weighs 4.3 kg. It has 24 degrees of freedom, of which each arm and each leg has 5. The robot is also equipped with 2 speakers, 4 microphones, 2 CMOS digital cameras that can form stereo vision, and many kinds of sensors, including sonar, acceleration, tilt and pressure sensors. It can connect to a network wirelessly via Wi-Fi or by cable. It can be programmed with Choregraphe, which supports C++; this software is also compatible with Robotics Studio and Cyberbotics Webots, and supports many platforms such as Linux and Windows.

In 2007, the Nao robot replaced Sony's AIBO as the standard platform of RoboCup robot soccer [44-46].

In recent years, the Korean company Minirobot has been studying small competitive humanoid robots [47], and has successfully developed the ROBONOVA, MF-1 and MF-2 humanoid robots. Among them, the ROBONOVA humanoid robot has obtained good results in both teaching and competition. Besides these, there are the robot J4 developed by JVC in 2005, NUVO developed by ZMP, the DIY humanoid robot KHR-1 developed by the Japanese company Kondo, and so on [48]. These robots share common characteristics: they are below 40 cm in height and under 1.5 kg in weight, the whole body is driven by removable DC motors that are very convenient to assemble and replace, and they have user-friendly debugging interfaces that make it easy for beginners to get started.

Driven by the two major robot soccer competitions, RoboCup and FIRA, many universities have also begun to develop small competitive humanoid robots in recent years. Building on its previous research, the precision instruments and machinery robotics laboratory of Tsinghua University developed the autonomous humanoid soccer robot MOS2007 in 2007, using a PDA as the visual processing and decision-making system. In 2009, the Robot and Control Laboratory of the Institute of Automation developed the Stepper-3D humanoid robot based on passive dynamic walking, whose speed can reach 0.5 m per second. The SR-H100 humanoid robot developed by the IPC company of Zhejiang has 20 degrees of freedom; it uses a PC104+mega128 architecture and a fast image recognition and localization algorithm, so it can recognize images and compute positions on the field quickly and accurately [50]. The SJTU humanoid robot of Shanghai Jiao Tong University is 57 cm tall, weighs 3.2 kg, and controls its movement with PC104+Atmel hardware. The small humanoid robot developed by the National University of Defense Technology uses an embedded vision system based on CMUCam and can perform many complex movements such as turning, standing up from the ground and playing football. Mini-HIT [51], developed by the Multi-Agent Robotics Research Center of Harbin Institute of Technology, has 24 degrees of freedom, is 45 cm tall, weighs 3.13 kg, and can perform many complex movements such as sprinting, long-distance running, shooting and boxing. All of the competitive entertainment humanoid robots above have participated in various robot competitions held at home and abroad and have achieved very good records.

With the progress of technology and the improvement of people's quality of life, we believe that in the near future various types of humanoid robots will appear in every corner of human society and in the life around us, getting along with people harmoniously and providing a variety of services.

Robot soccer competition has recently been a focus of robot research. In 2002 and 2008, humanoid robot competition events were established by RoboCup and FIRA respectively. These events provide a good research platform for humanoid robot technology and multi-robot collaboration technology. RoboCup is a long-term research initiative that aims at developing a team of fully autonomous humanoids that can win against the human soccer World Cup champion by 2050, and at applying the innovations that emerge along the way to social and industrial issues [8]. M. Friedmann described the development of the 55 cm tall autonomous humanoid robot Bruno; during RoboCup 2006, Bruno demonstrated a variety of high-quality motions, including the fastest forward walking and the first ever back-heel kick by a humanoid robot [9].

2. Telecontrol Method

In competitive games, especially confrontational events such as boxing and soccer for humanoid robots, fast processing by the robot is a key factor in making the competition entertaining and worth watching. Because existing hardware cannot yet meet the demands of intense real-time competition, telecontrol is chosen as a transitional method. The game setup for the telecontrol method is sketched in Fig. 1.

Figure 1. Sketch of humanoid robot soccer for telecontrol method

In Fig. 1, there are two teams (A and B) in the competition, and each team consists of three humanoid robots controlled by one operator (H in Fig. 1). Commands are sent to the different robots via radio or Bluetooth. When the robots receive the commands, they call the corresponding motions from the motion library and execute them.

Using RoboBasic [10], developing motions for the robot is very easy. The basic motions, comprising forward walking, backward walking, left shifting, right shifting and kicking, are developed off-line.

The decisions in this control method depend entirely on the operator himself. For the robot, good performance of the basic motions is the key to winning the game.
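As an illustration of this scheme, a minimal robot-side sketch in Python is given below. The command codes, motion names and dispatch function are assumptions made here for illustration only; the real motions are developed in RoboBasic, not Python.

    # Hypothetical robot-side command dispatch for the telecontrol method.
    # Command codes and motion names are illustrative, not the real protocol.
    MOTION_LIBRARY = {
        "FWD": "forward_walking",
        "BWD": "backward_walking",
        "LSH": "left_shifting",
        "RSH": "right_shifting",
        "KCK": "kicking",
    }

    def on_command(cmd, execute):
        """Dispatch one received wireless/Bluetooth command to a stored motion."""
        motion = MOTION_LIBRARY.get(cmd)
        if motion is not None:
            execute(motion)  # play back the motion developed off-line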

3. Semi-Autonomous Control Method

Humanoid robot soccer by telecontrol is not an intelligent approach. To let the robot make decisions itself, semi-autonomous and autonomous control methods for humanoid robots are developed, distinguished by whether vision processing is performed on the robot itself. The semi-autonomous humanoid robot soccer hardware system consists of robots, an image acquisition device (camera and video capture board), a computer and a wireless communication system. Two different semi-autonomous control methods (shown in Fig. 2 and Fig. 3) are developed, depending on whether the camera is mounted on the robot itself.

Figure 2. The structure sketch of Semi-HuroSot System

Figure 3. MF-2 Architecture Diagram [10]

In Fig. 2, the Semi-HuroSot system consists of vision, decision and communication subsystems. From the perspective of automatic control theory, these three subsystems compose a closed-loop control system. Before the game starts, the system captures environmental data from the game field through the vision subsystem and analyzes these data off-line; once the game starts, the system monitors the whole game field through the vision subsystem. The real-time competition situation is acquired by a CCD camera mounted above the game field, and recognition and processing of the vision data are performed on the computer. According to the coordinate information of the ball and robots, the decision subsystem calls the corresponding competition decision from the decision library, allocates a task to each robot, calls the corresponding motion from the motion library, and produces the control commands. These commands are sent to the humanoid robots by the wireless subsystem, and the robots complete their tasks when they receive them. The overall block diagram of the Semi-HuroSot system is shown in Fig. 4.

Figure 4. The block diagram of Semi-HuroSot system
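To make the closed loop concrete, the following Python sketch outlines one control cycle. All class and method names (capture, choose, lookup, send) are illustrative placeholders assumed for this sketch, not identifiers from the actual system.

    import time

    def recognize(frame):
        # Placeholder: color segmentation and localization of the ball
        # and robots would run here (see Section 4.3.1).
        raise NotImplementedError

    def control_loop(camera, decision_library, motion_library, radio, period=0.1):
        """One vision -> decision -> wireless cycle per control period."""
        while True:
            frame = camera.capture()                  # global CCD view of the field
            world = recognize(frame)                  # coordinates of ball and robots
            tasks = decision_library.choose(world)    # one task per robot
            for robot_id, task in tasks.items():
                motion = motion_library.lookup(task)  # map task to a stored motion
                radio.send(robot_id, motion)          # wireless subsystem
            time.sleep(period)                        # fixed control period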

Fig. 3 shows another semi-autonomous configuration. The image information is acquired by a camera mounted on the head of the robot. The wireless system sends the image information to the computer, where image processing is performed. The result of processing is provided to the decision system, which also runs on the computer. Control commands are sent to the humanoid robots wirelessly, and when the robots receive these commands, they call the motions from the motion library and execute the movements. The interface of the vision processing, implemented with OpenCV, is shown in Fig. 5. Image transmission also uses wireless devices, which are shown in Fig. 6 and Fig. 7.

Figure 5. Interface of vision processing based on OpenCV

Figure 6. Wireless transmitter on robot

Figure 7. Wireless receiver on computer

4. Autonomous Control Method

The autonomous humanoid robot is the latest hotspot in robot research, because it conforms to the features of human beings. An autonomous humanoid can make decisions, plan motions and achieve tasks independently according to environmental information acquired by a video sensor. The many efforts to achieve autonomous biped locomotion can be divided into two main approaches: gait trajectories are computed online according to the actual intention and perception data of the robot [11], or a large set of trajectories is computed offline [12] and the robot selects one of these precalculated trajectories depending on its situation [13].

An autonomous humanoid named Mini-HIT is shown in Fig. 8. Parameters of this robot are described as below:

Degrees of freedom: 24; Height: 45 cm; Weight: 3.13 kg; Maximum torque: 30 kg·cm; Camera angle: 50°.

Figure 8. Picture of autonomous humanoid robot Mini-HIT

4.1 Control structure

The structure of the autonomous humanoid robot comprises five parts, as described in Fig. 9: the video sensor; the decision system, which runs on a PDA (personal digital assistant) or an embedded control board; the motion control system; the sensor system, including tilt and distance sensors; and the movement system, consisting of the motors.

Figure 9. Control structure of autonomous humanoid robot

The control structure can be divided into upper and lower layers. The vision sensor and decision system compose the upper layer, while the motion control system, the other sensors and the movement system form the lower layer.

4.2 Motion control system in lower layer

The motion control system is controlled by a C3024 board, whose structure is shown in Fig. 10. Tilt and distance sensors (or other sensors) can easily be added to this board to form a feedback system. Pictures of the tilt sensor and the ultrasonic sensor are shown in Fig. 11 and Fig. 12.

Figure 10. Structure of C3024 board

Figure 11. Picture of the gyro sensor

Figure 12. Picture of the ultrasonic sensor

4.3 Image processing and decision control in upper layer

4.3.1 Image processing

Color recognition

A real-time multi-object color recognition algorithm is presented based on [14]. The input of this algorithm is the RGB image gathered by the camera. The outputs are the recognition result for the object and its coordinates in the image. The algorithm can be divided into three steps:

 (a) Pixel clustering based on color [15].

 (b) Extraction of the target area based on connectivity analysis.

 (c) Target recognition based on prior knowledge.

Pixel clustering based on color

We use the HSI color space, whose components represent hue, saturation and intensity respectively [16]. Conversion to HSI reflects the way the human eye distinguishes colors; for example, a human may say some color is brighter, redder, bluer, or more colorful. In practice, however, the computation behind these definitions is expensive, and converting every pixel of an image in this way would be very slow. There can be $16 \times 10^{6}$ (24-bit) colors, so it is impossible to build a complete color mapping table. For these reasons, a new fast computation method is presented: for an 8-bit-per-channel RGB image, the corresponding 8-bit HSI values are calculated directly. The details of the algorithm are described below:

 (a) Hue

Step 1: Sort the RGB components in descending order, and suppose the result is

$C_{1} \geq C_{2} \geq C_{3} \quad\left(C_{i} \in\{R, G, B\}\right)$.

Step 2: Let D1, D2 and D3 denote the values of the three demarcation points. If the maximum hue value is defined as M, then $D_{1}=\frac{1}{6} M, D_{2}=\frac{1}{2} M, D_{3}=\frac{5}{6} M$, and let $l=\frac{D_{2}-D_{1}}{2}$.

Step 3: Compute offset by Eq.  (1):

offset $=l \cdot \frac{C_{1}-C_{2}}{C_{1}-C_{3}}$     (1)

Step 4: According to the sorted order of the components, calculate the hue value from Table 1.

Table 1. Calculation of hue

Color sorting    Hue value

R≥G≥B            D1 - offset
G≥R≥B            D1 + offset
G≥B≥R            D2 - offset
B≥G≥R            D2 + offset
B≥R≥G            D3 - offset
R≥B≥G            D3 + offset

 (b) Saturation

The saturation value is calculated by Eq. (2):

$S=\left(1-\frac{C_{3}}{C_{1}}\right) \cdot M$           (2)

 (c) Intensity

Since human eyes have different sensitivities to the R, G and B colors, intensity is defined as a linear weighting of RGB:

$I=r_{1} R+r_{2} G+r_{3} B$      (3)

where $I$ denotes intensity and $r_{i}\ (i=1,2,3)$ are the weights for the brightness sensitivity of the human eye.
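Putting the three components together, a minimal Python sketch of this fast conversion for 8-bit channels (M = 255) is given below. The intensity weights use the common Rec. 601 luma values, an assumption made here since the paper leaves $r_{1}$–$r_{3}$ unspecified.

    M = 255  # maximum hue/saturation value for 8-bit output

    def rgb_to_hsi(r, g, b):
        # Step 1: sort the components in descending order.
        c1, c2, c3 = sorted((r, g, b), reverse=True)

        # Step 2: integer demarcation points D1..D3 and half-interval l.
        d1, d2, d3 = M // 6, M // 2, 5 * M // 6
        l = (d2 - d1) / 2

        # Step 3: offset within the sector, Eq. (1); gray pixels get offset 0.
        offset = l * (c1 - c2) / (c1 - c3) if c1 != c3 else 0.0

        # Step 4: pick the sector from the component ordering (Table 1).
        if   r >= g >= b: h = d1 - offset
        elif g >= r >= b: h = d1 + offset
        elif g >= b >= r: h = d2 - offset
        elif b >= g >= r: h = d2 + offset
        elif b >= r >= g: h = d3 - offset
        else:             h = d3 + offset   # r >= b >= g

        # Saturation, Eq. (2), and intensity, Eq. (3), with assumed weights.
        s = (1 - c3 / c1) * M if c1 else 0.0
        i = 0.299 * r + 0.587 * g + 0.114 * b
        return h, s, i

As a quick check, pure green (0, 255, 0) gives offset = l and hue D1 + l = M/3, i.e. the familiar 120° of the hue circle scaled to 8 bits.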

Extraction of the target area based on connectivity analysis

A straightforward way of labeling connected regions involves two steps: search and label propagation [17]. Although this method is simple, it takes a long time, and without a proper design it seriously limits the real-time performance of a target recognition system [18]. The algorithm proposed in this paper not only obtains the connected-region labels quickly, but also extracts the shape features of each connected region during labeling. The details of the algorithm are as follows:

Step 1: Encode the image with run-length encoding (RLE).

Step 2: Label the connected regions on the RLE (run-length encoded) image.

Step 3: Extract shape features from each connected region.

An image expressed by run-lengths is called an RLE (run-length encoded) image. RLE describes the image by the number (length) of adjacent pixels whose gray values are equal along the horizontal direction; each such group of pixels corresponds to one run-length, and the value of a run-length equals the number of its pixels.

A run-length in this paper consists of three components: run = {color, length, parent}, where "color" is the color of the run-length, "length" is its length, and "parent" is a pointer to the connected run-length with the minimum label. The label orders run-lengths from left to right and from top to bottom in the RLE image. After the connectivity analysis of the RLE image is finished, every run-length's "parent" points to the minimum-label run-length in its connected region, so every run-length belonging to the same connected region has the same "parent" label. Shape feature extraction is then simple: scan the RLE image once using the labeling result, and accumulate the shape features of each region from the run-lengths that share the same "parent".

The algorithm scans the RLE image twice, from left to right and from top to bottom. Initially, every run-length's "parent" points to itself, so the whole RLE image is equivalent to a "forest" [19] in which each tree has only one node. In the first scan, the algorithm checks whether run-lengths of the same color in two adjacent lines are 4-adjacent, and merges their tree structures according to these adjacency relations. In the combined forest, each tree denotes one connected region, and each connected region corresponds to exactly one tree. In the second scan, each run-length is pointed at the root of its tree. After all pointing operations are completed, connected-region extraction is finished, and each connected region takes the address of its first run-length (the root of the tree) as its region label. The merging process is shown in Fig. 13.

Figure 13. Runs combination
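The following Python sketch illustrates this run-length labeling under the description above. It is a sketch, not the paper's implementation: for simplicity the union-find root is an arbitrary run of the region rather than the strictly minimum-label one, which does not change which runs end up grouped together.

    from dataclasses import dataclass

    @dataclass
    class Run:
        color: int
        row: int
        start: int            # origin column coordinate of the run
        length: int
        parent: "Run" = None  # pointer toward the region root

    def encode_rle(image):
        """Encode a 2-D array of color labels into rows of runs (Step 1)."""
        rows = []
        for y, line in enumerate(image):
            runs, x = [], 0
            while x < len(line):
                x0 = x
                while x < len(line) and line[x] == line[x0]:
                    x += 1
                r = Run(line[x0], y, x0, x - x0)
                r.parent = r          # initially each run is its own tree
                runs.append(r)
            rows.append(runs)
        return rows

    def find(run):
        """Follow parent pointers to the region root, with path halving."""
        while run.parent is not run:
            run.parent = run.parent.parent
            run = run.parent
        return run

    def label_regions(rows):
        """Step 2: union 4-adjacent, equal-color runs in adjacent rows."""
        for upper, lower in zip(rows, rows[1:]):
            for a in upper:
                for b in lower:
                    overlap = (a.start < b.start + b.length and
                               b.start < a.start + a.length)
                    if overlap and a.color == b.color:
                        find(b).parent = find(a)   # merge the two trees
        for row in rows:                            # second scan: point at roots
            for r in row:
                r.parent = find(r)
        return rows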

The final step of analyzing the image's connected regions is extracting basic shape features from each region. The shape features include the area of the connected region (unit: pixels), the center of mass and the enclosing rectangle. Supposing a region consists of the run-lengths r1, r2, ..., rn, the shape features are computed as follows:

area$=\sum_{i=1}^{n} \operatorname{length}\left(r_{i}\right)$        (4)

Enclosing rectangle:

$\left\{\begin{array}{l}\text{left}=\min\left\{\text{origin column coordinate of } r_{i} \mid i=1,2,\ldots,n\right\} \\ \text{top}=\max\left\{\text{row coordinate of } r_{i} \mid i=1,2,\ldots,n\right\} \\ \text{right}=\max\left\{\text{terminal column coordinate of } r_{i} \mid i=1,2,\ldots,n\right\} \\ \text{bottom}=\min\left\{\text{row coordinate of } r_{i} \mid i=1,2,\ldots,n\right\}\end{array}\right.$  (5)

$\left\{\begin{array}{l}\operatorname{cen}_{-} x=\frac{\sum_{i=1}^{n} \frac{\operatorname{length}_{i} \cdot\left(2 \cdot x_{i}+\operatorname{length}_{i}-1\right)}{2}}{\operatorname{area}} \\ \operatorname{cen}_{-} y=\frac{\sum_{i=1}^{n} \operatorname{length}_{i} \cdot y_{i}}{\operatorname{area}}\end{array}\right.$    (6)

where $\text{length}_{i}$, $x_{i}$ and $y_{i}$ are the length, origin column coordinate and row coordinate of run-length $r_{i}$, respectively.

As shown in Fig. 13, the coordinate system of the RLE image takes the bottom-left corner as the origin, with the positive direction of the y axis pointing upward; x and y are measured in pixels.
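Continuing the sketch above, the shape features of Eqs. (4)-(6) can be accumulated per region by grouping runs on their root pointer; here `r.start` plays the role of the origin column coordinate $x_{i}$ and `r.row` of $y_{i}$, and "top" is the maximum row because the y axis points upward.

    from collections import defaultdict

    def extract_features(rows):
        """Group runs by region root and accumulate Eqs. (4)-(6)."""
        regions = defaultdict(list)
        for row in rows:
            for r in row:
                regions[id(r.parent)].append(r)

        features = {}
        for key, runs in regions.items():
            area = sum(r.length for r in runs)                       # Eq. (4)
            features[key] = {
                "area": area,
                "left": min(r.start for r in runs),                  # Eq. (5)
                "right": max(r.start + r.length - 1 for r in runs),
                "top": max(r.row for r in runs),    # y axis points upward
                "bottom": min(r.row for r in runs),
                "cen_x": sum(r.length * (2 * r.start + r.length - 1) / 2
                             for r in runs) / area,                  # Eq. (6)
                "cen_y": sum(r.length * r.row for r in runs) / area,
            }
        return features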

The purpose of target recognition is localization: determining the relative position between the target and the robot. Because the target ball lies on the two-dimensional ground plane, the vision system uses this ground-plane constraint to localize the target with a single camera.

The algorithm is described as below:

Input:

1. IMG (HSI color space) captured by the camera.

2. The biggest connected region RGN.

Output:

Vertex coordinates of the ball (x, y).

Begin

     1. x = cen_x, y = cen_y.

     2. Scan column x of IMG upwards from the center of mass (COM) of RGN:

              If the hue difference between two adjacent pixels is bigger than 40, stop scanning.

              Else y = y + 1.

     3. Return (x, y).

End

where IMG is the original image, RGN is the connected region of the target, area is the area of the connected region, and cen_x and cen_y are the horizontal and vertical coordinates of the COM of region RGN, respectively.
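A Python rendering of this scan might look as follows; `hue` is an assumed accessor returning the H channel of a pixel, and the threshold of 40 comes from the pseudocode above.

    HUE_JUMP = 40  # threshold on the hue difference between adjacent pixels

    def locate_ball_vertex(img, rgn, hue, height):
        """Walk up column x from the region's COM until the hue changes sharply."""
        x, y = int(rgn["cen_x"]), int(rgn["cen_y"])
        while y + 1 < height:
            if abs(hue(img, x, y + 1) - hue(img, x, y)) > HUE_JUMP:
                break        # left the ball region: vertex found
            y += 1           # still inside the ball, keep climbing
        return x, y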

4.3.2 Decision control

Motion planning and obstacle avoidance for the autonomous humanoid robot are based on a real-time expert system and reaction to the environment. The basic idea of this method is to integrate knowledge about the targets and obstacles in the field (such as the color, volume and shape of obstacles) into the reasoning rule library, to improve the effectiveness and robustness of obstacle recognition. The method has three steps:

Step 1: Capture the image and extract the feature points of obstacles with the vision system.

Step 2: Perceive the target and obstacles in the environment with a single-camera vision algorithm.

Step 3: Plan the robot's orientation and movement speed on line, according to the dynamic environmental information and the reasoning rules of the motion planner.

The obstacle avoidance and motion planning presented in this paper form a behavior-based control system for the humanoid robot. A reactive, behavior-based architecture is adopted; a typical example is the inclusive architecture presented by Bulgarelli [20].

Supposing the enclosing rectangle of the obstacle region is {left, top, right, bottom}, its COM is (cen_x, cen_y), and the area of the connected region is area, the density ρa of the obstacle region is obtained by Eq. (7):

$\rho_{a}=\frac{\text {area}}{(\text {right}-\text {left}) \cdot(\text {top}-\text {bottom})}$     (7)

In the obstacle avoidance algorithm, the point of the obstacle nearest to the robot is defined as the obstacle avoidance reference point PO. As shown in Fig. 14, the obstacle avoidance point is extracted from the obstacle region in the image as the central point of the bottom edge of the enclosing rectangle of the connected region.

Figure 14. Method of detecting obstacle-point
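Both quantities are straightforward to compute from the region features extracted earlier; a small sketch reusing the feature dictionaries from the Section 4.3.1 sketch is given below. Note that with the y-up coordinate system, top > bottom, so the denominator of Eq. (7) stays positive.

    def region_density(f):
        """Density of the obstacle region, Eq. (7)."""
        return f["area"] / ((f["right"] - f["left"]) * (f["top"] - f["bottom"]))

    def obstacle_point(f):
        """Obstacle avoidance reference point P_O: midpoint of the bottom
        edge of the enclosing rectangle (the point nearest to the robot)."""
        return ((f["left"] + f["right"]) / 2.0, f["bottom"])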

The obstacle avoidance and motion planning process for the humanoid robot includes three kinds of behaviors: approaching the destination, finding the local shortest path, and moving to avoid obstacles. The structure of obstacle avoidance and motion planning for the humanoid robot is shown in Fig. 15.

Figure 15. Software architecture of path planning system

When the robot executes each sub-behavior, motion planning is achieved through a different reasoning rule library. The main rules in the motion planning rule libraries for the humanoid robot are listed below:

 (a) Behavior rule library of "approaching to destination"

Rule 1:  IF the camera finds the target

         THEN the location information is adopted

Rule 2:  IF the camera finds an obstacle in front of the robot

         THEN the path-finding behavior is executed

Rule 3:  IF the camera cannot find the target

         THEN rotate the robot's head to search for the target

 (b) Method of finding a local path

The obstacle avoidance task is dynamic and unknown in advance: the obstacles and the target move randomly. Therefore, when the behavior of finding a local path is executed, the search for the shortest obstacle-free path is executed repeatedly. First the robot is made to stand still, then the camera is controlled to scan the surroundings once, and the shortest obstacle-free path is searched on the grid map, using an environment map described by a grid representation, as sketched below.
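The paper does not fix a particular search algorithm for the grid map; as one possibility, a breadth-first search over an occupancy grid (0 = free, 1 = obstacle) returns a shortest obstacle-free path in 4-connected steps.

    from collections import deque

    def shortest_free_path(grid, start, goal):
        """BFS on an occupancy grid; returns a list of (row, col) cells or None."""
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}            # visited set doubling as predecessor map
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:  # walk predecessors back to the start
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in prev):
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None  # no obstacle-free path exists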

 (c) Behavior rule library of “moving of obstacle avoidance”

When the robot moves around the obstacle, the camera keeps detecting the target; meanwhile, the distance moved and the change of angle are recorded from the feedback of the sensors and motors. The rules are listed below, followed by a minimal sketch of their evaluation.

Rule 1:  IF the obstacle is in front of the robot

THEN rotate in place toward the direction of the shortest obstacle-free path

Rule 2:  IF the distance between the obstacle and the robot is safe for avoidance

THEN do a shifting movement

Rule 3:  IF the distance between the obstacle and the robot is dangerous for avoidance

THEN do a backwards movement

Rule 4:  IF the target is found in front of the robot and there are no obstacles

THEN do "approaching to destination"
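A minimal sketch of how such a rule library might be evaluated is given below. The state keys and action names are illustrative assumptions, and the conditions are checked in a fixed order rather than by a full inference engine.

    def plan_step(s):
        """Evaluate the avoidance rules on the perceived state s (a dict)."""
        if s["target_ahead"] and not s["obstacle_ahead"]:
            return "approach_destination"        # Rule 4: clear path to target
        if s["obstacle_ahead"]:
            if s["obstacle_distance"] < s["safe_distance"]:
                return "step_backwards"          # Rule 3: too close, back off
            if s["free_path_direction"] is not None:
                return "rotate_to_free_path"     # Rule 1: turn in place
            return "shift_sideways"              # Rule 2: lateral avoidance
        return "search_target"                   # fallback: look for the target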

5. Simulations and Experiments

Simulations and experiments are carried out on the real robots Robonova, MF-2 and Mini-HIT respectively.

5.1 Simulations and experiments with Robonova and MF-2

A simulation of the rolling motion is described in Fig. 16, while snapshots of rolling on the real Robonova are shown in Fig. 17.

Figure 16. Simulation of rolling for Robonova

Figure 17. Rolling experiment of Robonova

Respecting the dynamic and kinematic constraints, the rolling motion is designed in the RoboBasic language, which makes designing motions for the Robonova robot very easy. The motion is tested in simrobot, a simulation software for Robonova. Experiments on flat ground were then carried out, and the snapshots show that the robot can perform this motion successfully.

In Fig. 18, soccer for humanoid robots is played using the telecontrol method, and a dance by four robots is shown in Fig. 19. The robots are Robonova.

Figure 18. Soccer for Robonova

Figure 19. Dance for Robonova

In the semi-autonomous experiment, a color code is placed on the head of each robot, and a global camera mounted above the field recognizes the robots and the ball. Fig. 20 shows a scene from the semi-autonomous humanoid robot soccer game.

Figure 20. Semi-autonomous humanoid robot soccer

Fig. 21 shows snapshots of a penalty kick by the humanoid robot MF-2 under the semi-autonomous control method. The only difference between the control methods of Fig. 20 and Fig. 21 is the image capture device: in Fig. 20 it is the global camera above the robots, while in Fig. 21 it is the camera on the head of the robot.

Figure 21. Penalty kick of a semi-autonomous humanoid robot

6. Conclusion

Corresponding to the different control requirements for humanoid robots, three control methods are presented. A fast color recognition algorithm and a motion planning method based on a real-time expert system are proposed. For the competition events held by FIRA, experiments on several kinds of humanoids are carried out.

In the coming years, the walking speed must be increased significantly, and the visual perception of the soccer world must become more robust against changes in lighting and other interference. We are continuously improving our computer vision software to make it more reliable. The weight and power consumption of the components also play a role that should not be underestimated.

The goal for humanoid robots is to live in harmony with people and to work in coordination with them. With the development of stable walking technology, complex motion planning, environment recognition, and servo motion planning based on environment recognition will become new research focuses in the humanoid robot area.

Acknowledgment

This material is based upon work funded by the Natural Science Foundation of China under Grant No. 61203360; the Zhejiang Provincial Natural Science Foundation of China under Grants No. LQ12F03001, LQ12D01001 and LY12F01002; the Ningbo City Natural Science Foundation of China under Grants No. 2012A610009 and 2012A610043; the State Key Laboratory of Robotics and System (HIT) Foundation of China under Grant No. SKLRS-2012-MS-06; and the China Postdoctoral Science Foundation under Grant No. 2013M531022.

  References

1. S. Kajita, H. Hirukawa, K. Yokoi and K. Harada, Humanoid Robots, (Ohm-sha, Ltd., 2005), pp. 20-24.

2. K. J. Muecke and D. W. Hong, DARwIn's Evolution: Development of a Humanoid Robot, Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE Press, San Diego, USA, 2007), pp. 2574-2575.

3. S. Behnke, J. Muller and M. Schreiber, Playing Soccer with RoboSapien, RoboCup 2005, LNAI (Springer-Verlag, Berlin Heidelberg, 2006), pp. 36-48.

4. M. Buss, M. Hardt, J. Kiener, M. Sobotka, M. Stelzer, O. V. Stryk, and D. Wollherr, Towards an Autonomous, Humanoid, and Dynamically Walking Robot: Modeling, Optimal Trajectory Planning, Hardware Architecture, and Experiments, Proceedings of the 3rd International Conference on Humanoid Robots 2003, pp. 1-15.

5. K. Loeffler, M. Gieger and F. Pfeiffer, Sensor and Control Design of a Dynamically Stable Biped Robot, IEEE International Conference on Robotics and Automation (ICRA) (IEEE Press, Taipei, Taiwan, 2003), pp. 484-490.

6. K. Kaneko, F. Kanehiro, S. Kajita, H. Hirukawa, T. Kawasaki, M. Hirata, K. Akachi and T. Isozumi, Humanoid Robot HRP-2, IEEE International Conference on Robotics and Automation  (ICRA),  (IEEE Press, New Orleans, USA, 2004), pp. 1083-1090.

7. H. Dong, M. G. Zhao, J. Zhang and N. Y. Zhang, Hardware Design and Gait Generation of Humanoid Soccer Robot Stepper-3D, Robotics and Autonomous Systems, 57, pp. 828-838, 2009.

8. H. Kitano and M. Asada, RoboCup: A Challenge AI Problem, AI Magazine, spring, 1997. 

9. M. Friedmann, J. Kiener, S. Petters, D. Thomas, O. V. Stryk and H. Sakamoto, Versatile, High-Quality Motions and Behavior Control of Humanoid Soccer Robots, Proceedings of the Workshop on Humanoid Soccer Robots of the 2006 IEEE-RAS International Conference On Humanoid Robots  (IEEE Press, Genoa, Italy, 2007), pp. 9-16.

10. http://www.minirobot.co.kr.

11. M. Inaba, F. Kanehiro, S. Kagami and H. Inoue, Two-Armed Bipedal Robot that Can Walk, Roll Over and Stand Up, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems  (IROS)  (IEEE Press, 1995), pp. 297-302.

12. D. Wollherr, M. Hardt, M. Buss and O. V. Stryk, Actuator Selection and Hardware Realization of A Small and Fast-Moving, Autonomous Humanoid Robot, In Proc. Of The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Press, Lausanne, Switzerland, 2002), pp. 391-398.

13. O. Lorch, A. Albert, J. Denk, M. Gerecke, R. Cupec, J. F. Seara, W. Gerth and G. Schmidt, Experiments In Vision-Guided Biped Walking, Proceedings Of The IEEE/RSJ International Conference On Intelligent Robots And Systems  (IROS)  (IEEE Press, Lausanne, Switzerland, 2002), pp. 2484-2490.

14. J. Bruce, T. Balch and M. Veloso, Fast and Inexpensive Color Image Segmentation for Interactive Robots, Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS) (IEEE Press, 2000), pp. 2061-2066.

15. H. E. Ren, Q. B. Zhong and J. F. Kang, Object Recognition Algorithm Research Based on Variable Illumination, Proceedings of 2009 IEEE International Conference on Automation and Logistics  (IEEE Press, Shenyang, China, 2009), pp. 1609-1613.

16. G. Stockman and L. Shapiro, Computer Vision.  (Prentice Hall, 2001), pp. 256-280.

17. G. N. DeSouza and A. C. Kak, Vision for Mobile Robot Navigation: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (2), pp. 237-267, 2002.

18. M. J. Jung, Development of Vision Based Soccer Robot System for Multi-Agent Cooperative System, Proceedings of MIROSOT 97 (KAIST, Taejon, 1997), pp. 19-36.

19. R. L. Taylor, Importance of Fast Vision in Winning the First Micro-Robot World Cup Soccer Tournament, Robotics and Autonomous Systems, 21 (2), pp. 139-147, 1997.

20. L. Di Stefano and A. Bulgarelli, A Simple and Efficient Connected Component Labeling Algorithm, Proceedings of the 10th International Conference on Image Analysis and Processing (Venice, Italy, 1999), pp. 322-327.