Early Terrain Identification for Mobile Robots Using Inertial Measurement Sensors and Machine Learning Techniques

Nilesh Bhosle, Arnav Malik, D. Shivakrishna, Jayant Jagtap, Shrikrishna Kolhar*

Marik Institute of Computing, Artificial Intelligence, Robotics and Cybernetics, NIMS University, Jaipur 303121, India

Symbiosis Institute of Technology (SIT), Symbiosis International (Deemed University), Pune 412115, India

Corresponding Author Email: shrikrishna.kolhar@sitpune.edu.in

Pages: 1631-1638 | DOI: https://doi.org/10.18280/jesa.570610

Received: 10 October 2024 | Revised: 26 November 2024 | Accepted: 6 December 2024 | Available online: 31 December 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Due to rapid advancements in robotics technology, mobile robots are now utilized across various industries and applications. Understanding the terrain on which a robot operates can greatly aid its navigation and movement adjustments, ultimately minimizing potential hazards and ensuring seamless operation. This study aims to identify the specific terrain on which a mobile robot travels. Data was gathered using an inertial measurement unit (IMU) installed on the robot for experimental testing. The key contributions of this research are twofold: first, the implementation and evaluation of various machine learning techniques on the IMU sensor dataset, comparing their performance using metrics such as accuracy, precision, recall, and F1-score; second, the selection of the most effective technique for the final system implementation. The experimental evaluation determined that the light gradient boosting machine (LGBM) classifier outperformed the others. Consequently, LGBM was used to implement the proposed system, achieving 91% accuracy in surface classification. The experimental results highlight the efficiency and viability of the proposed system.

Keywords: 

feature extraction, inertial measurement unit, light gradient boosting classifier, machine learning, mobile robots, early terrain identification

1. Introduction

With the rapid development and advances in robotics technology, mobile robots are being used in a wide range of applications across industries and domains [1, 2]. These applications include manufacturing, logistics, healthcare, agriculture, security, and surveillance. Because of their high capabilities and ease of interaction with the surrounding environment, mobile robots are becoming prevalent in day-to-day life.

In recent years, surface classification for robots has garnered increased interest in civilian and military applications. Mobile robots rely on environmental data to adapt their movements intelligently and respond accordingly. Thus, the prompt identification of surface types is crucial for these robots to effectively accomplish their assigned tasks and goals [3]. In typical environments, surfaces vary extensively, with a wide range of characteristics. While most floors are generally flat and pose little challenge for traversal, some surfaces may be bumpy or slippery. Such surfaces may impede robots' mobility or create safety hazards. Consequently, awareness of surface types can assist mobile robots in navigating and adjusting their movements to mitigate potential risks and ensure smooth operation [4, 5]. For example, when a mobile robot detects a wooden floor, it can run at a higher speed, since wooden floors provide a relatively smooth and easy terrain for movement. However, surfaces like carpets are soft and uneven, impeding the robot's movement and necessitating a speed reduction. Thus, accurate methods for assessing the current or upcoming characteristics of floors help enhance the manoeuvrability of robots [6].

Surface classification is grouped into two main categories based on the sensing method: contact-based classification, involving techniques such as vibration and touch sensing, and contactless classification, primarily relying on vision and sound sensing [7]. A major obstacle in contact-based classification is that the sensory data must be screened for noise and fluctuations caused by varying contact conditions. External factors such as temperature, humidity, and surface variations can corrupt the sensor readings and lead to misclassifications. These hurdles can be overcome using higher-grade signal processing and machine learning to improve the reliability and accuracy of contact-based classification systems [8]. While contactless classification using vision and sound sensing holds clear promise, it still faces several issues, among them shifts in lighting conditions, occlusions, ambient noise, and the strong demand for large amounts of labelled training data for machine learning models. Addressing these hindrances requires the development of robust algorithms that can adapt to different environments and extract information efficiently from heterogeneous sensor data [9].

This paper compares machine learning algorithms trained on inertial measurement unit (IMU) sensor data [10]. The aim is to identify the terrain type out of nine different terrains using sensor data such as acceleration and velocity. This prediction will enhance the autonomous navigation capabilities of robots on diverse surfaces, ensuring they perform their tasks without the risk of falling.

2. Related Work

Numerous techniques have been suggested in scholarly works for distinguishing the surface over which a vehicle or robot moves. Various sensor principles are employed to aid in identifying the physical features [11], which provide valuable terrain data for the vehicle's operation. The methods outlined in the literature can be classified into IMU-based identification, tactile sensor-based identification, image-based identification, and acoustic-based identification [12].

2.1 IMU-based identification

Surface identification using IMUs is widely used. IMUs are strategically placed and equipped with multiple sensors to measure acceleration and angular velocity. Typically, the IMU is affixed to the vehicle, for example on the chassis. Algorithms are then utilized to analyze the sensor data, determine the differences in linear and angular acceleration, and assign a corresponding surface label to the feature set. Nampoothiri et al. [13] present a comprehensive survey of advanced algorithms used for terrain identification, focusing on deep learning and fuzzy logic methods. Tick et al. [6] introduced a novel approach that utilized a hierarchical classifier for surface identification, achieving an accuracy of approximately 90% over a 20-minute drive. The measurements were taken at various velocities, enhancing the algorithm's reliability. Before the experiment, the IMU data was filtered to address significant noise and gyro sensor drift, using a Kalman filter to refine the data.

In one such study, Csík et al. [14] proposed a new approach for terrain classification using a multi-layer perceptron (MLP) neural network as the classifier. A prototype measurement system gathered data from inertial sensors across diverse outdoor terrains. The accelerometer and gyroscope data were tested both independently and jointly with different processing window sizes. The results indicate that the proposed system can achieve a classification efficiency of over 99%. Shin et al. [15] used IMU sensor data and a Gaussian mixture model to develop a system for terrain identification. Knuth and Groves [16] used an IMU to record acceleration, angular rate, and magnetic field data. A total of 44 features were extracted from the magnitude of the data, including time-domain and frequency-domain features, and five classifiers were trained and tested on these features to classify terrain types. In another study, Thavitchasri et al. [17] used IMU sensor data to predict surface type to facilitate improved navigation for autonomous tractors. IMU sensor data was recorded for seven different surface types, and various ML algorithms, such as logistic regression, SVM, random forest, AdaBoost, and XGBoost, were used to classify surface types.

2.2 Tactile sensor-based identification

In this approach, an IMU is also employed to identify the surface, but it is positioned on a rod attached to the vehicle's rear end that makes direct contact with the surface. This direct contact facilitates capturing characteristic irregularities more effectively in the sensor data. Tactile sensing is essential especially when optical sensors cannot directly observe surface properties, for instance in fog and dust scenarios [18]. Mason et al. [19] presented a study on the use of acoustic tactile sensors to enhance the identification of terrain materials, types, and structures for mobile robots. In a study by Giguere and Dudek [20], ten different surfaces were examined with variable time windows, achieving a highest accuracy of 94.6%. These experiments were conducted at a constant speed, so further validation at varying robot speeds is needed. Moreover, it is essential to note that in practical applications, using a rigid rod to contact the ground is not feasible: this approach introduces complexities in the vehicle's kinematics and requires additional design considerations, which are not currently addressed in the kinematic model. In other research, Nagy et al. [21] used tactile sensor data and machine learning models for road quality classification. Accelerometer and gyroscope sensors were used to record the data, and machine learning models such as decision trees were then used to classify road quality based on features obtained using the power spectrum and principal component analysis.

2.3 Image-based identification

Numerous methods in the literature use convolutional neural networks (CNNs), the most popular choice, for image-based surface identification [22]. The difficulty with such methods lies in generating a sufficient number of images to properly train the neural networks. A benefit of this approach is its capability to identify the surface even when the robot is not in motion. Wang et al. [23] present a combined deep learning and support vector machine approach to identify terrain types for robots. The algorithms were trained using a dataset comprising 30,000 images representing six distinct types of terrain, and the method achieved an accuracy of up to 87%. Demirtaş et al. [24] introduced a method for classifying indoor and outdoor surfaces based on images. Initially, they created a database of 2081 images representing various surfaces like carpets, tiles, and wood. These surfaces were effectively categorized using a robust deep-learning model, resulting in an outstanding accuracy rate of 99.52%. Additionally, the authors expressed a high level of confidence in asserting that their model offers faster loading and decreased processing durations compared to other models documented in the existing literature. In another study, the authors combined vibration and image texture data for a terrain identification task [25]. Time-domain and frequency-domain features of the vibration signal and gray-level co-occurrence matrix features from the image are extracted, and a weighted k-nearest neighbor classifier is then used to classify different terrains.

2.4 Acoustic-based identification

The utilization of this technique for surface identification is uncommon. Acoustic-based surface identification is advantageous for mobile robots operating outdoors, where conventional visual or tactile sensing techniques may encounter limitations or inconsistencies. These methods involve capturing and analyzing acoustic signals generated during the interaction between a robot and various surfaces. In their work, Libby and Stentz [26] explore the application of sound data as an innovative sensing modality for categorizing different surfaces on which a robot traverses in an outdoor setting.

Acoustic recordings were acquired by mounting microphones on a mobile robot that navigated diverse outdoor terrains. The measurements were taken at different velocities, and features such as spectral flux and rate of energy change were considered in building the classification model. Subsequently, the data was labelled and used to train a multi-class classifier. The trained classifier can discern between distinct interactions solely based on acoustic data and was subsequently used to classify five different surfaces. Overall, they achieved a 92% true positive rate. Valada et al. [27] presented a standard method for categorizing audio sequences, utilizing spectrogram extraction from recorded audio data for classification with a CNN. Their dataset spans 2.15 hours and encompasses nine distinct surfaces, with recordings conducted at various vehicle speeds. They report an impressive overall classification accuracy of 98.5%. However, it is worth mentioning that classification accuracy notably declines when the vehicle speed falls below 1 m/s.

3. Method

The proposed machine-learning-based method for mobile robot terrain classification is shown in Figure 1. In the initial phase, sensor readings are collected using a robot equipped with IMU sensors. The collected data is then pre-processed, and important features are extracted. The extracted features are used to build the machine learning model, and in the final stage, the trained model is used for terrain classification. Each of these stages is explained below.

Figure 1. Proposed machine learning pipeline for terrain identification

3.1 Robot with IMU sensors

The system's work starts with data collection using the robot shown in Figure 2. This robot is an in-house development with compact dimensions: a width of 21 cm, a length of 28 cm, and a height of 8 cm. It has a total weight of 685 grams, making it lightweight and efficient for its intended applications. The robot, equipped with an IMU sensor, is used to record the readings. The IMU sensor used in the experimentation is an XSENS MTi-300, which offers high accuracy, robust performance, and ease of integration. Key specifications of this sensor are as follows: roll/pitch accuracy: ±0.2° RMS; real-time data rates up to 2 kHz; dimensions: 57 × 41.9 × 23.6 mm; weight: 55 g; operating temperature: -40℃ to +85℃; typical power consumption: 520 mW.

An IMU can measure and communicate the specific force and angular rate of an object to which it is affixed. An IMU comprises gyroscopes for measuring angular rate and accelerometers for measuring specific acceleration; it is thus a sensor capable of tracking movement across multiple axes. The system measures orientation, angular velocity, and linear acceleration along various axes, and these readings are given as input to the proposed system. During data collection, 900 to 1000 samples were collected for each terrain type. Each sample has 128 measurements per 1-second time series, plus three ID columns identifying the terrain type and measurement number. The sampling rate of the IMU sensor was 128 Hz.

Figure 2. Robot used in the experiment with IMU sensors
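As a concrete illustration of this data layout, the sketch below reshapes a continuous 128 Hz recording into the 1-second, 128-step windows described above. The channel names and their ordering are our assumptions for illustration, not the dataset's actual column names.

```python
import numpy as np

N_STEPS = 128  # 128 Hz sampling rate -> 128 measurements per 1 s window
CHANNELS = [
    "quat_w", "quat_x", "quat_y", "quat_z",  # orientation (attitude quaternion)
    "gyro_x", "gyro_y", "gyro_z",            # angular rate
    "acc_x", "acc_y", "acc_z",               # linear acceleration
]

def to_windows(recording: np.ndarray) -> np.ndarray:
    """Reshape a continuous recording of shape (n_steps, 10) into
    1-second windows of shape (n_windows, 128, 10)."""
    n_windows = recording.shape[0] // N_STEPS
    trimmed = recording[: n_windows * N_STEPS]  # drop any incomplete tail window
    return trimmed.reshape(n_windows, N_STEPS, len(CHANNELS))
```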

3.2 Data pre-processing

Pre-processing is necessary for maintaining data integrity by detecting and rectifying errors, filling in missing values, addressing outliers, and resolving inconsistencies within the dataset. Pre-processing methods like smoothing or filtering help diminish noise or extraneous details within the data, thereby facilitating the model's ability to discern patterns and generate precise predictions. In our case, analysis of the collected data showed no corrupted values within the data series, and the time axis was consistently sampled; therefore, there was no need to interpolate missing values. The interquartile range (IQR) method was used to remove outliers in the angular velocity readings. The linear acceleration readings had high kurtosis and were almost centered at zero, so, to make the classes separable, the feature extraction step derives important statistical features from the raw readings.
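A minimal sketch of IQR-based outlier handling follows. The paper does not state the fence multiplier or whether outlying points were dropped or clipped, so the conventional 1.5 × IQR fences and clipping used here are assumptions.

```python
import numpy as np

def iqr_clip(x: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Limit values to [Q1 - k*IQR, Q3 + k*IQR]; points outside these
    fences are treated as outliers and clipped to the fence values."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return np.clip(x, q1 - k * iqr, q3 + k * iqr)

# Applied per angular-velocity channel, e.g. gyro_x = iqr_clip(gyro_x)
```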

3.3 Feature extraction

To capture the nature of the robot's movement, we have extracted a total of 13 statistics: mean, standard deviation, minimum value, maximum value, median, absolute deviation, mean absolute change, absolute minimum, absolute maximum, absolute average, quantile, skewness, and kurtosis. These derived features represent various statistical characteristics of the robot's movement and are used to train the machine learning model.

Statistical features are computed for each window of 1 second independently. For the classification of robot terrain using IMU sensor data, statistical features are essential for capturing the subtle variations in sensor readings. These features condense raw sensor data, facilitating the differentiation of various terrain types. By extracting and analyzing statistical features, the system can effectively identify terrain characteristics, even when the sensor data is complex or contains a degree of noise. Incorporating a diverse range of statistical features enhances the input for machine learning models, enabling them to identify intricate patterns and improve predictive accuracy. Features such as the median and absolute deviation are particularly valuable as they are less affected by noise or outliers, enhancing classification performance on irregular surfaces. Statistical features also distil large datasets into a concise set of descriptors, making them computationally efficient for use in classification models. The significance of each statistical feature is explained below:

  1. Mean: Indicates the central tendency of the data and helps identify overall trends. For example, a consistently higher mean in acceleration relates to harder terrains like concrete.
  2. Standard deviation: High variability might indicate uneven terrains like hard tiles with gaps, while low variability signifies smoother surfaces like carpets.
  3. Minimum value: Minimum value is useful for detecting outliers or extreme dips in sensor readings, which might occur during sudden drops or soft terrain interactions like carpet.
  4. Maximum value: It reflects extreme forces or impacts, such as those experienced on hard tiles or wooden surfaces.
  5. Median: Median provides a robust measure of central tendency less affected by outliers. It represents typical values on uneven terrains.
  6. Absolute deviation: This statistical feature highlights variability and is less sensitive to extreme outliers. It is effective in distinguishing terrains with subtle irregularities.
  7. Mean absolute change: This measures the rate at which sensor readings change over time. Elevated values could signify sudden transitions in terrain, such as moving from soft PVC flooring to hard tile surfaces.
  8. Absolute minimum: Useful for assessing the least intense interactions, which can occur on softer terrains.
  9. Absolute maximum: Highlights extreme intensity, often a key feature of hard or uneven terrains.
  10. Absolute average: This statistical feature offers a balanced magnitude assessment, accounting for both positive and negative variations. It is particularly effective for describing overall terrain roughness.
  11. Quantile: Quantiles help to identify the data distribution and variability, which is critical for terrains with mixed characteristics.
  12. Skewness: Positive skewness indicates that occasional high readings dominate, such as large bumps, while negative skewness could suggest predominantly low readings.
  13. Kurtosis: High kurtosis indicates the presence of outliers or extreme events, characteristic of irregular terrains like hard tiles with significant gaps.
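A hedged sketch of this computation for a single channel's 1-second window is given below. The paper does not specify the quantile level or the absolute-deviation variant used, so the 0.75 quantile and mean absolute deviation here are our assumptions.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def window_features(x: np.ndarray) -> dict:
    """Compute the 13 statistics for one 128-sample sensor window."""
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "min": np.min(x),
        "max": np.max(x),
        "median": np.median(x),
        "abs_dev": np.mean(np.abs(x - np.mean(x))),   # mean absolute deviation (assumed variant)
        "mean_abs_change": np.mean(np.abs(np.diff(x))),
        "abs_min": np.min(np.abs(x)),
        "abs_max": np.max(np.abs(x)),
        "abs_avg": np.mean(np.abs(x)),
        "quantile_75": np.quantile(x, 0.75),          # assumed quantile level
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
    }
```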

3.4 Machine learning models

Multiple machine learning models, including decision trees, random forests, support vector machines, the light gradient boosting machine (LGBM), and extreme gradient boosting (XGBoost) classifiers, were tested on the derived features. After evaluating the different models, the LGBM classifier was found to outperform the rest, so the final system was implemented using the LGBM classifier.

LGBM and XGBoost are both widely used ensemble models for classification tasks. LGBM employs a leaf-wise tree growth strategy, resulting in deeper decision trees that capture detailed information. On the other hand, XGBoost uses a level-wise tree growth approach, leading to a more balanced tree structure at a higher computational cost. Notably, LGBM requires less time and memory compared to XGBoost when handling large datasets [28]. Developed by researchers at Microsoft and Peking University, LGBM is an innovative tree-based ensemble learning technique designed to tackle the efficiency and scalability challenges encountered by XGBoost in scenarios involving high-dimensional input features and large datasets. LGBM leverages two main approaches: exclusive feature bundling (EFB) and gradient-based one-side sampling (GOSS) [29]. Additionally, LGBM utilizes a histogram-based algorithm, binning continuous feature values into discrete bins to expedite the training process [30, 31].
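To make the model-building step concrete, the sketch below trains an LGBM classifier on a feature matrix of the kind produced in Section 3.3. The placeholder data, split ratio, and hyperparameters (n_estimators, num_leaves, learning_rate) are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(9000, 130))    # placeholder features: e.g. 13 stats x 10 channels per window
y = rng.integers(0, 9, size=9000)   # placeholder terrain labels 0..8

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = LGBMClassifier(
    n_estimators=200,   # number of boosting rounds
    num_leaves=31,      # leaf-wise growth: cap on leaves per tree
    learning_rate=0.1,
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```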

3.5 Prediction of surface

The final step of the proposed system is to predict the type of surface on which the robot is moving. The system predicts nine different surfaces: fine concrete, concrete, soft tiles, tiled, soft PVC, carpet, wood, hard tiles, and hard tiles with large space. The selected nine terrains represent unique physical properties of surfaces, such as texture, hardness, and flexibility. These surfaces cover a wide spectrum of rigid surfaces (e.g., fine concrete, concrete, hard tiles), flexible or soft surfaces (e.g., carpet, soft PVC), and moderate textures (e.g., wood, tiled surfaces). Differentiating these terrains allows the robot to adapt to environments ranging from industrial floors to domestic settings, improving navigation and functionality. The study ensures the relevance of robot applications in diverse fields by including these common indoor and outdoor terrains. By selecting these terrains, the study also balances practical relevance, sensor diversity, and algorithmic challenges, ensuring a robust evaluation of the robot's terrain classification capabilities.

4. Results and Discussions

This section focuses on the experimentation carried out using the proposed system, the results obtained for different machine learning methods, their performance comparison, and the discussion about the real-time use of the proposed system.

The robot has IMU sensors to record movement readings and create a dataset. The following three parameters are measured using the IMU sensors [32].

Orientation: Four attitude quaternion channels representing the scalar and vector components.

Angular rate: Three values along the orthogonal IMU coordinate axes X, Y, and Z.

Acceleration: Three force values corresponding to the orthogonal IMU coordinate axes X, Y, and Z.

Figure 3. Density plot for linear acceleration in X direction

After data pre-processing and initial analysis, it was observed that a few features were not linearly separable. Figure 3 shows the density plot for linear acceleration in the X-direction, measured using the IMU sensors; the data is not linearly separable and is almost centered at zero. To overcome this problem, we extracted the 13 statistical features from the dataset, which are then used for the proposed system's implementation and experimental evaluation. Feature extraction is performed in the time domain and aims to acquire a concise and straightforward representation of the readings from the IMU sensors. The extracted features are the mean, standard deviation, maximum and minimum values, median, absolute deviation, mean absolute change, absolute minimum, absolute maximum, absolute average, quantile, skewness, and kurtosis. These derived features represent various statistical characteristics and insights into the nature of the motion being measured.

The various machine learning algorithms are then implemented using the derived features, and their performance is evaluated using different metrics such as accuracy, precision, recall, and F1-score, as shown in Eqs. (1)-(4), respectively.

$\text{Accuracy}=\frac{TP+TN}{TP+FP+FN+TN}$           (1)

$\text{Precision}\ (P)=\frac{TP}{TP+FP}$              (2)

$\text{Recall}\ (R)=\frac{TP}{TP+FN}$              (3)

$\text{F1-score}=\frac{2 \times P \times R}{P+R}$                (4)

where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
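As a sketch, the metrics in Eqs. (1)-(4) can be computed per class with scikit-learn, reusing the fitted clf and the held-out split from the earlier snippet:

```python
from sklearn.metrics import accuracy_score, classification_report

y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
# classification_report prints per-class precision, recall, and F1-score,
# plus the macro and weighted averages of the kind shown in Table 1.
print(classification_report(y_test, y_pred, digits=2))
```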

We finally selected LGBM to implement the system, and extensive experimentation was carried out using the LGBM classifier. The classification report of the LGBM classifier is shown in Table 1.

Table 1. Classification report for LGBM classifier

| Class        | Precision | Recall | F1-Score |
|--------------|-----------|--------|----------|
| 0            | 0.88      | 0.90   | 0.89     |
| 1            | 0.87      | 0.94   | 0.90     |
| 2            | 0.97      | 0.93   | 0.95     |
| 3            | 0.91      | 0.92   | 0.92     |
| 4            | 0.95      | 0.92   | 0.94     |
| 5            | 0.97      | 0.90   | 0.93     |
| 6            | 0.92      | 0.77   | 0.84     |
| 7            | 1.00      | 0.50   | 0.67     |
| 8            | 0.88      | 0.90   | 0.89     |
| Accuracy     |           |        | 0.91     |
| Macro avg    | 0.93      | 0.85   | 0.88     |
| Weighted avg | 0.91      | 0.91   | 0.91     |

Table 2. Comparative analysis of the performance of different classifiers for terrain identification

| Model                        | Accuracy | Balanced Accuracy | F1-Score | Time Taken (s) |
|------------------------------|----------|-------------------|----------|----------------|
| LGBM classifier              | 0.91     | 0.88              | 0.91     | 16.55          |
| XGB classifier               | 0.89     | 0.86              | 0.89     | 17.00          |
| Random Forest                | 0.88     | 0.83              | 0.88     | 10.15          |
| Bagging classifier           | 0.83     | 0.78              | 0.83     | 15.22          |
| Extra trees classifier       | 0.76     | 0.74              | 0.76     | 1.50           |
| Decision tree                | 0.76     | 0.70              | 0.76     | 1.80           |
| K-nearest neighbour          | 0.74     | 0.69              | 0.74     | 0.56           |
| SVC                          | 0.71     | 0.59              | 0.69     | 2.16           |
| Logistic regression          | 0.68     | 0.42              | 0.68     | 0.46           |
| Linear SVC                   | 0.67     | 0.60              | 0.67     | 8.17           |
| Linear discriminant analysis | 0.62     | 0.56              | 0.62     | 0.28           |

We have implemented and evaluated 11 machine-learning algorithms on the dataset for terrain classification. Table 2 presents comparative results for the various machine learning models. The LGBM classifier outperforms all other machine learning algorithms. It is a fast, distributed, high-performance gradient-boosting framework constructed using decision tree classifiers: the ensemble of weak learners is built sequentially, such that each new weak learner is constructed to reduce the classification errors made by the previous learner. Unlike traditional gradient boosting techniques, it uses a histogram-based algorithm to select the best feature to split on at each node of the decision tree. Its ability to handle large datasets with higher-dimensional feature sets, together with its speed and memory efficiency, makes it one of the best-performing and most popular classifiers in the literature.

From the experimental evaluation, it is clear that the proposed method achieved an overall classification accuracy of 91%. Table 1 also shows the precision, recall, and F1-score [33] for all nine classes in the dataset. For most of the classes, these values are greater than 90%. This indicates a high performance level and demonstrates the proposed approach's effectiveness in predicting the terrain on which the robot is moving. The achieved accuracy highlights the potential of our model in mobile robot terrain classification.

The robot's speed impacts the accuracy of terrain classification because of how sensor data is collected and processed. At higher speeds, there is increased noise in the IMU sensor's data, which blurs the distinct patterns associated with different terrains and reduces the resolution of the terrain's features. Lower speeds yield more stable data and better resolution of terrain-specific features. Speed also affects how the robot's wheels interact with the terrain: on slippery surfaces (e.g., tiles), higher speed might cause skidding, which could distort classification patterns. In this study, the impact of the robot's speed on terrain classification accuracy was not explicitly considered. The primary objective was to evaluate the system's ability to distinguish between nine distinct terrains under consistent operational conditions. While speed influences sensor readings and classification performance, it was assumed the robot would operate within a moderate and constant speed range appropriate for a typical surface and environment.

Figure 4. Normalized confusion matrix of LGBM classifier

Figure 4 shows the normalized confusion matrix for the LGBM classifier. The rows in the confusion matrix represent the instances of the actual class, that is, the true floor type, and the columns represent the surface types predicted by the classifier. Consequently, the diagonal values represent correctly predicted classes, that is, the accuracy of the classifier in predicting the instances of a particular class. On the other hand, the off-diagonal values represent the degree of incorrectly classified instances.
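A row-normalized confusion matrix of this kind can be produced as follows (a sketch reusing y_test and y_pred from the earlier snippets):

```python
from sklearn.metrics import confusion_matrix

# normalize="true" divides each row by the number of actual instances of
# that class, so each diagonal entry reads as the per-class recall.
cm = confusion_matrix(y_test, y_pred, normalize="true")
print(cm.round(2))
```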

The receiver operating characteristic (ROC) curve and the area under the ROC curve (ROC-AUC) are important metrics for evaluating the classification model [34]. The x- and y-axes represent the false positive rate (FPR) and true positive rate (TPR), defined in Eq. (5) and Eq. (6), respectively. False positives (FP) are negative instances wrongly classified as positive, true positives (TP) are positive instances correctly classified as positive, true negatives (TN) are negative instances correctly classified as negative, and false negatives (FN) are positive instances wrongly classified as negative.

$FPR=\frac{FP}{FP+TN}$                (5)

$TPR=\frac{TP}{TP+FN}$                (6)

The ROC curve and ROC-AUC for the classifier are shown in Figure 5. For almost all classes, the ROC-AUC is above 0.94, indicating that the classifier has excellent discriminatory power and performs very well in distinguishing the positive and negative classes. ROC-AUC values close to 1 specify that the classifier's predictions are highly consistent with the true class labels and that the model is reliable for predicting the terrain type on which the robot is moving.

Figure 5. ROC curve for LGBM classifier
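Per-class one-vs-rest ROC-AUC values of this kind can be computed from the classifier's predicted probabilities, as in the sketch below (again reusing clf, X_test, and y_test from the earlier snippets):

```python
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

proba = clf.predict_proba(X_test)                 # shape (n_samples, 9)
y_bin = label_binarize(y_test, classes=list(range(9)))
for c in range(9):
    # AUC for class c treated as the positive class against all others
    auc = roc_auc_score(y_bin[:, c], proba[:, c])
    print(f"class {c}: ROC-AUC = {auc:.3f}")
```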

The proposed system has two limitations. First, the terrain classification system may struggle to obtain clear, consistent data if the robot's sensors are blocked or misaligned by physical objects. This could lead to misclassifications, as the terrain data might not represent the actual surface but rather a disturbed sensor reading caused by the obstruction. Second, in the proposed system, the impact of the robot's speed on terrain classification accuracy was not explicitly considered.

Our analysis revealed that misclassified samples are complex and of poor quality, making them hard to classify. To address this, we plan to improve data quality and diversity by incorporating more complex samples. The model will be retrained and fine-tuned with these enhancements. This approach aims to make the model more robust for challenging cases.

5. Conclusion

The use of mobile robots in various fields has increased significantly. Predicting the type of terrain on which a robot is moving is crucial for its navigation and movement. This paper aims to identify the terrain on which a robot operates by creating a dataset for experimental evaluation using IMU sensors mounted on a mobile robot. The dataset contains information about the robot's orientation, angular speed, and linear acceleration while traversing nine distinct surface types. From this data, 13 statistical features are extracted for the experimentation of the proposed system. Various state-of-the-art machine learning models were tested on this dataset, and their performance was compared. The analysis indicated that the LGBM model outperformed the other techniques. Extensive experimentation with LGBM resulted in a 91% accuracy in predicting a mobile robot's terrain type, demonstrating the efficiency and feasibility of the proposed system.

Future work could involve applying deep-learning techniques to improve accuracy and overall performance, and analyzing the effect of varying speeds to enhance the model's robustness and generalizability across diverse real-world scenarios. The work could also be extended with a more detailed analysis of the misclassified samples and potential strategies to improve the classification accuracy for these cases.

Acknowledgment

This work was supported by the Research Support Fund (RSF) of Symbiosis International (Deemed University), Pune, India.

References

[1] Niloy, M.A., Shama, A., Chakrabortty, R.K., Ryan, M.J., et al. (2021). Critical design and control issues of indoor autonomous mobile robots: A review. IEEE Access, 9: 35338-35370. https://doi.org/10.1109/ACCESS.2021.3062557

[2] Rubio, F., Valero, F., Llopis-Albert, C. (2019). A review of mobile robots: Concepts, methods, theoretical framework, and applications. International Journal of Advanced Robotic Systems, 16(2): 1-22. https://doi.org/10.1177/1729881419839596

[3] Bai, C., Guo, J., Zheng, H. (2019). Three-dimensional vibration-based terrain classification for mobile robots. IEEE Access, 7: 63485-63492. https://doi.org/10.1109/ACCESS.2019.2916480

[4] Gillham, M., Howells, G., Spurgeon, S., McElroy, B. (2013). Floor covering and surface identification for assistive mobile robotic real-time room localization application. Sensors, 13(12): 17501-17515. https://doi.org/10.3390/s131217501

[5] Park, B., Kim, J., Lee, J. (2012). Terrain feature extraction and classification for mobile robots utilizing contact sensors on rough terrain. Procedia Engineering, 41: 846-853. https://doi.org/10.1016/j.proeng.2012.07.253

[6] Tick, D., Rahman, T., Busso, C., Gans, N. (2012). Indoor robotic terrain classification via angular velocity based hierarchical classifier selection. In 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, pp. 3594-3600. https://doi.org/10.1109/ICRA.2012.6225128

[7] Yasuda, Y.D., Martins, L.E.G., Cappabianco, F.A. (2020). Autonomous visual navigation for mobile robots: A systematic literature review. ACM Computing Surveys (CSUR), 53(1): 13. https://doi.org/10.1145/3368961

[8] Mahadhir, K.A., Tan, S.C., Low, C.Y., Dumitrescu, R., Amin, A.T.M., Jaffar, A. (2014). Terrain classification for track-driven agricultural robots. Procedia Technology, 15: 775-782. https://doi.org/10.1016/j.protcy.2014.09.050

[9] Fritz, L., Hamersma, H.A., Botha, T.R. (2023). Off-road terrain classification. Journal of Terramechanics, 106: 1-11. https://doi.org/10.1016/j.jterra.2022.11.003

[10] Kaajakari, V. (2009). Practical Mems: Design of Microsystems, Accelerometers, Gyroscopes, RF Mems, Optical Mems, and Microfluidic Systems. Small Gear Publishing.

[11] Yandun Narváez, F., Gregorio, E., Escolà, A., Rosell-Polo, J.R., Torres-Torriti, M., Auat Cheein, F. (2018). Terrain classification using ToF sensors for the enhancement of agricultural machinery traversability. Journal of Terramechanics, 76: 1-13. https://doi.org/10.1016/j.jterra.2017.10.005

[12] Mei, M., Chang, J., Li, Y., Li, Z., Li, X., Lv, W. (2019). Comparative study of different methods in vibration-based terrain classification for wheeled robots with shock absorbers. Sensors, 19(5): 1137. https://doi.org/10.3390/s19051137

[13] Nampoothiri, M.H., Vinayakumar, B., Sunny, Y., Antony, R. (2021). Recent developments in terrain identification, classification, parameter estimation for the navigation of autonomous robots. SN Applied Sciences, 3: 480. https://doi.org/10.1007/s42452-021-04453-3

[14] Csík, D., Odry, Á., Sárosi, J., Sarcevic, P. (2021). Inertial sensor-based outdoor terrain classification for wheeled mobile robots. In 2021 IEEE 19th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, pp. 159-164. https://doi.org/10.1109/SISY52375.2021.9582504

[15] Shin, D., Lee, S., Hwang, S. (2021). Locomotion mode recognition algorithm based on Gaussian mixture model using IMU sensors. Sensors, 21(8): 2785. https://doi.org/10.3390/s21082785

[16] Knuth, T., Groves, P. (2023). IMU based context detection of changes in the terrain topography. In 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, pp. 680-690. https://doi.org/10.1109/PLANS53410.2023.10140086

[17] Thavitchasri, P., Maneetham, D., Crisnapati, P.N. (2024). Intelligent surface recognition for autonomous tractors using ensemble learning with BNO055 IMU sensor data. Agriculture, 14(9): 1557. https://doi.org/10.3390/agriculture14091557

[18] Krebs, A., Pradalier, C., Siegwart, R. (2010). Adaptive rover behavior based on online empirical evaluation: Rover-terrain interaction and near-to-far learning. Journal of Field Robotics, 27(2): 158-180. https://doi.org/10.1002/rob.20332

[19] Mason, W., Brenken, D., Dai, F.Z., Castillo, R.G.C., Cormier, O.S.M., Sedal, A. (2024). Acoustic tactile sensing for mobile robot wheels. arXiv preprint arXiv:2402.18682. https://doi.org/10.48550/arXiv.2402.18682

[20] Giguere, P., Dudek, G. (2011). A simple tactile probe for surface identification by mobile robots. IEEE Transactions on Robotics, 27(3): 534-544. https://doi.org/10.1109/TRO.2011.2119910

[21] Nagy, R., Kummer, A., Abonyi, J., Szalai, I. (2024). Machine learning-based soft-sensor development for road quality classification. Journal of Vibration and Control, 30(11-12): 2672-2684. https://doi.org/10.1177/10775463231183307

[22] Gonzalez, R., Iagnemma, K. (2018). Deepterramechanics: Terrain classification and slip estimation for ground robots via deep learning. arXiv preprint arXiv:1806.07379. https://doi.org/10.48550/arXiv.1806.07379

[23] Wang, W., Zhang, B., Wu, K., Chepinskiy, S.A., Zhilenkov, A.A., Chernyi, S., Krasnov, A.Y. (2022). A visual terrain classification method for mobile robots’ navigation based on convolutional neural network and support vector machine. Transactions of the Institute of Measurement and Control, 44(4): 744-753. https://doi.org/10.1177/0142331220987917

[24] Demirtaş, A., Erdemir, G., Bayram, H. (2024). Indoor surface classification for mobile robots. PeerJ Computer Science, 10: e1730. https://doi.org/10.7717/peerj-cs.1730

[25] Wang, H., Lu, E., Zhao, X., Xue, J. (2023). Vibration and image texture data fusion-based terrain classification using WKNN for tracked robots. World Electric Vehicle Journal, 14(8): 214. https://doi.org/10.3390/wevj14080214

[26] Libby, J., Stentz, A.J. (2012). Using sound to classify vehicle-terrain interactions in outdoor environments. In 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, pp. 3559-3566. https://doi.org/10.1109/ICRA.2012.6225357 

[27] Valada, A., Spinello, L., Burgard, W. (2018). Deep feature learning for acoustics-based terrain classification. Robotics Research, 2: 21-37. https://doi.org/10.1007/978-3-319-60916-4_2

[28] Li, Z., Zuo, J., Wu, J., Liu, Z., Song, Y., Geng, S. (2021). Application of decision tree integrated hybrid classifier in feature-fused robot big data. In Proceedings of the 2021 1st International Conference on Control and Intelligent Robotics, Guangzhou, China, pp. 681-686. https://doi.org/10.1145/3473714.3473832

[29] Soomro, A.A., Mokhtar, A.A., Hussin, H.B., Lashari, N., Oladosu, T.L., Jameel, S.M., Inayat, M. (2024). Analysis of machine learning models and data sources to forecast burst pressure of petroleum corroded pipelines: A comprehensive review. Engineering Failure Analysis, 155: 107747. https://doi.org/10.1016/j.engfailanal.2023.107747

[30] Ke, G., Meng, Q., Finley, T., Wang, T., et al. (2017). LightGBM: A highly efficient gradient boosting decision tree. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, California, USA, pp. 3149-3157.

[31] McCarty, D.A., Kim, H.W., Lee, H.K. (2020). Evaluation of light gradient boosted machine learning technique in large scale land use and land cover classification. Environments, 7(10): 84. https://doi.org/10.3390/environments7100084

[32] You, Z., Wang, X. (2018). Chapter 7—Miniature inertial measurement unit. In Space Microsystems and Micro/Nano Satellites, Butterworth-Heinemann: Oxford, UK, pp. 233-293.

[33] Guns, R., Lioma, C., Larsen, B. (2012). The tipping point: F-score as a function of the number of retrieved items. Information Processing & Management, 48(6): 1171-1180. https://doi.org/10.1016/j.ipm.2012.02.009

[34] Pawar, V., Bhosale, N.P. (2018). Internet-of-things based smart local bus transport management system. In 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, pp. 598-601. https://doi.org/10.1109/ICECA.2018.8474728