Compressed High Resolution Satellite Image Processing to Detect Water Bodies with Combined Bilateral Filtering and Threshold Techniques

Chatragadda Rajyalakshmi, Koritepati Ram Mohan Rao, Ramisetty Rajeswara Rao

CSE Department, JNTU Kakinada, Kakinada 533003, India

NRSC, ISRO, Balanagar, Hyderabad 500037, India

CSE Department, JNTUK, UCEV, Vizianagaram 535003, India

Corresponding Author Email: chrajyalakshmi84@gmail.com
Pages: 669-675
DOI: https://doi.org/10.18280/ts.390230
Received: 12 February 2022 | Revised: 3 April 2022 | Accepted: 15 April 2022 | Available online: 30 April 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Water body extraction has become an increasingly important source of information for predicting heavy flood situations over the last few decades. Several approaches have been developed for identifying water bodies from high-resolution satellite data with varying spatial, spectral, and temporal characteristics. The vast amount of data now available, supported by the open-access policies adopted by many agencies, is transforming the remote sensing environment and underlining the need for fast and efficient data processing. The present study describes an autonomous method for extracting water bodies from satellite images that combines a single-band threshold approach with bilateral filtering. One important goal is to interpret raw image data with low-cost software so that multi-temporal and multi-sensor images can be evaluated. The workflow is built on a free and open-source software processing framework. Water was distinguished accurately and quickly from other land cover features.

Keywords: 

multi-sensor, high resolution, water body, bilateral filter, threshold

1. Introduction

Water-body extraction is becoming increasingly significant as large volumes of high-resolution remote sensing images become available, and manually digitising water bodies is a time-consuming task. Water bodies in nature change with the season and the surrounding environment. Owing to spectral signatures such as the Normalized Difference Water Index (NDWI), water-body extraction algorithms have historically been applied to multispectral images [1]. Despite their rich spectral content, multispectral images usually have three to four times coarser spatial resolution than panchromatic images [2]. To extract small water bodies to the fullest extent possible, a system based on high-resolution panchromatic images is presented.
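As a concrete illustration of the NDWI mentioned above, the index can be computed per pixel from the green and near-infrared bands. The sketch below uses NumPy with tiny synthetic bands; the array values are invented for illustration only.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters' Normalized Difference Water Index:
    NDWI = (Green - NIR) / (Green + NIR).
    Water pixels tend toward positive values because water
    reflects green light but absorbs near-infrared."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + 1e-12)  # epsilon avoids /0

# Toy 2x2 bands: top row "water" (high green, low NIR),
# bottom row "vegetation" (low green, high NIR).
green = np.array([[80, 90], [30, 25]], dtype=float)
nir = np.array([[10, 12], [70, 80]], dtype=float)
water_mask = ndwi(green, nir) > 0
```

Thresholding NDWI at zero is the classic rule of thumb; in practice the cut-off is tuned per sensor and scene.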

Unlike multispectral images, panchromatic images do not show water bodies distinctly [3]. Due to their low signature, water bodies are easily confused with shadows and moist agricultural plains, where many water-body extraction algorithms fail. Furthermore, manual digitization of naturally formed water bodies with irregular borders takes longer [4]. Some applications, such as DEM production, flood monitoring, and other environmental studies, require highly precise boundary definitions. Earlier approaches relied on training sets with a fixed window size [5]. The use of such training sets is not suitable for all types of water bodies, for example those with transparent sediments captured by different sensors at different times of year [6].

In light of these issues, we propose an automated, statistically based water-body extraction technique. It combines reflectance, texture, and a human activity index [7]. The initial segmentation of a water body is based on automatic estimation of a threshold value by the entropy-maximisation principle [8]. False alarms are reduced further by distinguishing between homogeneous and heterogeneous objects using object-level information such as the human activity index [9]. A contour-refinement technique is also incorporated for accurate delineation of boundaries and of small land areas within water bodies.
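The entropy-maximisation thresholding cited above can be sketched as follows. This is a generic Kapur-style implementation in NumPy applied to a synthetic image, not the authors' exact code.

```python
import numpy as np

def max_entropy_threshold(img):
    """Choose the threshold t that maximises the summed Shannon
    entropies of the below-t and above-t grey-level distributions
    (the entropy-maximisation principle)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue  # one class empty: threshold not usable
        q0 = p[:t][p[:t] > 0] / p0  # normalised background distribution
        q1 = p[t:][p[t:] > 0] / p1  # normalised foreground distribution
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal toy image: dark "water" patch inside a bright background.
img = np.full((64, 64), 200, dtype=np.uint8)
img[16:48, 16:48] = 40
t = max_entropy_threshold(img)
water = img < t
```

On real imagery the histogram is continuous rather than two spikes, which is where the entropy criterion earns its keep.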

The proposed methodology was tested on a Cartosat-1 image. The evaluation revealed that detection efficiencies of more than 90% are achievable. Extracting water bodies from highly compressed satellite images is a crucial challenge in remote sensing [10]. Deep learning has grown prominent for extracting water bodies from remote sensing images; these solutions, however, are usually targeted at a single sensor and do not generalize [11]. As a result, we developed a novel network, the Dense-local-feature-compression (DLFC) network, with the goal of automatically extracting water bodies from various remote sensing images [12]. The densely connected module of DenseNet allows each layer to receive the feature maps of every layer before it; when joining layers, a concatenation operation on the feature dimension is employed [13]. This realizes feature reuse at various levels. A local-feature-compression module is introduced before the concatenation step [14].

Previous research has shown that the convolution operation can be used to obtain more abstract features. The DLFC combines the spatial and spectral information in remote sensing images to recover water bodies from a range of image sources [15].

Furthermore, we create a new water-body dataset using Cartosat-1 remote sensing imagery. On remote sensing images from Cartosat-1, GaoFen-6, Sentinel-2, and ZY-3 [16], the proposed DLFC performed very well. Compared with traditional water-body extraction technologies and current networks, DLFC is a large improvement [17]. Using the DLFC approach, water bodies in multi-source remote sensing imagery can be separated automatically and swiftly [18]. Water body detection from satellite data is important for DEM construction, flood monitoring, flood damage analysis, evaluating the risk posed by lakes that have been dammed naturally or artificially, and so on.

2. Materials and Methods

2.1 Satellite raw image processing

The geometric adjustment was conducted based on latitude and longitude data, which is unprocessed instrument data in a geo-referenced, compressed format. Opening this data in a software environment is a challenging task. In this technique, geometric correction is done in a methodical manner [1]. Satellite image processing is utilized for a variety of purposes, including Earth observation, space astronomy, scientific study, weather forecasting, and detecting flood-zone disaster areas. The level of detail is described by the satellite's spatial resolution [2]. Every satellite image comprises thousands of tiny dots known as pixels. If a part of an image holds no information relevant to a given application, processing the entire image is unnecessary [3]. As a result, the image is broken into several sections [4]. Image segmentation groups pixels with comparable properties [5].

We can generate a pixel-wise mask for every object in a picture using image segmentation, giving us better knowledge of the objects in the image [5]. Spectral indices such as NDWI have typically been used for extracting water bodies from multispectral images. In previous research, a model based on EOS/MODIS data that segments water bodies using an NDWI criterion has been described. Some researchers used supervised classification algorithms to extract water areas in a given region. Single-band density slicing [6], spectral water indices [7, 8], object-oriented approaches [9-12], and deep learning algorithms [13, 14] are the methods most often used for extracting water bodies from remote sensing images in prior research [15]. Among the available water-body mapping methods, the spectral-water-index-based method is the most reliable. Water indices have been proposed since the 1990s; the normalized difference water index (NDWI) was introduced by McFeeters [16]. Although water has the lowest reflectance within the panchromatic band's spectral bandwidth, it is difficult to distinguish water bodies automatically [17]. This is because, in addition to wet lands, the shadows of mountains, clouds, and buildings also have low reflectance values and hence overlap with those of water pixels, making separation difficult [18].

This methodology employs a multi-level segmentation approach to locate possible water bodies based on reduced reflectance, which is then refined using textural measurements to distinguish cloud shadows from wet agricultural soils [19]. The lower-reflectance threshold is obtained automatically using an entropy-based approach. Several efforts to combine the spectral and spatial information of hyperspectral images [20] in the classification process have recently been reported for detecting water bodies from spatial images [21]. For example, a pixel-based classifier employing spatial information represented as a grey-level co-occurrence matrix (GLCM) [22, 23] or extended morphological profiles (EMP) [24] can create more accurate classification maps, as can the pixel shape index [25, 26], Gabor filter texture information [27, 28], wavelet texture features [29, 30], and so forth. In previous research, many segmentation algorithms were used to perform spectral-spatial classification: watershed [31], mean shift [32, 33], hierarchical segmentation [34, 35], superpixel segmentation [36], extraction and classification of homogeneous objects [37], minimum spanning forest [38], fractal net evolution approach-based segmentation [39], and other techniques.
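As a small illustration of the GLCM texture measure cited above, the sketch below builds a horizontal co-occurrence matrix and the derived contrast statistic in NumPy. The 8-level quantisation and the toy images are illustrative choices, not taken from the paper.

```python
import numpy as np

def glcm_horizontal(img, levels=8):
    """Grey-level co-occurrence matrix for the (dx, dy) = (1, 0) offset:
    entry (i, j) counts how often quantised level i has level j as its
    right-hand neighbour."""
    q = (img.astype(np.int64) * levels) // 256  # quantise to `levels` bins
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m

def contrast(m):
    """GLCM contrast, sum_ij (i-j)^2 P(i,j): smooth water surfaces
    score near zero, textured land (or wet soil) scores high."""
    p = m / m.sum()
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

smooth = np.full((16, 16), 64, dtype=np.uint8)       # uniform "water"
rough = np.tile([0, 255], (16, 8)).astype(np.uint8)  # alternating texture
```

A classifier can use such statistics to separate genuinely smooth water from low-reflectance but textured surfaces like shadowed fields.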

Using a non-linear combination of values from nearby pixels, bilateral filtering smoothes the image while preserving edges. The procedure is non-iterative, local, and simple. It combines gray or colour levels based on their geometric closeness and photometric similarity, preferring near values to far values in both domain and range. Unlike filters that operate on each of the three bands of a colour image independently, a bilateral filter can work in the perceptually uniform CIE-Lab colour space, smoothing colours and preserving edges in a way calibrated to human perception. Unlike standard filtering, it does not create phantom colours along the edges of colour images, and it reduces ghosting.

2.2 Raw image open with current open-source technologies

As shown in Figure 1, we first create an empty RGB .tiff in Rasterio with the same characteristics as Band 4 (width, height, CRS, etc.) [7]. We then write the selected bands into this empty RGB image. Next, we reproject the natural-reserve boundary using the same projection as the original image. The RGB image is then opened, its metadata obtained, and the projected boundary applied as a mask to detect water areas. Using the ERDAS Imagine format, the compressed image can be opened through a free-software approach. For opening raw images, the OpenCV, Matplotlib, and Pillow libraries play a very important role. One image format that stores spatial data is GeoTIFF. A GeoTIFF file is a standard .tif image file with additional spatial (geo-referencing) information encoded as tags in the file. The tags carry metadata such as spatial extent, CRS, resolution, and so on, in addition to the pixel values. It is a common format for distributing satellite and aerial imagery.
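The band-stacking step described above can be sketched as follows. In a real workflow the three bands would be read from the GeoTIFF with Rasterio; here they are simulated NumPy arrays so that only the compositing and contrast stretch are shown, and the percentile stretch is an illustrative choice.

```python
import numpy as np

# In practice the bands would come from a GeoTIFF via Rasterio;
# here they are simulated 12-bit arrays so the example is self-contained.
rng = np.random.default_rng(0)
band4, band3, band2 = (rng.integers(0, 4096, (64, 64)) for _ in range(3))

def stretch(band, lo=2, hi=98):
    """Percentile contrast stretch to the 0..255 display range."""
    a, b = np.percentile(band, [lo, hi])
    scaled = np.clip((band - a) / (b - a + 1e-12), 0, 1)
    return (scaled * 255).astype(np.uint8)

# Stack as (rows, cols, 3): red = Band 4, green = Band 3, blue = Band 2.
rgb = np.dstack([stretch(band4), stretch(band3), stretch(band2)])
```

The resulting `rgb` array can be written back to a new GeoTIFF (copying the source profile) or displayed directly with Matplotlib.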

Figure 1. Block diagram for proposed methodology

3. Noise Removal Process

Noise can occur in a variety of ways in digital images. Imaging errors arise when pixel values are recorded incorrectly and do not reproduce the true intensity of the original scene, resulting in noise [40, 41]. Noise may be dependent on the image data or independent of it. An image can be systematically contaminated by noise. Three types of noise are commonly used to model noisy images: impulse noise, additive noise, and multiplicative noise [42].

Operation of morphological bilateral filtering. Bilateral filtering produces clean images while keeping edges by applying a nonlinear combination of surrounding image data [17]. It rests on the idea that two pixels can be close to one another in two senses: they can occupy nearby spatial positions, or they can have similar values. Similarity refers to closeness in range, whereas proximity refers to closeness in the domain. Traditional filtering is domain filtering: it enforces proximity by weighting pixel values with a factor that decreases with distance. Range filtering, by contrast, averages image values using weights that decay as the intensity difference grows. Range filters are non-linear because their weights depend on image intensity or colour, yet they are no more computationally complex than conventional non-separable filters. Bilateral filtering is the combination of domain and range filtering [15]. Applied to an image I, the bilateral filter gives the following output at pixel p:

$I^{filtered}(p)=\frac{1}{W_{p}} \sum_{q \in S} g_{s}(\|p-q\|)\, f_{r}(|I_{p}-I_{q}|)\, I_{q}$         (1)

Eq. (1) above is a normalized weighted average, where p and q are pixel coordinates, S is the spatial neighborhood of p, $g_s$ is a spatial Gaussian function that reduces the effect of distant pixels, $f_r$ is a range Gaussian that reduces the effect of a pixel q whose intensity differs from the intensity at p, and $I_p$ is the pixel intensity at p [8]. To eliminate noise, the majority of researchers have used Gaussian filters; even when the noise is largely removed, the output is blurred at the edges. A Gaussian filter smooths indiscriminately, which degrades resolution; the bilateral filter, in contrast, operates mainly on a region's inner pixels and leaves edges intact, making it an essential strategy for processing satellite images containing modest noise. This filtering method is based on a two-pronged approach: a bilateral filter is a Gaussian filter that has a strong effect in areas of uniform colour and a weak effect in areas of great colour variation. Since we expect a significant colour change at edges, the bilateral filter acts as an edge-preserving, or edge-aware, filter.

3.1 Optimization of bilateral filtering method with Gaussian Variance

Let us consider how the bilateral filter works. The Gaussian distribution again serves as a suitable starting point. We begin with its 2D form, with no normalization here [2]. Suppose the Gaussian is centred at point p, and we sample the PDF at a point q at distance d from p:

$N(d)=e^{-\left((q_{x}-p_{x})^{2}+(q_{y}-p_{y})^{2}\right) / 2 \sigma^{2}}$       (2)

A Gaussian filter applied to the pixel index p in image I can be written as:

$G(p)=\frac{1}{w} \sum_{q \in S} N(|p-q|)\, I_{q}$                  (3)

$w=\sum_{q \in S} N(|p-q|)$            (4)

where S signifies a tiny neighborhood of pixels surrounding p, $I_q$ denotes the image value at pixel index q, N stands for a Gaussian (also known as a Normal) distribution, and w is the normalization factor of Eq. (4), which ensures that image brightness is preserved during filtering.
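Eqs. (2)-(4) can be implemented directly. The following NumPy sketch (the window radius and sigma are illustrative choices) applies the normalised Gaussian window to every pixel:

```python
import numpy as np

def gaussian_filter(img, sigma=1.0, radius=2):
    """Direct implementation of Eqs. (3)-(4): each output pixel is a
    normalised, distance-weighted average over a (2r+1)^2 window."""
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    n = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))  # N(|p - q|), Eq. (2)
    n /= n.sum()                                   # 1/w normalisation, Eq. (4)
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            out[y, x] = (n * window).sum()         # Eq. (3)
    return out

flat = np.full((8, 8), 100.0)
assert np.allclose(gaussian_filter(flat), flat)  # brightness preserved
```

Because the kernel weights sum to 1, a constant image passes through unchanged; on noisy input the same averaging reduces variance, at the cost of blurring edges.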

The two-way (bilateral) noise-reduction strategy uses two kernel filters: a spatial kernel and a range kernel [43]. The spatial kernel uses the Gaussian function to smooth the image. Spatial weighting is based on the distance between pixels in the image (Euclidean distance), while range weighting is based on the similarity in intensity between two pixels. The spatial kernel $W_{\sigma_s}$ of Eq. (5) performs the spatial proximity measurement.

$W_{\sigma_{s}}(p, q)=\exp \left(-\|p-q\|^{2} / 2 \sigma_{s}^{2}\right)$         (5)

The range kernel $W_{\sigma_r}$ of Eq. (6) weights a pixel according to the difference between its intensity and the intensity at the centre of the analysis window; the equation shows how this range weight is calculated for each pixel [4].

$W_{\sigma_{r}}(p, q)=\exp \left(-|I(p)-I(q)|^{2} / 2 \sigma_{r}^{2}\right)$        (6)

3.1.1 Bilateral filtering computation

$BF(p)=\frac{1}{W_{p}} \sum_{q \in N(p)} G_{\sigma_{s}}(\|p-q\|)\, G_{\sigma_{r}}(|I_{p}-I_{q}|)\, I_{q}$          (7)

where $I_p$ is the intensity value at pixel location p, N(p) is a neighborhood of p, $W_p$ of Eq. (8) is the normalization factor that guarantees the pixel weights sum to 1, and I is a grey-level image.

$W_{p}=\sum_{q \in N(p)} G_{\sigma_{s}}(\|p-q\|)\, G_{\sigma_{r}}(|I_{p}-I_{q}|)$           (8)

Tomasi and Manduchi were the first to propose the bilateral filter, a classical edge-preserving smoothing method [18]. The filter is modulated by the similarity between the centre pixel and nearby pixels, using a function of the difference in intensity values between neighbouring pixels that is almost identical to the Gaussian.

By choosing Gaussian functions for both the spatial and the range filtering, the bilateral filtering concept extends Gaussian blurring in the spatial domain with additional range weights supplied by the intensity domain.

$G(p)=\frac{1}{w} \sum_{q \in S} N(|p-q|)\, N(|I_{p}-I_{q}|)\, I_{q}$              (9)

It turns out that weighting the contribution of each $I_q$ to $G(p)$ in Eq. (9) by the colour difference between p and q is a controllable method for modifying the filter in edge regions [2]. For the Gaussian spatial kernel, the spatial extent of the considered neighborhood is determined by the value of $\sigma_s$ [13].

Let $G_{\sigma_r}$ denote the Gaussian range kernel, with $\sigma_r$ determining the amplitude and weight at edges [18]. The new value of pixel (x, y) in image I is determined from the pixels in the vicinity of the corresponding pixel [11]. To regulate this second Gaussian on the colour information, a new sigma is added; depending on the size of the kernel, $\sigma$ may need to be raised. As seen above, two factors influence a bilateral filter: the Gaussian variance in the spatial domain and the Gaussian variance in the colour domain [2]. These parameters are referred to as the spatial and range sigmas, respectively. Furthermore, the bilateral filter's two Gaussians are multiplied together, so wherever either weight is small the combined weight is near zero.
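Eqs. (5)-(8) can be combined into a direct, unoptimised bilateral filter. The NumPy sketch below (the sigmas and radius are illustrative choices) demonstrates the edge-preserving behaviour on a noisy step edge:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=25.0, radius=3):
    """Direct implementation of Eqs. (7)-(8): spatial Gaussian G_sigma_s
    times range Gaussian G_sigma_r, normalised by W_p at each pixel."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel, Eq. (5)
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            g_r = np.exp(-(window - img[y, x])**2 / (2 * sigma_r**2))  # Eq. (6)
            wgt = g_s * g_r
            out[y, x] = (wgt * window).sum() / wgt.sum()  # Eqs. (7)-(8)
    return out

# A noisy step edge: bilateral smoothing flattens each side but keeps
# the 100-vs-200 discontinuity, unlike a plain Gaussian blur.
rng = np.random.default_rng(1)
step = np.where(np.arange(32) < 16, 100.0, 200.0)[None, :].repeat(32, axis=0)
noisy = step + rng.normal(0, 5, step.shape)
smoothed = bilateral_filter(noisy)
```

With $\sigma_r = 25$ the 100-unit jump across the edge receives a range weight of roughly $e^{-8}$, so pixels on the far side of the edge contribute almost nothing to the average.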

The proposed method was first compared with three automatic cloud-detection methods: k-means + ICM (Iterated Conditional Modes) [18], RGB refinement [22], and SIFT + GrabCut [23]. The proposed method is the best of the four. Using a global threshold, the RGB refinement algorithm provides raw cloud-recognition results, which are then refined by adding texture features [1]. In scene G04 the RGB refinement is effective and works well, but when the clouds are unevenly scattered, as in G02 and G03, the results are disappointing. Satellite images are usually displayed as a combination of red, green, and blue bands to show their true colours. Rasterio is used to read the data and render an RGB image from Bands 4, 3, and 2 [7]. First, the compressed image in .img file format is opened so that it can be processed in .tiff format. This is a challenging step, and opening .img files is tedious: most of the software tools available to process these files are very costly. In this research, the entire image can be opened using a free-software environment.

4. Methodologies

4.1 Contour methodology

In this process, contouring is the important part of the image conversion. We simply want to outline the water areas in this image. Because the texture of this image is quite irregular and uneven, there are not many distinct colours, and the brightness and intensity of the different pixel hues fluctuate. The best approach is therefore to merge all of these distinct pixel colours into one. This way, when contouring is applied, the water area is treated as a single feature.
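Treating each water area as a single feature, as described above, amounts to grouping connected pixels of the binary mask. The sketch below uses a simple 4-connected flood fill as a stand-in for an OpenCV contour call; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a binary water mask:
    each connected water region receives its own integer label,
    so per-region properties (area, bounding box) can be computed."""
    labels = np.zeros(mask.shape, dtype=np.int64)
    h, w = mask.shape
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1               # start a new region
                queue = deque([(sy, sx)])
                labels[sy, sx] = current
                while queue:               # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

mask = np.zeros((10, 10), dtype=bool)
mask[1:4, 1:4] = True   # first "lake"
mask[6:9, 5:9] = True   # second "lake"
labels, n = label_regions(mask)
```

Once labelled, each region's area is just `(labels == k).sum()`, which supports filtering out tiny false detections.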

4.2 Automatic threshold determination

In the second stage, the image is thresholded so that only the colour currently being contoured appears white and the rest turns black. This step does not make much visual difference here, but it is necessary because contouring works best on black-and-white (thresholded) images.

4.3 Mask overlay and thresholding method

As shown in Figure 2, the first step is to examine whether the image can be quickly classified based on its values. We convert the image to grayscale and apply Otsu's method to see whether a good mask can be obtained. Water bodies can be recovered from a satellite image by overlaying low-resolution classified satellite raster data on a high-resolution satellite image.
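Otsu's method, referenced above, chooses the grey-level threshold that maximises the between-class variance of the histogram. A minimal NumPy sketch with a synthetic dark water patch (the scene values are invented for illustration):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold t maximising between-class
    variance w0*w1*(mu0 - mu1)^2 of the grey-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    cum_p = np.cumsum(p)                       # class-0 probability up to t
    cum_mean = np.cumsum(p * np.arange(256))   # cumulative mean
    mu = cum_mean[-1]                          # global mean
    best_t, best_v = 0, -1.0
    for t in range(256):
        w0, w1 = cum_p[t], 1 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue  # one class empty
        mu0 = cum_mean[t] / w0
        mu1 = (mu - cum_mean[t]) / w1
        v = w0 * w1 * (mu0 - mu1) ** 2
        if v > best_v:
            best_t, best_v = t, v
    return best_t

gray = np.full((32, 32), 180, dtype=np.uint8)
gray[8:24, 8:24] = 50            # low-reflectance "water" patch
t = otsu_threshold(gray)
mask = gray <= t                 # dark pixels become the water mask
```

In OpenCV the same result comes from `cv2.threshold(..., cv2.THRESH_OTSU)`; the explicit loop above just makes the criterion visible.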

The single water feature is retrieved by overlaying a water-mask layer on the high-resolution satellite data [12]. The extracted water layer has a lower level of detail, which is insufficient for large-scale map production. Classifying water features on the high-resolution image can be used to increase the level of detail in the retrieved water layer.

Figure 2. Flow chart for gray scale conversion

The flow chart in Figure 2 shows how the compressed image format is converted and how grayscale conversion is used to extract the water area from the high-resolution image. Overlay is one of the most common and powerful GIS operations: by stacking feature layers vertically, it examines the relationship between features and their locations, reveals geographic trends, and identifies sites that meet a set of requirements.

Figure 3. Color to gray conversion

Figure 4. Binary images processing with different threshold levels

Figure 3 shows that the conversion does not produce a clear output with a single iteration; multiple iteration levels are implemented to extract accurate results. Figure 4 shows the image converted to HSV colour space at different threshold levels; in HSV space the water areas take on a distinct red colour that is absent from the rest of the image. Let us see whether these parts of the image can be separated. Once the different bodies of water in the image have been delineated, the next step is to obtain the properties of each body of water [1].
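The HSV separation described above can be sketched with the standard-library `colorsys` module: convert each pixel to HSV and keep those whose hue falls in the target band. The hue range and the toy colours below are illustrative assumptions, not values from the paper.

```python
import numpy as np
import colorsys

def hue_mask(rgb, h_lo, h_hi):
    """Convert an RGB image to HSV pixel by pixel and keep pixels
    whose hue lies in [h_lo, h_hi]; hue is expressed in [0, 1)."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y, x] / 255.0
            hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
            mask[y, x] = h_lo <= hue <= h_hi
    return mask

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = [200, 30, 30]   # reddish rows, standing in for "water" in HSV
img[2:] = [30, 200, 30]   # greenish rows, standing in for other cover
water = hue_mask(img, 0.0, 0.05)
```

For full-size scenes a vectorised conversion (e.g. OpenCV's `cv2.cvtColor` with `cv2.COLOR_RGB2HSV`) replaces the per-pixel loop, but the masking logic is identical.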

5. Results

5.1 GDAL Importance to process satellite images

GDAL is the most widely used library for working with GeoTIFF images, although it can be tricky to install and use. The majority of GDAL's methods and classes are written in C++, but its Python bindings are used here. The given image is analyzed using a collection of morphological algorithms and a bilateral filtering process combined with the threshold method.

Image thresholding is a technique for transforming a colour image into a binary image based on a pixel-intensity threshold. The technique can be applied to extract prominent foreground and background elements. OpenCV with Python is the freeware software environment used to accomplish this process in a variety of ways. The threshold T mainly ranges from 0 to 255 and is applied to each and every pixel in the image. The main aim of bilateral filtering with the threshold technique is to extract the water area from the high-resolution satellite image with accurate results (Figure 5).
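The smoothing-plus-global-threshold workflow of this section can be sketched end-to-end on a synthetic scene. As labelled in the comments, a 3x3 mean filter stands in for the bilateral step and the threshold T = 100 is an illustrative choice, not a value from the paper.

```python
import numpy as np

def binary_threshold(gray, t):
    """Apply a global threshold T in [0, 255] to every pixel:
    dark pixels (candidate water) map to 255, the rest to 0."""
    return np.where(gray < t, 255, 0).astype(np.uint8)

rng = np.random.default_rng(2)
scene = np.full((32, 32), 170.0)           # bright land background
scene[10:22, 10:22] = 45.0                 # low-reflectance water patch
scene += rng.normal(0, 4, scene.shape)     # sensor noise

# A 3x3 mean smooth stands in for the bilateral filtering stage.
pad = np.pad(scene, 1, mode="edge")
smooth = sum(pad[dy:dy + 32, dx:dx + 32]
             for dy in range(3) for dx in range(3)) / 9.0

water = binary_threshold(np.clip(smooth, 0, 255).astype(np.uint8), 100)
```

Smoothing before thresholding suppresses isolated noisy pixels that would otherwise survive the cut and appear as spurious one-pixel "water bodies".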

Figure 5. Image processing with various hybrid bilateral threshold values

Figure 6. Image and water bodies identified

The image considered and the water bodies identified in the image are shown in Figure 6.

The grey images considered and the process of detecting water bodies using the proposed model are shown in Figure 7.

Figure 7. Grey image water bodies detection

6. Conclusions

For extracting water bodies from satellite images, the threshold technique is one of the most commonly used methods. The basis of the method is that the reflected brightness of water is lower than that of plants, buildings, bare soil, and roads. False positives are produced because every pixel that passes the threshold is labelled as a water body, including some objects that are not actually water. Additionally, the approach is restricted by image size and by shadows. As part of this study, we examine how freeware can be used instead of expensive software such as ERDAS Imagine in order to reduce the cost of data processing. Free software is used to locate water features in a raw image using an automated satellite image processing approach. In future work, a novel architecture using deep learning algorithms will be implemented to evaluate raw satellite images and extract water bodies in a specified area; data processing accuracy can be ensured with this approach.

References

[1] Du, N., Ottens, H., Sliuzas, R. (2010). Spatial impact of urban expansion on surface water bodies—A case study of Wuhan, China. Landscape and Urban Planning, 94(3-4): 175-185. https://doi.org/10.1016/j.landurbplan.2009.10.002

[2] Yang, X., Qin, Q., Grussenmeyer, P., Koehl, M. (2018). Urban surface water body detection with suppressed built-up noise based on water indices from Sentinel-2 MSI imagery. Remote Sensing of Environment, 219: 259-270. https://doi.org/10.1016/j.rse.2018.09.016

[3] Chen, Y., Fan, R., Yang, X., Wang, J., Latif, A. (2018). Extraction of urban water bodies from high-resolution remote-sensing imagery using deep learning. Water, 10(5): 585. https://doi.org/10.3390/w10050585

[4] Wong, A., Clausi, D.A. (2007). ARRSI: Automatic registration of remote-sensing images. IEEE Transactions on Geoscience and Remote Sensing, 45(5): 1483-1493. https://doi.org/10.1109/TGRS.2007.892601

[5] da Rocha Gracioso, A.C.N., da Silva, F.F., Paris, A.C., de Freitas Góes, R., Elétrica-São Carlos, E. (2005). Gabor filter applied in supervised classification of remote sensing images. In Symposium Proceeding of the SIBGRAPI.

[6] Work, E.A., Gilmer, D.S. (1976). Utilization of satellite data for inventorying prairie ponds and lakes. Photogrammetric Engineering and Remote Sensing, 42(5): 685-694. https://doi.org/10.1109/TGE.1976.294436

[7] Li, W., Du, Z., Ling, F., et al. (2013). A comparison of land surface water mapping using the normalized difference water index from TM, ETM+ and ALI. Remote Sensing, 5(11): 5530-5549. https://doi.org/10.3390/rs5115530

[8] Du, Y., Zhang, Y., Ling, F., Wang, Q., Li, W., Li, X. (2016). Water bodies’ mapping from Sentinel-2 imagery with modified normalized difference water index at 10-m spatial resolution produced by sharpening the SWIR band. Remote Sensing, 8(4): 354. https://doi.org/10.3390/rs8040354

[9] Zhou, Y., Xie, G., Wang, S., Wang, F., Wang, F.T. (2014). Information extraction of thin rivers around built-up lands with false NDWI. Journal of Geo-information Science, 16(1): 102-107. https://doi.org/10.3724/SP.J.1047.2014.00102

[10] Vapnik, V.N. (1999). An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10(5): 988-999. https://doi.org/10.1109/72.788640

[11] Roli, F., Fumera, G. (2001). Support vector machines for remote sensing image classification. In Image and Signal Processing for Remote Sensing VI, 4170: 160-166. https://doi.org/10.1117/12.413892

[12] Aung, E.M.M., Tint, T. (2018). Ayeyarwady river regions detection and extraction system from Google Earth imagery. In 2018 IEEE International Conference on Information Communication and Signal Processing (ICICSP), pp. 74-78. https://doi.org/10.1109/ICICSP.2018.8549806

[13] Liang, Z. (2019). Research on water information extraction method of multi-source remote sensing based on deep learning and its application. MS Dept. Ecol., AnHui Univ., AnHui, China.

[14] Wang, N., Cheng, J., Zhang, H., Cao, H., Liu, J. (2020). Application of U-net model to water extraction with high resolution remote sensing data. Remote Sensing for Land & Resources, (1): 35-42. https://doi.org/10.6046/gtzyyg.2020.01.06

[15] Frazier, P.S., Page, K.J. (2000). Water body detection and delineation with Landsat TM data. Photogrammetric Engineering and Remote Sensing, 66(12): 1461-1468. https://doi.org/10.1016/S1361-8415(00)00023-2

[16] McFeeters, S.K. (1996). The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. International Journal of Remote Sensing, 17(7): 1425-1432. https://doi.org/10.1080/01431169608948714

[17] Rubner, Y., Tomasi, C., Guibas, L.J. (1998). Proceedings of the Sixth International Conference on Computer Vision, pp. 839-846.

[18] Tomasi, C., Manduchi, R. (1998). Bilateral filtering for gray and color images. In Sixth international conference on computer vision (IEEE Cat. No. 98CH36271), pp. 839-846. https://doi.org/10.1109/ICCV.1998.710815

[19] Krishnan, K.B., Ranga, S.P., Guptha, N. (2017). A survey on different edge detection techniques for image segmentation. Indian Journal of Science and Technology, 10(4): 1-8. https://doi.org/10.17485/ijst/2017/v10i4/108963

[20] Yin, S., Zhang, Y., Karim, S. (2018). Large scale remote sensing image segmentation based on fuzzy region competition and Gaussian mixture model. IEEE Access, 6: 26069-26080. https://doi.org/10.1109/ACCESS.2018.2834960

[21] Sharma, P. (2019). Computer vision tutorial: A step-by-step introduction to image segmentation techniques. https://www.analyticsvidhya.com/blog/2019/04/introduction-image-segmentation-techniques.

[22] Zhang, Y. (1999). Optimisation of building detection in satellite images by combining multispectral classification and texture filtering. ISPRS Journal of Photogrammetry and Remote Sensing, 54(1): 50-60. https://doi.org/10.1016/S0924-2716(98)00027-6

[23] Huang, X., Zhang, L. (2009). A comparative study of spatial approaches for urban mapping using hyperspectral ROSIS images over Pavia City, northern Italy. International Journal of Remote Sensing, 30(12): 3205-3221. https://doi.org/10.1080/01431160802559046

[24] Benediktsson, J.A., Palmason, J.A., Sveinsson, J.R. (2005). Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Transactions on Geoscience and Remote Sensing, 43(3): 480-491. https://doi.org/10.1109/TGRS.2004.842478

[25] Dalla Mura, M., Villa, A., Benediktsson, J.A., Chanussot, J., Bruzzone, L. (2010). Classification of hyperspectral images by using extended morphological attribute profiles and independent component analysis. IEEE Geoscience and Remote Sensing Letters, 8(3): 542-546. https://doi.org/10.1109/LGRS.2010.2091253

[26] Shen, L., Zhu, Z., Jia, S., Zhu, J., Sun, Y. (2012). Discriminative Gabor feature selection for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters, 10(1): 29-33. https://doi.org/10.1109/LGRS.2012.2191761

[27] Chen, C., Li, W., Su, H., Liu, K. (2014). Spectral-spatial classification of hyperspectral image based on kernel extreme learning machine. Remote Sensing, 6(6): 5795-5814. https://doi.org/10.3390/rs6065795

[28] Qian, Y., Ye, M., Zhou, J. (2012). Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features. IEEE Transactions on Geoscience and Remote Sensing, 51(4): 2276-2291. https://doi.org/10.1109/TGRS.2012.2209657

[29] Quesada-Barriuso, P., Argüello, F., Heras, D.B. (2014). Spectral–spatial classification of hyperspectral images using wavelets and extended morphological profiles. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(4): 1177-1185. https://doi.org/10.1109/JSTARS.2014.2308425

[30] Quesada-Barriuso, P., Argüello, F., Heras, D.B., Benediktsson, J.A. (2015). Wavelet-based classification of hyperspectral images using extended morphological profiles on graphics processing units. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8(6): 2962-2970. https://doi.org/10.1109/JSTARS.2015.2394778

[31] Tarabalka, Y., Chanussot, J., Benediktsson, J.A. (2010). Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognition, 43(7): 2367-2379. https://doi.org/10.1016/j.patcog.2010.01.016

[32] Huang, X., Zhang, L. (2008). An adaptive mean-shift analysis approach for object extraction and classification from urban hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing, 46(12): 4173-4185. https://doi.org/10.1109/TGRS.2008.2002577

[33] Ghamisi, P., Couceiro, M.S., Fauvel, M., Benediktsson, J.A. (2013). Integration of segmentation techniques for classification of hyperspectral images. IEEE Geoscience and Remote Sensing Letters, 11(1): 342-346. https://doi.org/10.1109/LGRS.2013.2257675

[34] Tarabalka, Y., Tilton, J.C., Benediktsson, J.A., Chanussot, J. (2011). A marker-based approach for the automated selection of a single segmentation from a hierarchical set of image segmentations. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 5(1): 262-272. https://doi.org/10.1109/JSTARS.2011.2173466

[35] Poorahangaryan, F., Ghassemian, H. (2020). Spectral-spatial hyperspectral image classification based on homogeneous minimum spanning forest. Mathematical Problems in Engineering, 2020: 8884965. https://doi.org/10.1155/2020/8884965

[36] Fang, L., Li, S., Kang, X., Benediktsson, J.A. (2015). Spectral–spatial classification of hyperspectral images with a superpixel-based discriminative sparse model. IEEE Transactions on Geoscience and Remote Sensing, 53(8): 4186-4201. https://doi.org/10.1109/TGRS.2015.2392755

[37] Kettig, R.L., Landgrebe, D.A. (1976). Classification of multispectral image data by extraction and classification of homogeneous objects. IEEE Transactions on Geoscience Electronics, 14(1): 19-26. https://doi.org/10.1109/TGE.1976.294460

[38] Tarabalka, Y., Chanussot, J., Benediktsson, J.A. (2009). Segmentation and classification of hyperspectral images using minimum spanning forest grown from automatically selected markers. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 40(5): 1267-1279. https://doi.org/10.1109/TSMCB.2009.2037132

[39] Huang, X., Zhang, L. (2009). A comparative study of spatial approaches for urban mapping using hyperspectral ROSIS images over Pavia City, northern Italy. International Journal of Remote Sensing, 30(12): 3205-3221. https://doi.org/10.1080/01431160802559046

[40] Gagnon, L., Smaili, F.D. (1996). Speckle noise reduction of airborne SAR images with symmetric Daubechies wavelets. In Signal and Data Processing of Small Targets, 2759: 14-24. https://doi.org/10.1117/12.241168

[41] Sciacchitano, F., Dong, Y., Hansen, P.C. (2017). Image reconstruction under non-Gaussian noise. Available at: www.compute.dtu.dk.

[42] Subashini, P., Bharathi, P.T. (2011). Automatic noise identification in images using statistical features. International Journal for Computer Science and Technology, 2(3): 467-471.

[43] Leng, X., Ji, K., Xing, X., Zou, H., Zhou, S. (2016). Hybrid bilateral filtering algorithm based on edge detection. IET Image Processing, 10(11): 809-816. https://doi.org/10.1049/iet-ipr.2015.0574