Automatic Extraction of Color Features from Landscape Images Based on Image Processing

Cong Tan*, Shaoyu Yang

College of Arts, Henan University of Animal Husbandry and Economy, Zhengzhou 450046, China

College of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450046, China

Corresponding Author Email: 81424@hnuahe.edu.cn
Page: 747-755 | DOI: https://doi.org/10.18280/ts.380322

Received: 28 December 2020 | Revised: 2 May 2021 | Accepted: 12 May 2021 | Available online: 30 June 2021

© 2021 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

The dominant color features determine the presentation effect and visual experience of landscapes. The existing studies rarely quantify the application effect of landscape colors through image colorization. Besides, it is unreasonable to analyze landscape images containing multiple standard colors within a single color space. To solve these problems, this paper proposes an automatic extraction method for color features from landscape images based on image processing. Firstly, a landscape lighting model was constructed based on color constancy theories, and the quality of landscape images was improved with color constant image enhancement technology. In this way, the low-level color features were extracted from the landscape image library. Next, support vector machine (SVM) and fuzzy c-means (FCM) were innovatively integrated to extract high-level color features from landscape images. The proposed method was proved effective through experiments.

Keywords: 

image processing, landscape colors, color feature extraction, color constancy

1. Introduction

Owing to seasonal changes and variations in spatial layout, landscapes present various textures and color types. The varied colors trigger different emotions among viewers, exert a positive influence on their psychology, and leave them with a rich impression of the landscapes [1-6]. The dominant color features determine the presentation effect and visual experience of landscapes [7-9]. Considering the diversity and seasonal variation of the colors of plants and buildings, it is worthwhile to study the color configuration of landscapes and quantify the features of color composition. Such studies are premised on the color features of landscape images; therefore, the extraction of these features must be continuously explored and updated [10-16].

Through field survey, color collection, and aerial photography, Raja et al. [17] extracted the architectural color features of typical historical blocks in cities, and proposed a block planning strategy with regional historical elements. Cai et al. [18] mainly explored campus building colors, which reflect the visual beauty and spiritual atmosphere of college buildings: The color styles of unique campus buildings in different periods were qualified and quantified in both time and space, and the evolution mechanism of the color styles was analyzed along with the continuous update and expansion of the campus. Using NCS color cards, Li et al. [19] collected the seasonal aspect color features of wild plants in Changbai Mountain Scenic Area, screened 120 types of plants with ornamental flowers, plants with ornamental fruits, and plants with colorful foliage, all of which have obvious seasonal aspect features. Furthermore, the flower color NCS range containing 53 standard colors was determined for plants with ornamental flowers, and the fruit color NCS range containing 15 standard colors was determined for plants with ornamental fruits. In addition, the ideas of creating a dynamic and sequential landscape out of the colors of plants in different seasons were discussed from three perspectives: landscape viewing period, time, and space. Starting with environmental color psychology, Jena et al. [20] constructed an evaluation model for the color features of park landscapes based on the degree of the psychological impact of tourists, set up the corresponding multi-linear regression equation, and applied the equation to associate park landscape color features with tourists’ psychological perception. Zhao et al. [21] enhanced the images on roadside landscapes through texture rendering, effectively extracted the color characteristic quantities that visually stimulate drivers and passengers, and carried out automatic identification of color modes and classification of color features.

The existing studies mostly describe the application effect of landscape colors qualitatively [22-29], but rarely quantify the effect through image colorization. Besides, it is unreasonable to analyze landscape images containing multiple standard colors within a single color space. To solve these problems, this paper proposes an automatic extraction method for color features from landscape images based on image processing. To improve the robustness and stability of automatic extraction of color features from landscape images, Section 2 constructs a landscape lighting model based on color constancy theories, improves the quality of landscape images with color constant image enhancement technology, and thereby extracts the low-level color features from the landscape image library. Section 3 combines support vector machine (SVM) and fuzzy c-means (FCM) to extract high-level color features from landscape images. The proposed method was proved effective through experiments.

2. Color Feature Extraction from Landscape Images with Multiple Color Spaces

2.1 Lighting model construction based on color constancy theories

Three conditions must be satisfied to suppress external interference in the computer vision and image processing systems, and to enhance the robustness and stability of the color features of landscape images:

(1) The landscape image samples can be properly imported to the machine learning network as vectors;

(2) The network learning algorithm can fit the mapping function between the color distribution of landscape images and the lighting;

(3) The output vectors of the machine learning network should be composed of the color distribution features of landscape images.

Figure 1. Utilization frequencies of different color systems in landscape images

Figure 1 shows the utilization frequencies of different color systems in landscape images. Among the spatial color systems of the landscape images in spring, summer, autumn, and winter, the light green is the most frequently utilized color system, followed in turn by dark green, yellow, white, pink, purple, and red. Currently, there are mainly two theories about color constancy. The first theory differentiates between lighting conditions and object colors through image depiction. The second theory explains the brightness and color change with the human memory process. Let (a, b) be the coordinates of any pixel in the target landscape image; ZD be the illuminance of the image; CH be the reflection property of the objects in the image; RE be the reflected light that can form an image once it is captured by the imaging device. Then, we have:

$RE(a, b)=ZD(a, b)\cdot CH(a,b)$               (1)

Formula (1) shows that the reflected light is determined jointly by CH and ZD, with CH exerting a far greater impact than ZD. If ZD can be estimated from RE and removed, the landscape image reaches an optimal state in which lighting variation no longer needs to be considered. In this case, a clear landscape image can very likely be restored.
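The estimation-and-removal step can be made concrete with a short sketch. It assumes, as in single-scale retinex, that a Gaussian-blurred copy of the image approximates the slowly varying illuminance ZD; the function name and the smoothing scale below are illustrative, not part of the paper's method.

```python
import cv2
import numpy as np

def remove_illuminance(re, sigma=60.0):
    """Estimate the illuminance ZD of formula (1) with a wide Gaussian blur
    (assuming lighting varies slowly across the scene) and remove it in the
    log domain, leaving the reflection property CH of the objects."""
    re = re.astype(np.float64) + 1.0          # avoid log(0)
    zd = cv2.GaussianBlur(re, (0, 0), sigma)  # slowly varying illuminance estimate
    log_ch = np.log(re) - np.log(zd)          # log RE - log ZD = log CH
    ch = cv2.normalize(log_ch, None, 0, 255, cv2.NORM_MINMAX)  # back to 8-bit range
    return ch.astype(np.uint8)

# Usage: ch = remove_illuminance(cv2.imread("landscape.jpg"))
```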

Without any additional conditions or limitations, it is difficult to estimate the lighting and color distribution features of images accurately. If the lighting is fixed, however, the color distribution of a landscape image can form a closed and bounded convex set. This paper introduces the color constancy of landscape images to the machine learning network, and thus determines the mapping function between the color distribution features and the imaging lighting of landscape images. The landscape image lighting model needs to consider the following light factors, namely, ambient light, diffuse reflected light, specular reflected light, and light intensity attenuation.

The ambient light of the landscape imaging scene may change markedly when the weather shifts from sunny to cloudy, or the other way around. Lights from all directions reach the plants and buildings on different levels, and are then reflected evenly in all directions. Let RES be the ambient light intensity; COS be the reflection coefficient of the object surface for ambient light. Then, the reflection intensity REH of ambient light can be calculated by:

$R{{E}_{H}}=R{{E}_{S}}\cdot C{{O}_{S}}$           (2)

Sunlight comes from a single direction, and is reflected evenly in all directions by the plants and buildings on different landscape levels. Let M be the normal vector of the plant/building surface at point O; K be the vector pointing from O to the light source, i.e., the sun; ω be the angle between M and K, i.e., the incident angle of the light; REG be the light source intensity; COM $\in$ [0, 1] be the diffuse reflection coefficient, which depends on the material of the plant/building and the wavelength of the incident light. According to Lambert's cosine law, the intensity of diffuse reflected light REM at O can be calculated by:

$R{{E}_{M}}=R{{E}_{G}}C{{O}_{M}}cos\omega ,\omega \in \left[ 0,\frac{\pi }{2} \right]$                (3)

Let COJ be the specular reflection coefficient; β be the angle between the viewpoint direction U and the specular reflection direction S; m be a constant related to the smoothness of the plant/building surface (the greater the value of m, the smoother the surface). Then, the light intensity REJ in the viewing direction of the specular reflected light, which measures the level of specular reflection, can be calculated by the Phong model:

$R{{E}_{J}}=R{{E}_{G}}C{{O}_{J}}co{{s}^{m}}\beta $                 (4)

If the light conditions are fixed, an object close to the light source appears brighter than one far away from it. Thus, the attenuation of light intensity must be considered during the processing of landscape images. Let l be the propagation distance of the light. Then, the quadratic attenuation function of landscape image light intensity can be established as:

$g\left( l \right)=min\left( 1,\frac{1}{{{p}_{0}}+{{p}_{1}}l+{{p}_{2}}{{l}^{2}}} \right)$                 (5)

Formula (5) shows that different light effects in the landscape scene can be realized by regulating the adjustment coefficients p0, p1, and p2 of the computer vision system. The RGB color space can be selected to illustrate the color scene of a landscape image. Therefore, this paper builds a lighting model for each color component of the RGB color space. The intensities of ambient light, diffuse reflected light, and specular reflected light can be respectively described as:

$R{{E}_{S}}=\left( R{{E}_{SR}},R{{E}_{SG}},R{{E}_{SB}} \right)$           (6)

$R{{E}_{M}}=\left( R{{E}_{MR}},R{{E}_{MG}},R{{E}_{MB}} \right)$                     (7)

$R{{E}_{G}}=\left( R{{E}_{GR}},R{{E}_{GG}},R{{E}_{GB}} \right)$          (8)

The coefficients of ambient light reflection COS, diffuse reflection COM, and specular reflection COJ can be respectively calculated by:

$C{{O}_{S}}=\left( C{{O}_{SR}},C{{O}_{SG}},C{{O}_{SB}} \right)$           (9)

$C{{O}_{M}}=\left( C{{O}_{MR}},C{{O}_{MG}},C{{O}_{MB}} \right)$            (10)

$C{{O}_{J}}=\left( C{{O}_{JR}},C{{O}_{JG}},C{{O}_{JB}} \right)$                   (11)

Let n be the number of light sources in the scene. In the RGB color space, the light intensities of the red, green, and blue channels can then be respectively given by:

$R{{E}_{R}}=R{{E}_{SR}}C{{O}_{SR}}+\sum\limits_{i=1}^{n}{g\left( {{l}_{i}} \right)R{{E}_{GR,i}}C{{O}_{MR}}\left( {{K}_{i}}\cdot M \right)}+\sum\limits_{i=1}^{n}{g\left( {{l}_{i}} \right)R{{E}_{GR,i}}C{{O}_{JR}}{{\left( {{F}_{i}}\cdot M \right)}^{m}}}$                  (12)

$R{{E}_{G}}=R{{E}_{SG}}C{{O}_{SG}}+\sum\limits_{i=1}^{n}{g\left( {{l}_{i}} \right)R{{E}_{GG,i}}C{{O}_{MG}}\left( {{K}_{i}}\cdot M \right)}+\sum\limits_{i=1}^{n}{g\left( {{l}_{i}} \right)R{{E}_{GG,i}}C{{O}_{JG}}{{\left( {{F}_{i}}\cdot M \right)}^{m}}}$              (13)

$R{{E}_{B}}=R{{E}_{SB}}C{{O}_{SB}}+\sum\limits_{i=1}^{n}{g\left( {{l}_{i}} \right)R{{E}_{GB,i}}C{{O}_{MB}}\left( {{K}_{i}}\cdot M \right)}+\sum\limits_{i=1}^{n}{g\left( {{l}_{i}} \right)R{{E}_{GB,i}}C{{O}_{JB}}{{\left( {{F}_{i}}\cdot M \right)}^{m}}}$                  (14)

To simplify the adjustment of light effects, the different reflection coefficients can be decomposed as:

$\left[\begin{array}{l}C O_{S R} \\ C O_{S G} \\ C O_{S B}\end{array}\right]=C O_{S}\left[\begin{array}{c}Z D_{M R} \\ Z D_{M G} \\ Z D_{M B}\end{array}\right],\left[\begin{array}{l}C O_{M R} \\ C O_{M G} \\ C O_{M B}\end{array}\right]=C O_{M}\left[\begin{array}{c}Z D_{M R} \\ Z D_{M G} \\ Z D_{M B}\end{array}\right],\left[\begin{array}{c}C O_{J R} \\ C O_{J G} \\ C O_{J B}\end{array}\right]=C O_{J}\left[\begin{array}{c}Z D_{M R} \\ Z D_{M G} \\ Z D_{M B}\end{array}\right]$.              (15)

Formula (15) shows that, when the ambient light intensity and light source intensity of a landscape image are fixed, the surface colors of plants/buildings on different levels are jointly determined by COS, COM, and COJ.
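To make the channel-wise model concrete, the sketch below evaluates one color channel of formulas (12)-(14) under stated assumptions: the light sources are passed in as a list, the specular exponent is written m_exp to keep it distinct from the number of light sources, and all direction vectors are unit-length numpy arrays.

```python
import numpy as np

def attenuation(l, p0=1.0, p1=0.05, p2=0.01):
    """Quadratic light-intensity attenuation g(l) of formula (5);
    the coefficient values are illustrative."""
    return min(1.0, 1.0 / (p0 + p1 * l + p2 * l ** 2))

def channel_intensity(re_s, co_s, lights, co_m, co_j, M, m_exp):
    """Per-channel light intensity of formulas (12)-(14): the ambient term of
    formula (2) plus attenuated diffuse (3) and specular (4) terms summed over
    the light sources. `lights` holds (re_g, l, K, F) tuples: source intensity,
    distance, unit vector to the source, and unit specular direction."""
    total = re_s * co_s                                                     # ambient
    for re_g, l, K, F in lights:
        g = attenuation(l)
        total += g * re_g * co_m * max(0.0, float(np.dot(K, M)))            # diffuse
        total += g * re_g * co_j * max(0.0, float(np.dot(F, M))) ** m_exp   # specular
    return total
```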

2.2 Landscape image enhancement

Based on color constancy, image enhancement can improve the quality of landscape images. The enhancement enriches the information in the target image, improves the identification of objects in the image, and promotes the automatic extraction of color distribution features. The premise of color constant image enhancement is to suppress the landscape color features that are not required and are degraded by quality decline and attenuation, while highlighting the necessary features. Under this premise, the processed landscape image can be more effective than the original image in certain application scenarios.

To simplify the calculation process of color constant image enhancement, the original landscape image should be transformed to the logarithmic space, that is, the multiplication and division operations in formula (16) should be replaced with linear addition and subtraction:

$\frac{Q}{W}=\delta \left( \frac{{{l}_{2}}}{{{l}_{1}}} \right)\times \delta \left( \frac{{{l}_{3}}}{{{l}_{2}}} \right)\times \delta \left( \frac{{{l}_{4}}}{{{l}_{3}}} \right)\times \cdots \times \delta \left( \frac{{{l}_{m}}}{{{l}_{m-1}}} \right)$         (16)

Then, the illumination variation should be estimated by comparing the values of different pixels in the landscape image. Let [c, d] be the grayscale interval of the original landscape image g(a, b); [e, f] be the grayscale interval of h(a, b) after contrast stretching. Taking contrast stretching as a linear transform, the relationship between the original landscape image and the linearly transformed image can be characterized by:

$h\left( a,b \right)=e+\frac{f-e}{d-c}\left( g\left( a,b \right)-c \right)$                    (17)

The values of c and d can be obtained from the histograms of the original and processed landscape images, while the values of e and f can be derived from the grayscale interval of the mapping function. Since |d-c| is always smaller than |f-e|, the grayscale difference and contrast difference between pixels are increased through discretization, which does not change the number of pixels in the landscape image. The growing differences improve the quality of the landscape image, and facilitate the automatic extraction of color features. Figure 2 compares the histograms of the original and processed landscape images after color constant image enhancement.

(a) Original image

(b) Processed image

Figure 2. Histograms of the original and processed landscape images after color constant image enhancement
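Formula (17) maps directly onto a few lines of numpy. The sketch below is a minimal implementation; reading c and d from the image extremes when they are not supplied is an assumption, since the paper derives them from the histograms.

```python
import numpy as np

def contrast_stretch(g, c=None, d=None, e=0.0, f=255.0):
    """Linear contrast stretching of formula (17): maps the grayscale
    interval [c, d] of the original image g(a, b) onto [e, f]."""
    g = g.astype(np.float64)
    c = g.min() if c is None else c
    d = g.max() if d is None else d
    if d == c:                         # flat image: nothing to stretch
        return np.full(g.shape, e, dtype=np.uint8)
    h = e + (f - e) / (d - c) * (g - c)
    return np.clip(h, e, f).astype(np.uint8)
```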

2.3 Color feature extraction from landscape images

The basic color features and dominant color system of a landscape image can be determined according to the landscape image captured in real time by the computer vision system, as well as the landscape environment information provided by field operators. Here, the relative color system bias component (CSBC) extracted from the RGB space is taken as a basic color feature of the landscape image. The greater the color system bias, the better the automatic extraction of color features. Let SO be the CSBC of the original landscape image, and GO be the corresponding grayscale component. Then, the relative color system bias CDCS of the original landscape image can be calculated by:

$CD{{C}_{S}}=\frac{{{S}_{O}}-{{G}_{O}}}{{{G}_{O}}}$                (18)

The extracted relative color system bias should be normalized to further enhance the robustness of color feature extraction from landscape images under different light conditions.
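A minimal sketch of formula (18) for one channel follows; taking the green channel as the default CSBC component and min-max normalizing over a batch are illustrative assumptions.

```python
import cv2
import numpy as np

def relative_csbc(image_bgr, channel=1):
    """Relative color system bias CD_CS of formula (18): (S_O - G_O) / G_O,
    where S_O is the mean of the chosen color component and G_O the mean
    grayscale of the same image."""
    s_o = image_bgr[:, :, channel].mean()
    g_o = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).mean()
    return (s_o - g_o) / g_o

# Hypothetical batch normalization for robustness across light conditions:
# biases = np.array([relative_csbc(img) for img in images])
# biases = (biases - biases.min()) / (biases.max() - biases.min())
```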

Table 1 shows the composition of color clusters in landscape images. In the hue-saturation-value (HSV) color space of the landscape image colors, the H component is mostly light green, dark green, white, yellow, etc. The H value is independent of the lighting conditions of the imaging site, and related only to the colors of plants/buildings on different levels in the landscape images. Therefore, the mean value of H was selected as the color feature of landscape images. The H value can be simply calculated by:

$H\left( LI \right)=\begin{cases} 360{}^\circ -{{\cos }^{-1}}\dfrac{\left( L{{I}_{R}}-L{{I}_{G}} \right)+\left( L{{I}_{R}}-L{{I}_{B}} \right)}{2\sqrt{{{\left( L{{I}_{R}}-L{{I}_{G}} \right)}^{2}}+\left( L{{I}_{R}}-L{{I}_{B}} \right)\left( L{{I}_{G}}-L{{I}_{B}} \right)}}, & L{{I}_{B}}>L{{I}_{G}} \\[2ex] {{\cos }^{-1}}\dfrac{\left( L{{I}_{R}}-L{{I}_{G}} \right)+\left( L{{I}_{R}}-L{{I}_{B}} \right)}{2\sqrt{{{\left( L{{I}_{R}}-L{{I}_{G}} \right)}^{2}}+\left( L{{I}_{R}}-L{{I}_{B}} \right)\left( L{{I}_{G}}-L{{I}_{B}} \right)}}, & L{{I}_{B}}\le L{{I}_{G}} \end{cases}$         (19)

After a landscape image completes the RGB-HSV transform, the mean value of H is calculated. The greater the mean value, the smaller the CSBC, and vice versa.
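A direct transcription of formula (19) into Python might look as follows; the epsilon guarding the denominator and the clipping of the cosine argument are numerical-safety assumptions.

```python
import numpy as np

def hue_angle(li_r, li_g, li_b):
    """Hue H(LI) of formula (19) for one pixel, in degrees within [0, 360)."""
    num = (li_r - li_g) + (li_r - li_b)
    den = 2.0 * np.sqrt((li_r - li_g) ** 2 + (li_r - li_b) * (li_g - li_b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return 360.0 - theta if li_b > li_g else theta
```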

Table 1. Composition of color clusters in landscape images (the four season columns give the seasonal aspect color; "\" = not applicable)

| Cluster number | Type of color cluster | Plant | Ornamental position | Spring | Summer | Autumn | Winter |
|---|---|---|---|---|---|---|---|
| LI1 | Multi-color type | Chinese ash, Chinese parasol | Leaves | Light green | Light green | Light green | \ |
| | | Silk tree, willow | Flowers and leaves | Pink | Light green | \ | \ |
| | | Ivy | Leaves | Light green | Light green | Red | \ |
| | | Masson pine, sweet-scented osmanthus | Flowers and leaves | Dark green | Dark green | Dark green | Dark green |
| LI2 | Multi-color type | Oriental plane | Leaves | Yellow | Light green | \ | \ |
| | | Oriental cherry | Flowers and leaves | White | Light green | \ | \ |
| | | Cape jasmine | Flowers and leaves | White | Light green | Light green | Light green |
| | | Red maple | Leaves | \ | Light green | Red | \ |
| LI3 | Multi-color type | Canna | Flowers | \ | Light green | Yellow | \ |
| | | Oriental cherry | Flowers | White | Light green | \ | \ |
| | | Southern magnolia | Flowers and leaves | Light green | Light green | Light green | Light green |
| LI4 | Multi-color type | Winter jasmine | Flowers and leaves | Yellow | Light green | Light green | Light green |
| | | Scarlet sage | Flowers | Red | Light green | \ | \ |
| | | Gingko | Leaves | Light green | Light green | Yellow | \ |
| | | Boston ivy | Leaves | Dark green | Light green | Dark green | Dark green |

Figure 3. Analysis results on CSBC

Figure 3 presents the analysis results on CSBC. Theoretically, the color space-based CSBC analysis has low discriminability: the low-level color features extracted from landscape images of different seasons varied erratically. For example, the green component of spring landscape images was sometimes smaller than that of autumn images. This calls for the extraction of high-level color features.

Based on machine vision and image processing, the automatic extraction of color features from landscape images aims to extract image information through computer vision system, and then classify landscape image pixels to different classes of image color features. To obtain color feature classes more accurately, this paper integrates different complementary components from multiple color spaces of the original image. Our strategy can be detailed as follows:

Step 1. Image graying is the basis of multi-color space fusion. Based on the Camshift algorithm, this paper transforms the original image into a color probability distribution map, using the color histograms of the plants/buildings on different levels in the landscape image; the map better reflects the distribution law of the grayscales in the original image. Next, other components like hue and saturation were added for further fusion.

Step 2. The background differencing algorithm was improved to quantify the feature difference between the plants/buildings on different levels and the background of the landscape image in different color spaces. The improved algorithm was applied to perform differential operations on the nine color feature sub-space (CFSS) components LIR, LIG, LIB, LIH, LIS, LIV, LIL, LIa, and LIb, which are extracted from the original landscape image in three color spaces, namely, RGB, HSV, and Lab. In this way, nine decomposed sub-images could be obtained.

Step 3. Through the extraction and background differencing of the components from multiple color spaces and multiple CFSSs of the landscape image, the information of multiple color spaces was fused based on the differential results. Let Φ1 be the target area of the plants/buildings on different levels, identified through background differencing of the grayscale component, and Φq (q = 2, 3, …, 9) be the target areas identified through background differencing of the other eight components. Then, the target areas of different objects in the landscape image can be obtained through simultaneous background differencing of multiple CFSS components, as the sketch below illustrates.
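A sketch of Steps 2 and 3 under stated assumptions: OpenCV supplies the three color-space transforms, and the nine differenced target areas are fused here by majority vote, a placeholder rule since the paper does not name the combination operator.

```python
import cv2
import numpy as np

def decompose_cfss(image_bgr):
    """Split a landscape image into the nine CFSS components
    (R, G, B, H, S, V, L, a, b) of Step 2."""
    b, g, r = cv2.split(image_bgr)
    h, s, v = cv2.split(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV))
    L, a, bb = cv2.split(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB))
    return [r, g, b, h, s, v, L, a, bb]

def fused_target_mask(image_bgr, background_bgr, thresh=30, votes_needed=5):
    """Background differencing per CFSS component, then fusion of the nine
    target areas Phi_1..Phi_9 by majority vote (assumed fusion rule)."""
    diffs = [cv2.absdiff(i, j) > thresh
             for i, j in zip(decompose_cfss(image_bgr),
                             decompose_cfss(background_bgr))]
    votes = np.sum(diffs, axis=0)     # per-pixel count of agreeing components
    return votes >= votes_needed      # fused target area of the objects
```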

Step 4. The color results of different components were fused and denoised. To check if the landscape image contains a noise component c, it is necessary to search for the K nearest neighbors similar to the image in the normalized landscape image library. Let l be the Euclidean distance between two landscape images c and τ; n be the number of image color features; ψa be an adjustable parameter. Then, the similarity between the eigenvariables of the K+1 images can be defined by:

${{O}_{c\tau }}\left( a \right)=\exp \left( -\frac{l_{c\tau }^{2}}{n{{\psi }_{a}}} \right),\text{    }\tau =1,2,\ldots ,K$             (20)

The similarity in color features between landscape image c and K nearest samples can be defined as:

${{g}_{c\tau }}=1-\left| {{O}_{c\tau }}\left( a \right)-max\left( {{O}_{c\tau }}\left( b \right) \right) \right|$           (21)

There are K color feature matching scores. If the matching scores of the target landscape image are all below the preset threshold, then the image must be noisy. Therefore, the greater the K value or the preset threshold, the smaller the probability of noise presence in the landscape image. After eliminating the noise from the image, the final image color features would be exposed.
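Formulas (20) and (21) can be sketched as follows, assuming the K nearest neighbors have already been retrieved from the normalized library; the threshold value is illustrative.

```python
import numpy as np

def is_noisy(feat_c, neighbor_feats, psi_a=1.0, threshold=0.8):
    """K-nearest-neighbor noise test of formulas (20)-(21): feat_c is the
    n-dimensional color eigenvector of image c, neighbor_feats the (K, n)
    matrix of its K nearest library samples. Returns True when every
    matching score falls below the preset threshold."""
    n = feat_c.shape[0]
    d2 = np.sum((neighbor_feats - feat_c) ** 2, axis=1)  # squared distances l^2
    o = np.exp(-d2 / (n * psi_a))                        # formula (20)
    g = 1.0 - np.abs(o - o.max())                        # formula (21)
    return bool(np.all(g < threshold))
```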

3. High-Level Color Feature Extraction Coupling SVM and FCM

The low-level color features extracted from the landscape image library can be compiled into a logical low-level color eigenvector library. The landscape images trained by SVM or FCM form a decision matrix or a membership matrix, revealing the probability of each landscape image sample belonging to a class of color features. This paper combines two machine learning algorithms, namely, SVM and FCM, to improve the classification accuracy. Figure 4 shows the workflow of the hybrid algorithm in color feature extraction.

Figure 4. Flow of hybrid algorithm in color feature extraction

Suppose the color features of the landscape image library can be divided into ξ classes. After SVM training, a color feature classifier would be generated for each class. For an original, untrained landscape image, the color feature classifiers produce a ξ-dimensional color feature decision vector (CFDV); each dimension reflects the probability of the sample belonging to a class of color features. The greater the probability, the more likely the sample belongs to the corresponding class. Let Qδw be the probability of sample w belonging to class δ ($\delta \in \xi$). Then, a ξ-dimensional CFDV can be produced from w:

$g_{w}^{svm}=\left\{ {{Q}_{1w}},{{Q}_{2w}},...,{{Q}_{\xi w}} \right\}$            (22)
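With scikit-learn, the CFDV of formula (22) can be approximated by the per-class probabilities of a trained SVM. X_train, y_train, and x_w below are placeholder names for the low-level color eigenvectors and their class labels, with synthetic values standing in for the landscape image library.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins: 60 samples, 9 CFSS features, 3 color-feature classes.
rng = np.random.default_rng(0)
X_train = rng.random((60, 9))
y_train = rng.integers(0, 3, 60)

# One-vs-rest SVM; its per-class probabilities form the CFDV of formula (22).
svm = SVC(probability=True, decision_function_shape="ovr").fit(X_train, y_train)
x_w = rng.random(9)                                  # an untrained landscape sample
g_w_svm = svm.predict_proba(x_w.reshape(1, -1))[0]   # {Q_1w, Q_2w, ..., Q_xi_w}
```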

Suppose the FCM has FG fuzzy groups. Let λjw be the membership of FCM-trained sample w to class j of color features (j $\in$ FG). Then, the membership matrix of the landscape image can be established as:

$g_{w}^{fcm}=\left\{ {{\lambda }_{1w}},{{\lambda }_{2w}},\ldots ,{{\lambda }_{FG\,w}} \right\}$                      (23)

Similarity is commonly measured by the Euclidean distance. Compared with this common metric, the cosine of the angle between vectors more clearly characterizes the semantic similarity in the vector space model for color feature retrieval. To learn high-level color features, the SVM and FCM therefore measure the similarity between original landscape images by the angle cosine of their vectors.

Suppose the target landscape image w and a known sample u in the library are trained by SVM and FCM. After the training, the color feature decision vectors gSw and gSu can be obtained, as well as the membership vectors gCw and gCu. Let Qδw and Qδu be the probabilities of w and u belonging to class δ of color features ($\delta \in \xi$), respectively. Then, the SVM-based similarity of landscape images can be given by:

${{O}_{S}}\left( g_{w}^{S},g_{u}^{S} \right)=1-\cos \theta =1-\frac{\sum\limits_{\delta =1}^{\xi }{{{Q}_{\delta w}}{{Q}_{\delta u}}}}{\sqrt{\sum\limits_{\delta =1}^{\xi }{Q_{\delta w}^{2}}}\cdot \sqrt{\sum\limits_{\delta =1}^{\xi }{Q_{\delta u}^{2}}}}$               (24)

Let λjw and λju be the memberships of w and u to class j of color features ($j \in FG$), respectively. Then, the FCM-based similarity of landscape images can be given by:

${{O}_{C}}\left( g_{w}^{C},g_{u}^{C} \right)=1-\cos \omega =1-\frac{\sum\limits_{j=1}^{FG}{{{\lambda }_{jw}}{{\lambda }_{ju}}}}{\sqrt{\sum\limits_{j=1}^{FG}{\lambda _{jw}^{2}}}\cdot \sqrt{\sum\limits_{j=1}^{FG}{\lambda _{ju}^{2}}}}$               (25)

Let γ1 and γ2 be two nonnegative weight coefficients, whose values should be determined empirically through experiments; the only constraint is that the two sum to 1. The SVM-based similarity and FCM-based similarity can then be fused by:

${{O}_{S-C}}\left( w,u \right)={{\gamma }_{1}}{{O}_{S}}+{{\gamma }_{2}}{{O}_{C}}$               (26)

The smaller the value of OS-C, the more similar the color features between samples; the greater the value, the larger the difference between color spaces. The proposed SVM-FCM algorithm compresses low-level color eigenvectors, merges high-level color features, and classifies color features in line with subjective cognition. Figure 5 presents the color features extracted by our algorithm from the landscape images of the four seasons.

Figure 5. Color features extracted by our algorithm from the landscape images of the four seasons
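The cosine-based fusion of formulas (24)-(26) reduces to a few lines. The equal default weights below are only a starting point, since the paper leaves γ1 and γ2 to be tuned empirically.

```python
import numpy as np

def cosine_dissimilarity(p, q):
    """1 - cos(angle) between two feature vectors, as in formulas (24)-(25)."""
    return 1.0 - float(np.dot(p, q)) / (np.linalg.norm(p) * np.linalg.norm(q))

def fused_similarity(g_w_svm, g_u_svm, g_w_fcm, g_u_fcm, gamma1=0.5, gamma2=0.5):
    """Weighted fusion O_{S-C} of formula (26), with gamma1 + gamma2 = 1.
    A smaller result means more similar color features."""
    o_s = cosine_dissimilarity(g_w_svm, g_u_svm)  # SVM decision vectors, formula (24)
    o_c = cosine_dissimilarity(g_w_fcm, g_u_fcm)  # FCM membership vectors, formula (25)
    return gamma1 * o_s + gamma2 * o_c
```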

4. Experiments and Results Analysis

The color features of landscape images, including REH, REM, REJ, CDCS, hue H, CFDV, OS, and OC, were grouped in pairs and subjected to Pearson correlation analysis. The results in Table 2 show that the REH of foreground plants in landscape images has a significant negative correlation with REJ and CDCS. To extract and classify color features from landscape images, it is often necessary to coordinate the color brightness and CSBC of the plants/buildings on different levels in the images. The plants/buildings with a high hue H tend to have bright colors, and relatively large OS and OC; their color features are more likely to be correctly classified.

Table 2. Correlations between color features

|  | REH | REM | REJ | CDCS | Hue H | CFDV | OS | OC |
|---|---|---|---|---|---|---|---|---|
| REH | 1 |  |  |  |  |  |  |  |
| REM | -0.175 | 1 |  |  |  |  |  |  |
| REJ | -0.423 | 0.237 | 1 |  |  |  |  |  |
| CDCS | -0.481 | 0.258 | 0.635 | 1 |  |  |  |  |
| Hue H | 0.059 | 0.272 | -0.059 | 0.095 | 1 |  |  |  |
| CFDV | -0.058 | 0.235 | 0.127 | 0.381 | 0.213 | 1 |  |  |
| OS | -0.213 | 0.189 | 0.638 | 0.072 | 0.067 | -0.214 | 1 |  |
| OC | -0.167 | 0.124 | -0.015 | -0.019 | 0.259 | -0.237 | 0.495 | 1 |

Figure 6. Hue values of objects on different levels

Figure 6 shows the distribution range of the hues of plants/buildings on different levels. Note that H1 and H2 stand for the hues of foreground and background objects, respectively. It can be seen that the hues of plants/buildings on different levels clustered in different areas. The H1 values of foreground plants/buildings mainly concentrated in [0°, 80°] and [230°, 260°], the intervals of red, orange, yellow, and purple color systems. The H2 values of background plants/buildings mainly concentrated in [80°, 230°], the interval of yellow and green color systems. Therefore, the foreground objects of the landscape images cover more color systems than the background objects.

Figure 7. Mean hue distribution of landscape images

According to the distribution of the mean hue of landscape images (Figure 7), the landscape images after the extraction of low-level color features had smaller mean hues than those after the extraction of high-level color features. The CSBC similarity of landscape images was separately measured by the angle cosine and the Euclidean distance. The two metrics captured basically the same trend of hue, although the CSBC obtained by the Euclidean distance was larger than that obtained by the angle cosine. It can be seen from Figure 7 that the smaller the mean hue of a landscape image, the lower the extraction accuracy of its color features. This agrees with the results of manual extraction of color features. There was no obvious boundary between the hue values of low-level color extraction and those of high-level color extraction; the two sets of hues only partially overlapped. Therefore, the color features of landscape images cannot be classified accurately by relying solely on hue. This paper thus introduces the saturation of landscape images into the color feature analysis. Figure 8 illustrates the relationship between the mean hue and mean saturation of landscape images.

Figure 8. Relationship between mean hue and mean saturation of landscape images

Figure 9. Saturations of objects on different levels

Figure 9 compares the statistical distributions of the saturations of plants/buildings on different levels. Note that S1 and S2 stand for the saturations of foreground and background objects, respectively. It can be seen that the S1 values of foreground plants/buildings mainly concentrated in [10%, 80%], while the S2 values of background plants/buildings mainly concentrated in [0%, 90%]. There was no obvious difference between the concentration areas of the saturations for objects on different levels.

Figure 10. Values of objects on different levels

Figure 10 compares the statistical distributions of the values of plants/buildings on different levels. Note that V1 and V2 stand for the values of foreground and background objects, respectively. It can be seen that the V1 values of foreground plants/buildings mainly concentrated in [60%, 95%], while the V2 values of background plants/buildings mainly concentrated in [40%, 80%]. There was an obvious difference between the concentration areas of the values for objects on different levels: the foreground plants/buildings had relatively high values. The landscape would have a better sense of depth if the foreground and background objects have a 30% value difference.

Figure 11. Color ratios of objects on different levels

In a landscape image, the color ratio of objects refers to the color area of the plants/buildings on different levels as a proportion of the total area of the landscape. Figure 11 compares the statistical distributions of the color ratios of plants/buildings on different levels. It can be seen that the color ratios of foreground and background objects concentrated in [0.05, 0.35] and [0.25, 0.35], respectively. Foreground objects thus generally had a smaller color ratio than background objects.

5. Conclusions

Based on image processing, this paper proposes an automatic extraction method for the color features of landscape images. Firstly, a lighting model was constructed for landscape images based on color constancy theories, the landscape images were enhanced without sacrificing color constancy, and the low-level color features were extracted from the samples of the landscape image library. Next, an improved method was developed by coupling SVM with FCM, and used to extract the high-level color features from landscape images. After that, experiments were carried out to disclose the correlations between color features, draw the scatter plots for the hues, saturations, and values of objects on different levels, and compare the color ratios of objects on different levels. The experimental results verify the effectiveness of our method, and deepen our understanding of landscape color features.

References

[1] Salmi, A., Hammouche, K., Macaire, L. (2021). Constrained feature selection for semisupervised color-texture image segmentation using spectral clustering. Journal of Electronic Imaging, 30(1): 013014. https://doi.org/10.1117/1.JEI.30.1.013014

[2] Wang, G., Liu, Y., Xiong, C. (2015). An optimization clustering algorithm based on texture feature fusion for color image segmentation. Algorithms, 8(2): 234-247. https://doi.org/10.3390/a8020234

[3] Păvăloi, I., Ignat, A. (2019). Iris occluded image classification using color and SIFT features. In 2019 E-Health and Bioengineering Conference (EHB), pp. 1-4. https://doi.org/10.1109/EHB47216.2019.8970046

[4] Lee, Y.H., Bang, S.I. (2019). Improved image retrieval and classification with combined invariant features and color descriptor. Journal of Ambient Intelligence and Humanized Computing, 10(6): 2255-2264. https://doi.org/10.1007/s12652-018-0817-0

[5] Zhou, Z., Zhao, X., Zhu, S. (2018). K-harmonic means clustering algorithm using feature weighting for color image segmentation. Multimedia Tools and Applications, 77(12): 15139-15160. https://doi.org/10.1007/s11042-017-5096-9

[6] Xu, G., Li, X., Lei, B., Lv, K. (2018). Unsupervised color image segmentation with color-alone feature using region growing pulse coupled neural network. Neurocomputing, 306: 1-16. https://doi.org/10.1016/j.neucom.2018.04.010

[7] Muhimmah, I., Heksaputra, D. (2018). Color feature extraction of HER2 Score 2+ overexpression on breast cancer using Image Processing. In MATEC Web of Conferences, 154: 03016. https://doi.org/10.1051/matecconf/201815403016

[8] Sharan, R.V., Moir, T.J. (2018). Pseudo-color cochleagram image feature and sequential feature selection for robust acoustic event recognition. Applied Acoustics, 140: 198-204. https://doi.org/10.1016/j.apacoust.2018.05.030

[9] Bu, H.H., Kim, N.C., Yun, B.J., Kim, S.H. (2020). Content-based image retrieval using multi-resolution multi-direction filtering-based CLBP texture features and color autocorrelogram features. Journal of Information Processing Systems, 16(4): 991-1000. https://doi.org/10.3745/JIPS.02.0138

[10] Shinde, S.R., Sabale, S., Kulkarni, S., Bhatia, D. (2015). Experiments on content based image classification using Color feature extraction. In 2015 international conference on communication, Information & Computing Technology (ICCICT), pp. 1-6. https://doi.org/10.1109/ICCICT.2015.7045737

[11] Pradhan, J., Pal, A.K., Banka, H. (2019). Principal texture direction based block level image reordering and use of color edge features for application of object based image retrieval. Multimedia Tools and Applications, 78(2): 1685-1717. https://doi.org/10.1007/s11042-018-6246-4

[12] Shirai, K., Ito, Y., Miyao, H., Maruyama, M. (2021). Efficient pixel-wise SVD required for image processing using the color line feature. IEEE Access, 9: 79449-79460. https://doi.org/10.1109/ACCESS.2021.3083895

[13] Chen, J., Chen, L. (2021). Multi-dimensional color image recognition and mining based on feature mining algorithm. Automatic Control and Computer Sciences, 55(2): 195-201. https://doi.org/10.3103/S0146411621020048

[14] Fatima, S., Seshashayee, M. (2020). Hybrid color feature image categorization using machine learning. In 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), pp. 687-692. https://doi.org/10.1109/ICISS49785.2020.9316111

[15] Mufarroha, F.A., Anamisa, D.R., Hapsani, A.G. (2020). Content based image retrieval using two color feature extraction. In Journal of Physics: Conference Series, 1569(3): 032072. https://doi.org/10.1088/1742-6596/1569/3/032072

[16] Souahlia, A., Belatreche, A., Benyettou, A., Ahmed-Foitih, Z., Benkhelifa, E., Curran, K. (2020). Echo state network‐based feature extraction for efficient color image segmentation. Concurrency and Computation: Practice and Experience, 32(21): e5719. https://doi.org/10.1002/cpe.5719

[17] Raja, R., Kumar, S., Mahmood, M.R. (2020). Color object detection based image retrieval using ROI segmentation with multi-feature method. Wireless Personal Communications, 112(1): 169-192. https://doi.org/10.1007/s11277-019-07021-6

[18] Cai, X., Ge, Y., Cai, R., Guo, T. (2019). Image color style migration method for mobile applications based color feature extraction. Journal of Computational Methods in Sciences and Engineering, 19(4): 879-890. https://doi.org/10.3233/JCM-190020

[19] Li, Y., Ma, C., Zhang, T., Li, J., Ge, Z., Li, Y., Serikawa, S. (2019). Underwater image high definition display using the multilayer perceptron and color feature-based SRCNN. IEEE Access, 7: 83721-83728. https://doi.org/10.1109/ACCESS.2019.2925209

[20] Jena, J.J., Girish, G., Patro, M. (2018). Evaluating effectiveness of color information for face image retrieval and classification using SVD feature. Communications in Computer and Information Science, 905: 249-259.

[21] Zhao, L., Zhang, W., Sun, Z.G., Chen, Q. (2018). Brake pad image classification algorithm based on color segmentation and information entropy weighted feature matching. Journal of Tsinghua University (Science and Technology), 58(6): 547-552. https://doi.org/10.16511/j.cnki.qhdxxb.2018.26.025

[22] Palanisamy, G., Ponnusamy, P., Gopi, V.P. (2019). An improved luminosity and contrast enhancement framework for feature preservation in color fundus images. Signal, Image and Video Processing, 13(4): 719-726. https://doi.org/10.1007/s11760-018-1401-y

[23] Zhang, H., Qin, J., Zhang, B., Yan, H., Guo, J., Gao, F. (2020). A Multi-class detection system for android malicious apps based on color image features. In International Conference on Security and Privacy in New Computing Environments, pp. 186-206. https://doi.org/10.1007/978-3-030-66922-5_13

[24] Kanaparthi, S.K., Raju, U.S.N., Shanmukhi, P., Aneesha, G.K., Rahman, M.E.U. (2019). Image retrieval by integrating global correlation of color and intensity histograms with local texture features. Multimedia Tools and Applications, 79(47-48): 34875-34911. https://doi.org/10.1007/s11042-019-08029-7

[25] Kumar, C.S., Sharma, V.K., Sharma, A., Yadav, A.K., Kumar, A.P. (2020). Semantic segmentation of color images via feature extraction techniques. In Journal of Physics: Conference Series, 1478(1): 012025. https://doi.org/10.1088/1742-6596/1478/1/012025

[26] Verma, T., Dubey, S. (2020). Impact of color spaces and feature sets in automated plant diseases classifier: A comprehensive review based on rice plant images. Archives of Computational Methods in Engineering, 27(5): 1611-1632. https://doi.org/10.1007/s11831-019-09364-6

[27] Han, D., Wu, P., Zhang, Q., Han, G., Tong, F. (2016). Feature extraction and image recognition of typical grassland forage based on color moment. Transactions of the Chinese Society of Agricultural Engineering, 32(23): 168-175. https://doi.org/10.11975/j.issn.1002-6819.2016.23.023

[28] Khan, M.F., Monir, S.M., Naseem, I. (2021). Robust image hashing based on structural and perceptual features for authentication of color images. Turkish Journal of Electrical Engineering & Computer Sciences, 29(2): 648-662. https://doi.org/10.3906/elk-2002-6

[29] Ponti, M., Nazaré, T.S., Thumé, G.S. (2016). Image quantization as a dimensionality reduction procedure in color and texture feature extraction. Neurocomputing, 173: 385-396. https://doi.org/10.1016/j.neucom.2015.04.114