Color Enhancement of Low Illumination Garden Landscape Images

Qian Zhang, Shuang Lu, Lei Liu, Yi Liu, Jing Zhang, Daoyuan Shi

School of Art and Design, Zhengzhou University of Light Industry, Zhengzhou 450002, China

Henan Civil Affairs School, Zhengzhou 450002, China

Corresponding Author Email: 2016034@email.zzuli.edu.cn

Pages: 1747-1754 | DOI: https://doi.org/10.18280/ts.380618

Received: 5 August 2021 | Revised: 3 November 2021 | Accepted: 16 November 2021 | Available online: 31 December 2021

© 2021 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

The unfavorable shooting environment severely hinders the acquisition of actual landscape information in garden landscape design. Low quality, low illumination garden landscape images (GLIs) can be enhanced through advanced digital image processing. However, current color enhancement models have poor applicability: when the environment changes, they tend to lose image details and exhibit low robustness. Therefore, this paper tries to enhance the color of low illumination GLIs. Specifically, the color restoration of GLIs was realized based on a modified dynamic threshold. After color correction, the low illumination GLIs were restored and enhanced by a self-designed convolutional neural network (CNN). In this way, the authors achieved ideal effects of color restoration and clarity enhancement, while overcoming the difficulty of manual feature design in landscape design renderings. Finally, experiments were carried out to verify the feasibility and effectiveness of the proposed image color enhancement approach.

Keywords: 

low illumination, garden landscape images (GLIs), color enhancement, convolutional neural network (CNN)

1. Introduction

Garden landscape photography enthusiasts convey their perception of the beauty of garden landscapes, and their understanding of garden landscape design, to the human visual system in the form of images [1-11]. The generation of landscape design renderings is greatly affected by the color matching, design layout, and other information in the original image [12-14]. Based on high quality garden landscape images (GLIs), designers can effectively visualize the design intent and their conception of the actual landscape effect [16-19]. In the real world, GLIs taken in an unfavorable shooting environment are insufficiently exposed, unevenly illuminated, and generally dark. These defects severely hinder the acquisition of actual landscape information in garden landscape design. With the continuous development of computer technology, low quality, low illumination GLIs can be enhanced through advanced digital image processing [20-22]. The enhanced GLIs can improve the final expressiveness of the landscape scheme, and fully reflect the designer's personal aesthetics.

Many cities in China currently lack high-quality landscape images. Yao and Kang [23] applied the principles of big data visualization to urban landscape images, and discussed the application of urban landscape image enhancement in China. The improvement that big data visualization brings to urban landscape images was examined from multiple angles, including an online questionnaire survey, big data software visualization, and urban landscape image improvement; on this basis, several countermeasures were proposed for enhancing the landscape images of Chinese cities. In harsh environments (e.g., low illumination), the images collected by sensors may degrade, exhibiting low visibility, low brightness, and low contrast. To improve such images, Ma et al. [24] proposed a low-light sensor image enhancement algorithm based on the hue-saturation-intensity (HSI) color model: a piecewise exponential method processes the saturation of the original image, and a specially designed deep convolutional network (DCN) enhances the intensity (I) component. Yamashita et al. [25, 26] suggested using a single sensor to simultaneously capture red-green-blue (RGB) and near-infrared (NIR) information, in order to enhance color images of low-light scenes. Guided by the NIR information, a joint denoising technique reconstructs the corresponding color image, and the estimated color image is iteratively restored based on the constructed guide image. Jung [27] presented a selective image fusion technique, which applies adaptive guided filter-based denoising and selective detail transfer to pixels considered reliable in binocular image fusion. With an experimental color-plus-mono camera system, it was demonstrated that binocular just-noticeable-difference (BJND)-aware denoising and selective detail transfer help improve image quality in low-light shooting.

Deep learning-based image enhancement requires a large number of images to support network training, and the joint estimation of intermediate parameters during training is far from sufficient. As a result, the trained models have limited applicability: when the environment changes, they tend to lose image details and exhibit low robustness. Therefore, this paper tries to enhance the color of low illumination GLIs. Section 2 explains the color restoration of GLIs based on a modified dynamic threshold, establishes a color correction framework, and expounds the principle of the color transform of GLIs. Section 3 designs a convolutional neural network (CNN) to restore and enhance the color of the corrected low illumination GLIs. In this way, the authors achieved ideal effects of color restoration and clarity enhancement, while overcoming the difficulty of manual feature design in landscape design renderings. Finally, experiments were carried out to verify the feasibility and effectiveness of the proposed image color enhancement approach.

2. Color Restoration

After being converted to the luma, blue-difference, red-difference (YCbCr) color space, the original GLI is divided into multiple blocks. The mean value AVo of Cr and the mean value AVe of Cb of each block are calculated. The cumulative absolute differences RPo and RPe of Cr and Cb of each block, where M denotes the number of pixels in the block, can be respectively calculated by:

$R P_{o}=\sum_{i, j}\left(\left|C_{r}(i, j)-A V_{o}\right|\right) / M$           (1)

$R P_{e}=\sum_{i, j}\left(\left|C_{b}(i, j)-A V_{e}\right|\right) / M$           (2)

The blocks with relatively small RPo and RPe are identified and removed, for they cannot provide sufficient color information. The mean values of AVo, AVe, RPo and RPe of the remaining blocks are taken as the AVo, AVe, RPo and RPe of the entire GLI. Next, the candidate set of white pixels is generated by the criteria:

$\left|C_{r}(i, j)-\left(A V_{o}+R P_{o} \times \operatorname{sign}\left(A V_{o}\right)\right)\right|<1.5 \times R P_{o}$           (3)

$\left|C_{b}(i, j)-\left(1.5 \times A V_{e}+R P_{e} \times \operatorname{sign}\left(A V_{e}\right)\right)\right|<1.5 \times R P_{e}$           (4)

The pixels with the top 10% brightness in the candidate set are selected as the final white pixels. After that, all white pixels are adjusted. The first step is to compute the reference value of the white pixels in each of the three channels, i.e., the mean gray values rRV, gRV, and bRV. Letting Ymax be the maximum luminance of the image, the gain of each channel can be calculated by:

$r_{G}=Y_{\max } / r_{R V}, \quad g_{G}=Y_{\max } / g_{R V}, \quad b_{G}=Y_{\max } / b_{R V}$           (5)

Based on the results of formula (5), the color values of the three channels of the GLI are modified by the framework shown in Figure 1, with each pixel scaled by the corresponding channel gain:

$r^{\prime}=r \times r_{G}, \quad g^{\prime}=g \times g_{G}, \quad b^{\prime}=b \times b_{G}$           (6)

For a GLI taken in a variable environment, if a single channel has a low gray value, then the mean gray value of that channel is low, and the corresponding gain is large. Through the above processing, the weak single-channel features are compensated for.
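
To make the white-balance stage concrete, the NumPy sketch below implements formulas (1)-(6). It is a minimal sketch under stated assumptions: the block size of 64, the 20th-percentile rule for discarding low-information blocks, and all helper names are ours; the 1.5 tolerance factors, the top-10% brightness selection, and the per-channel gains follow the text.

```python
import numpy as np
import cv2  # assumed available for the RGB <-> YCbCr conversion

def dynamic_threshold_white_balance(rgb, block=64):
    """Minimal sketch of the white-balance stage, formulas (1)-(6).

    rgb: float32 RGB image in [0, 1], shape (H, W, 3).
    """
    ycrcb = cv2.cvtColor((rgb * 255).astype(np.uint8),
                         cv2.COLOR_RGB2YCrCb).astype(np.float32)
    Y = ycrcb[..., 0]
    Cr = ycrcb[..., 1] - 128.0
    Cb = ycrcb[..., 2] - 128.0

    # per-block statistics, formulas (1)-(2); M = block * block pixels
    stats = []
    H, W = Y.shape
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            cr = Cr[y:y + block, x:x + block]
            cb = Cb[y:y + block, x:x + block]
            AVo, AVe = cr.mean(), cb.mean()
            stats.append((AVo, AVe,
                          np.abs(cr - AVo).mean(),   # RP_o
                          np.abs(cb - AVe).mean()))  # RP_e
    stats = np.array(stats)

    # drop low-information blocks; the 20th-percentile cut is our assumption
    spread = stats[:, 2] + stats[:, 3]
    AVo, AVe, RPo, RPe = stats[spread > np.percentile(spread, 20)].mean(axis=0)

    # candidate white pixels, formulas (3)-(4)
    cand = (np.abs(Cr - (AVo + RPo * np.sign(AVo))) < 1.5 * RPo) \
         & (np.abs(Cb - (1.5 * AVe + RPe * np.sign(AVe))) < 1.5 * RPe)

    # keep the brightest 10% of candidates as the final white pixels
    white = cand & (Y >= np.percentile(Y[cand], 90))

    # reference values and gains, formula (5); channel scaling, formula (6)
    ref = rgb[white].mean(axis=0)            # rRV, gRV, bRV
    gains = (Y.max() / 255.0) / ref          # rG, gG, bG
    return np.clip(rgb * gains, 0.0, 1.0)
```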

Figure 1. Color correction framework

If the GLI has a low overall brightness, the Y value after color restoration will be relatively high, pushing up the overall brightness of the image and interfering with the judgement of candidate white pixels. To solve the problem, this paper introduces the attenuation offset parameter matrix ψ to quantify the dynamic threshold to a fixed range. ψ depends on the photoelectric imaging environment. Based on this matrix, the Y, Cb and Cr of the original image are quantified again, through the derivation below, as:

$\left[\begin{array}{l} Y^{\prime} \\ C_{b}^{\prime} \\ C_{r}^{\prime} \end{array}\right]=\psi \cdot\left[\begin{array}{l} Y \\ C_{b} \\ C_{r} \end{array}\right]$           (7)

First, the attenuation of each color in the low illumination image is considered. Let ∫ωγd(τ)dτ, ξd, and ηd be the vector integral of the light scattered into the sensor from all directions, the scattering coefficient, and the attenuation coefficient, respectively. The value of ∫ωγd(τ)dτ is positively proportional to ξd. Since the global background light is a function of wavelength, we have:

$\gamma^{d}(\infty)=\frac{l_{k} l_{x}}{\eta^{d}} \int_{\omega} \gamma^{d}(\tau) d \tau$           (8)

where lk and lx are constants. Let ξ(μd) be the reference wavelength scattering coefficient. Then, the linear relationship between the scattering coefficient ξd and the wavelength μ can be expressed as:

$\xi^{d}=\left(-0.00113 \mu_{d}+1.62517\right) \xi\left(\mu_{d}\right)$           (9)

Further, it can be derived that the global background light is proportional to ξd and inversely proportional to ηd:

$\gamma^{d}(\infty) \propto \frac{\xi^{d}}{\eta^{d}}$           (10)

The color channel with the smallest attenuation in the low illumination environment is defined as channel o. Based on the color attenuation of channel o, the attenuation ratio of any other color channel can be deduced as:

$\frac{\eta^{d}}{\eta^{o}}=\frac{\xi^{d} \gamma^{o}(\infty)}{\xi^{o} \gamma^{d}(\infty)} \quad d \in\{r, g\}$           (11)

The relationship between the attenuation ratios of the three channels can be expressed as:

$\sigma^{o}(a)=e^{-\delta(a)}, \quad \sigma^{d}(a)=\left(\sigma^{o}(a)\right)^{\eta^{d} / \eta^{o}}, \quad d \in\{r, g\}$           (12)

Let W be the color space conversion matrix. According to the image transmission and display rules of the International Telecommunication Union (ITU), the RGB-YCbCr color space conversion can be expressed as:

$\left[\begin{array}{c} Y^{\prime} \\ C_{b}^{\prime} \\ C_{r}^{\prime} \end{array}\right]=\left[\begin{array}{ccc} 0.3 & 0.59 & 0.12 \\ -0.17 & -0.33 & 0.5 \\ 0.5 & -0.42 & -0.08 \end{array}\right] \cdot\left[\begin{array}{l} R^{\prime} \\ G^{\prime} \\ B^{\prime} \end{array}\right]=W \cdot\left[\begin{array}{l} R^{\prime} \\ G^{\prime} \\ B^{\prime} \end{array}\right]$           (13)

The color space conversion matrix can be obtained based on the three-channel attenuation formulas:

$\left[\begin{array}{c} R^{\prime} \\ G^{\prime} \\ B^{\prime} \end{array}\right]=\left[\begin{array}{ccc} \sigma^{r}(a) & & \\ & \sigma^{g}(a) & \\ & & \sigma^{b}(a) \end{array}\right] \cdot\left[\begin{array}{c} R \\ G \\ B \end{array}\right]$           (14)

Combining formulas (13) and (14):

$\left[\begin{array}{l} Y^{\prime} \\ C_{b}^{\prime} \\ C_{r}^{\prime} \end{array}\right]=W \cdot\left[\begin{array}{l} R^{\prime} \\ G^{\prime} \\ B^{\prime} \end{array}\right]=W \cdot\left[\begin{array}{ccc} \sigma^{r}(a) & & \\ & \sigma^{g}(a) & \\ & & \sigma^{b}(a) \end{array}\right] \cdot\left[\begin{array}{l} R \\ G \\ B \end{array}\right]$           (15)

The color transformation of the original GLI can be expressed as:

$\left[\begin{array}{l} Y \\ C_{b} \\ C_{r} \end{array}\right]=W \cdot\left[\begin{array}{l} R \\ G \\ B \end{array}\right] \Rightarrow\left[\begin{array}{l} R \\ G \\ B \end{array}\right]=W^{-1} \cdot\left[\begin{array}{l} Y \\ C_{b} \\ C_{r} \end{array}\right]$           (16)

Substituting formula (16) into formula (15):

$\left[\begin{array}{l} Y^{\prime} \\ C_{b}^{\prime} \\ C_{r}^{\prime} \end{array}\right]=W \cdot\left[\begin{array}{ccc} \sigma^{r}(a) & & \\ & \sigma^{g}(a) & \\ & & \sigma^{b}(a) \end{array}\right] \cdot W^{-1} \cdot\left[\begin{array}{l} Y \\ C_{b} \\ C_{r} \end{array}\right]=\psi \cdot\left[\begin{array}{l} Y \\ C_{b} \\ C_{r} \end{array}\right]$           (17)

The attenuation offset parameter matrix ψ can be derived by:

$\psi=W \cdot\left[\begin{array}{ccc} \sigma^{r}(a) & & \\ & \sigma^{g}(a) & \\ & & \sigma^{b}(a) \end{array}\right] \cdot W^{-1}=W \cdot\left[\begin{array}{ccc} \left(\sigma^{o}(a)\right)^{\eta^{r} / \eta^{o}} & & \\ & \left(\sigma^{o}(a)\right)^{\eta^{g} / \eta^{o}} & \\ & & \left(\sigma^{o}(a)\right)^{\eta^{b} / \eta^{o}} \end{array}\right] \cdot W^{-1}$           (18)

For the color channel with the least attenuation in the low illumination environment, the attenuation coefficient is assumed to satisfy ηd=ηo. After this treatment, the single channel of the original GLI with a relatively low gray value can be compensated for, thereby balancing the gray value distribution across the three channels. The GLI color transform is illustrated in Figure 2.
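
As a minimal illustration of this correction, the NumPy sketch below builds ψ from formula (18) and applies it as in formulas (7)/(17). The σ and η ratio values shown are placeholders of our own; in practice they are estimated from the photoelectric imaging environment.

```python
import numpy as np

# RGB -> YCbCr conversion matrix W from formula (13)
W = np.array([[ 0.30,  0.59,  0.12],
              [-0.17, -0.33,  0.50],
              [ 0.50, -0.42, -0.08]])

def attenuation_offset_matrix(sigma_o, eta_ratios):
    """psi = W . diag((sigma_o)^(eta_d/eta_o)) . W^-1, formula (18).

    sigma_o:    transmission of the least-attenuated channel o, in (0, 1].
    eta_ratios: (eta_r/eta_o, eta_g/eta_o, eta_b/eta_o); the entry for
                channel o itself is 1 under the assumption eta_d = eta_o.
    """
    sigma = np.diag(sigma_o ** np.asarray(eta_ratios, dtype=float))
    return W @ sigma @ np.linalg.inv(W)

# requantify one (Y, Cb, Cr) pixel as in formulas (7)/(17);
# the sigma and ratio values below are illustrative placeholders
psi = attenuation_offset_matrix(sigma_o=0.8, eta_ratios=(1.4, 1.1, 1.0))
print(psi @ np.array([120.0, -12.0, 20.0]))
```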

Figure 2. GLI color transform

3. Color Restoration and Enhancement

This paper designs a CNN to restore and enhance the color of the corrected low illumination GLIs, achieving ideal effects of color restoration and clarity enhancement while overcoming the difficulty of manual feature design in landscape design renderings. The proposed network consists of a color restoration module and a color enhancement module.

3.1 Color restoration

The architecture of the color restoration module is illustrated in Figure 3. In this module, the convolutional kernels are of size 3×3, and the convolution operation is denoted by g3×3. For each color channel, 32 convolutional filters are used, indexed by i ∈ {1, ..., 32}. Each channel of the input GLI LSP is processed by its convolutional layer to obtain a false color mapping SUt:

$S U_{t}^{i}=\left\{g_{3 \times 3}\left(L S P_{r}^{i}\right), g_{3 \times 3}\left(L S P_{g}^{i}\right), g_{3 \times 3}\left(L S P_{b}^{i}\right)\right\}$          (19)

Figure 3. Architecture of color restoration module

Through the false color correction of each SUt, it is possible to obtain the enhanced false color mapping RFt. Let function F be global average pooling. Then, the mean, i.e., the gray value, of the three channels can be calculated by:

$S D_{t}=\underset{N \times M \times D}{F}\left(S U_{t}\right)$          (20)

The single-channel mean of SUt can be calculated by:

$D T^{i}=\underset{N \times M}{F}\left(S U_{t}^{i}\right)$          (21)

Let SDt/DTi be the gain coefficient; D the number of color channels of SUt; i ∈ {r, g, b} the index of a color channel; and N and M the spatial dimensions. Then, the enhanced false color mapping can be expressed as:

$R F_{t}^{i}=S U_{t}^{i} \cdot \frac{S D_{t}}{D T^{i}}$          (22)
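
The following PyTorch sketch mirrors the module described by formulas (19)-(22). Only the 3×3 kernel and the 32 filters come from the text; the per-channel branching layout, the padding, and the epsilon guard on the division are our assumptions.

```python
import torch
import torch.nn as nn

class ColorRestoration(nn.Module):
    """Sketch of the color restoration module, formulas (19)-(22).

    Each input channel of the GLI is filtered separately to form the false
    color mapping SU_t, then rescaled by the ratio of the global mean SD_t
    to the per-channel mean DT^i.
    """
    def __init__(self, filters=32):
        super().__init__()
        # one 3x3 convolution g per color channel, formula (19)
        self.branches = nn.ModuleList(
            [nn.Conv2d(1, filters, kernel_size=3, padding=1) for _ in range(3)])

    def forward(self, lsp):                        # lsp: (B, 3, H, W)
        su = torch.cat([g(lsp[:, d:d + 1]) for d, g in enumerate(self.branches)],
                       dim=1)                      # SU_t
        sd = su.mean(dim=(1, 2, 3), keepdim=True)  # SD_t, formula (20)
        dt = su.mean(dim=(2, 3), keepdim=True)     # DT^i, formula (21)
        return su * sd / (dt + 1e-8)               # RF_t, formula (22)
```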

3.2 Color enhancement

To fully integrate the GLI output by the color restoration module into the color enhancement module, this paper introduces an adaptive instance normalization module into the constructed model. The mean and standard deviation of the feature map f of the color enhancement module are calculated and then normalized by this module. Let F* and Q* be the height and width of the feature map f, respectively. Then, the calculation results can be expressed as:

$\lambda_{d}=\frac{1}{F^{*} Q^{*}} \sum_{t}^{F^{*}} \sum_{w}^{Q^{*}} f_{t, w, d}$          (23)

$\rho_{d}^{2}=\frac{1}{F^{*} Q^{*}} \sum_{t}^{F^{*}} \sum_{w}^{Q^{*}}\left(f_{t, w, d}-\lambda_{d}\right)^{2}+\theta$          (24)

where θ=0.00001 is a small constant for numerical stability. The affine transformation parameters Φ* and χ* are obtained through convolution of the color-restored GLI. The feature map normalized by the color enhancement module is then subjected to an affine transformation. In this paper, an adaptive instance normalization module with a color restoration function is added to the residual block to improve the color enhancement effect. The module operates pixel by pixel: based on Φ* and χ*, the feature points of the entire image are restored pixel by pixel. Let λd and ρd be the mean and standard deviation of the feature map f of color channel d, respectively. Then, we have:

$f_{t, w, d}^{*}=\Phi_{t, w, d}^{*}\left(\frac{f_{t, w, d}-\lambda_{d}}{\rho_{d}}\right)+\chi_{t, w, d}^{*}$          (25)
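
A minimal PyTorch sketch of this normalization follows, assuming that Φ* and χ* are predicted from the color-restored GLI by a single 3×3 convolution; that particular convolution is our assumption, since the paper states only that the parameters are obtained through convolution of the color-restored image.

```python
import torch
import torch.nn as nn

class AdaptiveInstanceNorm(nn.Module):
    """Sketch of the adaptive instance normalization step, formulas (23)-(25)."""
    def __init__(self, channels, theta=1e-5):
        super().__init__()
        self.theta = theta                             # stabilizer in formula (24)
        # predicts per-pixel scale Phi* and shift chi* from the restored GLI
        self.affine = nn.Conv2d(3, 2 * channels, kernel_size=3, padding=1)

    def forward(self, f, restored):                    # f: (B, C, H, W)
        lam = f.mean(dim=(2, 3), keepdim=True)         # lambda_d, formula (23)
        rho = torch.sqrt(f.var(dim=(2, 3), unbiased=False, keepdim=True)
                         + self.theta)                 # rho_d, formula (24)
        phi, chi = self.affine(restored).chunk(2, dim=1)
        return phi * (f - lam) / rho + chi             # formula (25)
```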

3.3 Loss function

To generate a more realistic enhanced GLI and achieve the learning objective of the neural network, this paper adopts the mean absolute error (MAE) as the loss for both the color restoration module and the color enhancement module. Let B be the input clear GLI; a the input low-illumination GLI; g(a) the enhanced image; and f(a) the image output after color restoration. Then, the loss function can be expressed as:

$\operatorname{Loss}_{M A E}=\|B-g(a)\|_{1}+\|B-f(a)\|_{1}$          (26)

The two terms in the MAE loss function are of equal importance. They are eventually merged into the total loss for backpropagation.

To minimize the perceptual feature difference between the enhanced image and the real image, this paper introduces a perceptual loss based on the pre-trained VGG16 network, which enhances the visual authenticity of the GLI. Let Ψi(g(a)), Ψi(f(a)) and Ψi(B) be the feature maps of g(a), f(a), and B, respectively; and Xi, Yi, and Zi the number of channels, height, and width of the feature map, respectively. Then, the perceptual loss function can be expressed as:

$\operatorname{Loss}_{N E T}=\frac{1}{X_{i} Y_{i} Z_{i}}\left(\left\|\Psi_{i}(g(a))-\Psi_{i}(B)\right\|_{2}+\left\|\Psi_{i}(f(a))-\Psi_{i}(B)\right\|_{2}\right)$          (27)

To better restore the color details and design structure of GLIs, this paper uses the gradient loss functions in the horizontal and vertical directions to train the constructed neural network. The gradient losses in the two directions GRr and GRf can be respectively calculated by:

$G R_{r}=\left\|g_{r}(a)-B_{r}\right\|_{2}$          (28)

$G R_{f}=\left\|g_{f}(a)-B_{f}\right\|_{2}$          (29)

Let ϕ be the adjustment parameter of the loss function. Then, the total loss of the color restoration and enhancement neural network for GLIs can be given by:

$\operatorname{Loss}_{T}=\operatorname{Loss}_{M A E}+\phi \operatorname{Loss}_{N E T}+G R_{r}+G R_{f}$          (30)
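
Under stated assumptions, the PyTorch sketch below assembles the total objective of formula (30): the VGG16 cut-off layer, the value of ϕ, and the use of mean-squared error in place of the normalized L2 norms are ours, while the structure of the four terms follows formulas (26)-(29).

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# frozen VGG16 feature extractor for the perceptual term; cutting at layer 16
# (relu3_3) is our assumption -- the paper does not name the layer
_vgg = vgg16(pretrained=True).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def total_loss(g_a, f_a, b, phi=0.1):
    """Sketch of the training objective, formulas (26)-(30).

    g_a: enhanced output g(a); f_a: color-restored output f(a);
    b: clear reference GLI B. phi = 0.1 is a placeholder for the
    adjustment parameter.
    """
    # MAE loss over both module outputs, formula (26)
    loss_mae = F.l1_loss(g_a, b) + F.l1_loss(f_a, b)

    # perceptual loss on VGG16 feature maps, formula (27);
    # mean-squared error stands in for the normalized L2 norms
    loss_net = F.mse_loss(_vgg(g_a), _vgg(b)) + F.mse_loss(_vgg(f_a), _vgg(b))

    # horizontal and vertical gradient losses, formulas (28)-(29)
    def grad_h(x): return x[..., :, 1:] - x[..., :, :-1]
    def grad_v(x): return x[..., 1:, :] - x[..., :-1, :]
    gr = F.mse_loss(grad_h(g_a), grad_h(b)) + F.mse_loss(grad_v(g_a), grad_v(b))

    # total loss, formula (30)
    return loss_mae + phi * loss_net + gr
```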

4. Experiments and Results Analysis

To verify the effectiveness of the proposed GLI color restoration algorithm, the performance of the modified dynamic threshold algorithm with and without ψ was quantified and analyzed. Table 1 presents the results of color cast detection based on the equivalent circle, together with the evaluation results of GLI color quality.

The results show that the modified dynamic threshold algorithm with ψ outperformed that without ψ in GLI color restoration, as evidenced by the relatively good restored color quality of all three types of GLIs (landscape architecture, landscape plants, and landscape water system). Despite a few color offsets in some images, the algorithm with ψ performed excellently in the overall color restoration of GLIs. After enlarging the restored images, it can be found that some details of sculptures and artificial landscapes were better restored, and the color of water surfaces involving reflection/deflection was expressed accurately without over-exposure, once the modified dynamic threshold algorithm was coupled with ψ.

Table 1. Quantified evaluation of GLI color restoration

| Comparative images | Test results | Evaluation results |
|---|---|---|
| Landscape architecture, original image | 1.253 | 0.3628 |
| Landscape architecture, without ψ | 1.362 | 0.5284 |
| Landscape architecture, with ψ | 1.045 | 0.6281 |
| Landscape plants, original image | 4.263 | 0.4812 |
| Landscape plants, without ψ | 0.8526 | 0.4628 |
| Landscape plants, with ψ | 1.074 | 0.5326 |
| Landscape water system, original image | 8.256 | 0.4158 |
| Landscape water system, without ψ | 1.627 | 0.4785 |
| Landscape water system, with ψ | 1.362 | 0.5529 |

The histogram equalization simulation was carried out on GLIs captured in a low illumination environment. Figure 4 displays the histogram changes of the images before and after equalization.

Figure 4. Histograms before and after introducing the MAE loss and perceptual loss: (1) before equalization; (2) after equalization

Based on the principle of the color restoration and enhancement model and Figure 4, the CNN-based enhancement of GLIs can be regarded as an approximate calculation process from a continuous state to a discrete state. As shown in Figure 4, the quantization error was small after introducing the MAE loss and perceptual loss, so different input gray values still mapped to distinguishable output gray levels. This effectively prevents two problems of traditional image enhancement methods: grayscale merging and the loss of GLI color information. It can be intuitively seen from Figure 4 that, before the MAE loss and perceptual loss were introduced, the pixels of the enhanced image were discretely distributed, and the histogram failed to retain the shape of the original image; after the two losses were introduced, the gray levels were uniformly distributed across the interval, and the histogram matched the shape of the original image.

The restoration of the color information in the output GLI is strongly affected by the applied gains and offsets. By adjusting the affine transformation parameters, it is possible to control the degree of color information restoration. Under the same experimental environment, a series of simulations were conducted with our algorithm, single-scale Retinex, and multi-scale Retinex. The color restoration results of these algorithms are compared in Figure 5.

Figure 5. Experimental results of color recovery of different algorithms

Figure 5 clearly shows that the GLI processed by our algorithm is much clearer, higher in quality, and higher in brightness and contrast than those produced by single-scale or multi-scale Retinex. By contrast, the image processed by single-scale Retinex has low contrast, and the one processed by multi-scale Retinex is overly whitened. Neither of the two contrastive algorithms outputs realistic enhanced images that conform to human visual perception.

Tables 2-4 compare the quality of the GLIs enhanced by the different algorithms. All three algorithms managed to enhance the color of the original GLIs. However, our algorithm was superior to the two traditional color enhancement algorithms in terms of discrete entropy, clarity, and contrast, and effectively improved the readability of low illumination GLIs.
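
The paper does not spell out how the discrete entropy metric of Table 2 is computed; a common formulation, assumed in the sketch below, is the Shannon entropy of the image's gray-level histogram.

```python
import numpy as np

def discrete_entropy(gray, levels=256):
    """Shannon entropy (bits) of the gray-level histogram.

    gray: integer grayscale image with values in [0, levels).
    Higher entropy indicates richer gray-level content in the GLI.
    """
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                 # drop empty bins; 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```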

Table 2. Discrete entropy metric of each algorithm

| Objects | Lawns | Trees | Pools | Roads | Rockeries |
|---|---|---|---|---|---|
| Original image | 5.326 | 5.625 | 5.124 | 5.826 | 5.392 |
| Single-scale Retinex | 5.842 | 7.362 | 6.495 | 7.152 | 7.025 |
| Multi-scale Retinex | 6.114 | 7.285 | 6.295 | 7.025 | 7.952 |
| Our algorithm | 6.174 | 7.025 | 6.385 | 6.119 | 6.258 |

| Objects | Flowers | Sculptures | Benches | Fences | Landscape stones |
|---|---|---|---|---|---|
| Original image | 5.482 | 5.112 | 5.386 | 5.924 | 5.628 |
| Single-scale Retinex | 7.114 | 7.258 | 7.062 | 6.258 | 7.385 |
| Multi-scale Retinex | 7.415 | 7.228 | 6.958 | 7.151 | 7.335 |
| Our algorithm | 6.745 | 6.185 | 7.284 | 5.296 | 7.118 |

Table 3. Clarity metric of each algorithm

| Objects | Lawns | Trees | Pools | Roads | Rockeries |
|---|---|---|---|---|---|
| Original image | 0.316 | 3.048 | 0.859 | 0.527 | 0.524 |
| Single-scale Retinex | 1.328 | 6.582 | 2.748 | 2.563 | 2.162 |
| Multi-scale Retinex | 1.428 | 7.259 | 2.748 | 2.625 | 2.147 |
| Our algorithm | 1.002 | 6.285 | 2.115 | 1.172 | 0.851 |

| Objects | Flowers | Sculptures | Benches | Fences | Landscape stones |
|---|---|---|---|---|---|
| Original image | 0.263 | 0.485 | 0.274 | 0.857 | 1.864 |
| Single-scale Retinex | 0.748 | 1.625 | 1.147 | 3.265 | 5.185 |
| Multi-scale Retinex | 0.859 | 1.285 | 1.759 | 2.185 | 7.629 |
| Our algorithm | 0.952 | 1.425 | 0.852 | 3.625 | 5.362 |

Table 4. Contrast metric of each algorithm

| Objects | Lawns | Trees | Pools | Roads | Rockeries |
|---|---|---|---|---|---|
| Original image | 0.041 | 0.362 | 0.148 | 0.015 | 0.026 |
| Single-scale Retinex | 0.057 | 0.085 | 0.485 | 0.152 | 0.263 |
| Multi-scale Retinex | 0.396 | 1.258 | 0.248 | 0.152 | 0.263 |
| Our algorithm | 0.544 | 0.984 | 0.557 | 0.421 | 0.442 |

| Objects | Flowers | Sculptures | Benches | Fences | Landscape stones |
|---|---|---|---|---|---|
| Original image | 0.014 | 0.025 | 0.041 | 0.074 | 0.085 |
| Single-scale Retinex | 0.525 | 0.142 | 0.824 | 0.148 | 0.362 |
| Multi-scale Retinex | 0.048 | 0.157 | 0.748 | 0.596 | 0.724 |
| Our algorithm | 0.413 | 0.154 | 0.799 | 0.642 | 0.821 |

5. Conclusions

This paper designs a novel method for the color enhancement of low illumination GLIs. To achieve ideal effects of color restoration and clarity enhancement, the authors detailed how to restore the color of GLIs based on a modified dynamic threshold, and constructed a CNN for restoring and enhancing the color of low illumination GLIs, which overcomes the difficulty of manual feature design in landscape design renderings. Through experiments, the performance of the modified dynamic threshold algorithm with and without ψ was quantified and analyzed. According to the results of color cast detection based on the equivalent circle and the evaluation results of GLI color quality, the algorithm with ψ outperformed that without ψ in GLI color restoration. In addition, the histogram changes of the GLIs before and after introducing the MAE loss and perceptual loss were recorded: after the two losses were introduced, the gray levels were uniformly distributed across the interval, and the histogram matched the shape of the original image. Finally, the color restoration results of different algorithms were compared. The comparison further confirms that our algorithm is superior to the two traditional color enhancement algorithms in terms of discrete entropy, clarity, and contrast, and effectively improves the readability of low illumination GLIs.

Acknowledgment

2021 Philosophy and Social Science planning project of Henan Province, Research on the strategy of screening, protection and Utilization of rural red cultural resources in Central Plains, Grant No.: 2021BYS051; 2021 Philosophy and Social Science project of Henan Province, Research on the protection of traditional Village landscape features in Henan province, Grant No.: 2021BYS048; Research on color identification system and planning path of traditional villages in central China under the background of rural revitalization strategy, Special Application for Key Research and Development and Promotion of Henan Province, Grant No.: 212400410381; Research on Strategies for Memory Protection and Inheritance of Industrial and Trade Traditional Villages in Henan from the Perspective of Village Culture, Grant No.: 2021-ZZJH-453; Research on Spatial Satisfaction Evaluation and Renewal Protection Strategy for Inheritance of Traditional Village Context in Southern Henan province, Grant No.: 2021-ZDJh-422; Research on promoting the characteristic development of Henan cultural industry with social innovation, Subject of Henan social science planning, Grant No.: 2018BYS022; Research on Spatial Feature Improvement design of Traditional Village Landscape in Southern Henan Under Protection Early Warning Strategy, Grant No.: 2020-ZZJH-519.

References

[1] Wangda, P., Hussin, Y.A., Bronsveld, M.C., Karna, Y.K. (2019). Species stratification and upscaling of forest carbon estimates to landscape scale using GeoEye-1 image and lidar data in sub-tropical forests of Nepal. International Journal of Remote Sensing, 40(20): 7941-7965. https://doi.org/10.1080/01431161.2019.1607981

[2] Li, Z., Han, X., Wang, L.Y., Zhu, T.Y., Yuan, F.T. (2020). Feature extraction and image retrieval of landscape images based on image processing. Traitement du Signal, 37(6): 1009-1018. https://doi.org/10.18280/ts.370613

[3] Gudmann, A., Csikós, N., Szilassi, P., Mucsi, L. (2020). Improvement in satellite image-based land cover classification with landscape metrics. Remote Sensing, 12(21): 3580. https://doi.org/10.3390/rs12213580

[4] Snavely, R.A., Uyeda, K.A., Stow, D.A., O’Leary, J.F., Lambert, J. (2019). Mapping vegetation community types in a highly disturbed landscape: integrating hierarchical object-based image analysis with lidar-derived canopy height data. International Journal of Remote Sensing, 40(11): 4384-4400. https://doi.org/10.1080/01431161.2018.1562588

[5] Endo, Y., Kanamori, Y., Kuriyama, S. (2019). Animating landscape: self-supervised learning of decoupled motion and appearance for single-image video synthesis. arXiv preprint arXiv:1910.07192.

[6] Trimble, J., Berezovsky, J. (2021). Barkhausen Imaging: A magneto-optical approach to mapping the pinning landscape in soft ferromagnetic films. Journal of Magnetism and Magnetic Materials, 523: 167585. https://doi.org/10.1016/j.jmmm.2020.167585

[7] Park, C., Lee, I.K. (2020). Emotional landscape image generation using generative adversarial networks. In Proceedings of the Asian Conference on Computer Vision.

[8] Popelková, R., Mulková, M. (2016). Multitemporal aerial image analysis for the monitoring of the processes in the landscape affected by deep coal mining. European Journal of Remote Sensing, 49(1): 973-1009. https://doi.org/10.5721/EuJRS20164951

[9] Lu, X., Zhang, J., Hong, J., Wang, L. (2016). Analysis of wetland landscape evaluation and its driving factors in Yellow River Delta based on remote sensing image. Transactions of the Chinese Society of Agricultural Engineering, 32(1): 214-223. https://doi.org/10.11975/j.issn.1002-6819.2016.z1.030

[10] Lu, S., Zhang, Q., Liu, Y., Liu, L., Zhu, Q., Jing, K. (2020). Retrieval of multiple spatiotemporally correlated images on tourist attractions based on image processing. Traitement du Signal, 37(5): 847-854. https://doi.org/10.18280/ts.370518

[11] Kim, D., Noh, Y. (2021). An aerosol extinction coefficient retrieval method and characteristics analysis of landscape images. Sensors, 21(21): 7282. https://doi.org/10.3390/s21217282

[12] Yin, S. (2014). Explore the use of computer-aided design in the landscape renderings. In Applied Mechanics and Materials, 687: 1166-1169. https://doi.org/10.4028/www.scientific.net/AMM.687-691.1166

[13] Li, D. (2020). Explore the application of computer modeling and rendering technology in rural landscape color design. In Journal of Physics: Conference Series, 1578(1): 012024. https://doi.org/10.1088/1742-6596/1578/1/012024

[14] Kim, S.Y., Lee, K. (2006). Design and implementation of mobile 3D city landscape authoring/rendering system. In Innovations in 3D Geo Information Systems, pp. 439-446. https://doi.org/10.1007/978-3-540-36998-1_35

[15] Omodani, M., Ohta, M., Tanaka, T., Hoshino, Y. (1993). High-quality photographic color image reproduction using ion flow printing and its application to color facsimile. Journal of Imaging Science and Technology, 37(1): 37-42.

[16] Zhao, Y. (2021). Fast image blending for high-quality panoramic images on mobile phones. Multimedia Tools and Applications, 80(1): 499-516. https://doi.org/10.1007/s11042-020-09717-5

[17] Wang, B., He, J., Yu, L., Xia, G.S., Yang, W. (2020). Event enhanced high-quality image recovery. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16, pp. 155-171. https://doi.org/10.1007/978-3-030-58601-0_10

[18] Ruiz-Santaquiteria, J., Espinosa-Aranda, J.L., Deniz, O., Sanchez, C., Borrego-Ramos, M., Blanco, S. (2018). Low-cost oblique illumination: An image quality assessment. Journal of Biomedical Optics, 23(1): 016001. https://doi.org/10.1117/1.JBO.23.1.016001

[19] Shi, Z., Guo, B., Zhao, M., Zhang, C. (2018). Nighttime low illumination image enhancement with single image using bright/dark channel prior. EURASIP Journal on Image and Video Processing, (1): 1-15. https://doi.org/10.1186/s13640-018-0251-4

[20] Song, M.Z., Qu, H.S., Zhang, G.X., Tao, S.P., Jin, G. (2018). Low-illumination image denoising method for wide-area search of nighttime sea surface. Optoelectronics Letters, 14(3): 226-231. https://doi.org/10.1007/s11801-018-7268-x

[21] Song, M.Z., Qu, H.S., Li, L.M., Zhang, G.X., Jin, G. (2017). Pooling strategy for quality evaluation of full-reference model low illumination image. Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 25: 160-167. https://doi.org/10.3788/OPE.20172514.0160

[22] Zhang, S.T., Ning, D.Q., Wang, L. (2015). Real-time image intensification in tobacco sorting system under low illumination. Tobacco Science and Technology, 48(1): 96-100. https://doi.org/10.16135/j.issn1002-0861.20150117

[23] Yao, L., Kang, Z.M. (2020). Research on urban landscape image enhancement under the background of big data visualization. In 2020 International Conference on Big Data and Social Sciences (ICBDSS), pp. 29-32. https://doi.org/10.1109/ICBDSS51270.2020.00014

[24] Ma, S., Ma, H., Xu, Y., Li, S., Lv, C., Zhu, M. (2018). A low-light sensor image enhancement algorithm based on HSI color model. Sensors, 18(10): 3583. https://doi.org/10.3390/s18103583

[25] Yamashita, H., Sugimura, D., Hamamoto, T. (2017). Low-light color image enhancement via iterative noise reduction using RGB/NIR sensor. Journal of Electronic Imaging, 26(4): 043017. https://doi.org/10.1117/1.JEI.26.4.043017

[26] Yamashita, H., Sugimura, D., Hamamoto, T. (2015). Enhancing low-light color images using an RGB-NIR single sensor. In 2015 Visual Communications and Image Processing (VCIP), pp. 1-4. https://doi.org/10.1109/VCIP.2015.7457844

[27] Jung, Y.J. (2017). Enhancement of low light level images using color-plus-mono dual camera. Optics Express, 25(10): 12029-12051. https://doi.org/10.1364/OE.25.012029