Application of the Auxiliary Function Method to the Search for the Global Minimum of Functions of Many Variables

Tutkusheva Zhailan Salavatovna* | Otarov Khassen Toktarovich

Department of Mathematics, K. Zhubanov Aktobe Regional University, Aktobe 030000, Kazakhstan

Corresponding Author Email: zhailan_k@mail.ru

Pages: 1323-1329 | DOI: https://doi.org/10.18280/mmep.110523

Received: 12 November 2023 | Revised: 9 February 2024 | Accepted: 20 February 2024 | Available online: 30 May 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

In earlier works, we presented a new, economical, and effective method for finding the global optimum of a function of many variables, conditionally called the auxiliary function method. The essence of the method is that a multi-extremal objective function of many variables is transformed into a convex function $g_m(F, \alpha)$ of one variable, defined as a Lebesgue integral over the compact on which the objective function is considered: $g_m(F, \alpha)=\int_E[|F(x)-\alpha|-F(x)+\alpha]^m d \mu$, $m \in N$. The function $g_m(F, \alpha)$ is called the auxiliary function. In earlier works, the properties of the auxiliary function and the algorithm of the new method were studied, the convergence of the method was proven, and computational experiments were carried out with multi-extremal functions in three-dimensional space. Building on these results, and in order to demonstrate the advantages of the auxiliary function method, this paper considers the problem of finding global minima of objective functions in four-dimensional space constructed from hyperbolic and exponential potentials, and presents a comparative analysis of the results obtained. Computational experiments on test functions in three-dimensional and four-dimensional space, in which auxiliary functions with different values of the degree $m \in N$ were examined, led to important conclusions, which are proven here. In particular, the dependence of the behavior of the auxiliary function on its degree $m$ is clearly demonstrated. This result opens further opportunities for improving the efficiency of the method: in the future, first- and second-order methods can be set up to find the "oldest" zero of the auxiliary function.

Keywords: 

global minimum, auxiliary function method, multidimensional optimization, non-convex optimization, multi-extreme optimization, test function construction, auxiliary function

1. Introduction

Many scientists are involved in the development and research of global optimization methods [1-5].

Our proposed method has an advantage over other known global optimization methods, both deterministic and stochastic: in our case, the number of iterations required to achieve the desired accuracy does not grow with the number of variables or the number of local extrema of the objective function. In addition, our approach guarantees finding the value of the global minimum with higher accuracy, which other methods do not always provide. For example, in the well-known gradient descent method, descents are run from a set of selected initial points, and the smallest of the local minima reached from these points is taken as the desired global minimum [6, 7]. However, there is no guarantee that the true global minimum is among these local minima. Moreover, descents started from different points may converge to the same local extremum, which incurs extra cost.

As for the well-known brute-force (grid search) method, the global minimum is taken as the smallest of the objective function values at the nodal points of the grid [8-11]. However, the global minimum need not be attained at a nodal point, and "refining" the mesh to achieve higher accuracy also requires additional cost. Moreover, as the number of variables in the objective function grows, the problem becomes even more computationally expensive.

Chinese scientists Chew and Zheng [12] and Zheng and Zhuang [13] explored the integral method of global optimization, which considers the mean value of a function and its variance. In this method, a special integral function is used.

In earlier works of one of the authors [14, 17, 18], the auxiliary function method was introduced for finding the global minimum of a multivariable function. The essence of the method is that a multi-extremal objective function of many variables is transformed into a convex function $g_m(F, \alpha)$ of one variable, defined as a Lebesgue integral over the compact on which the objective function is considered:

$\begin{gathered}g_m(F, \alpha)=\int_E[|F(x)-\alpha|-F(x)+\alpha]^m d \mu, \\ m \in N .\end{gathered}$               (1)

At the same time, important properties of the function $g_m(F, \alpha)$ (non-negativity, strict convexity, uniform continuity [14]) were studied and proved. It was proved that the objective function attains its global minimum $\hat{\alpha}$ only at the point $\hat{x}$, where,

$\begin{gathered}\hat{\alpha}=\operatorname{glob} \min F=F(\hat{x}), \\ \hat{\alpha}=\sup \left\{\alpha \in R: g_m(F, \alpha)=0\right\} .\end{gathered}$             (2)

In addition, it was found that the value of the global minimum of the objective function is contained in the interval $\left[c_0, d_0\right]$, where,

$g_m\left(F, c_0\right)=0$ and $g_m\left(F, d_0\right)>0$            (3)
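Definition (1) and the bracketing property (3) are easy to check numerically. The sketch below is our illustration, not the authors' implementation: it approximates $g_m$ with a simple midpoint rule for a one-dimensional toy objective $F(x)=(x^2-1)^2$, whose global minimum value is $\hat{\alpha}=0$; the names `g_m` and `F` are ours.

```python
def g_m(F, alpha, m, a, b, n=2000):
    """Midpoint-rule approximation of the auxiliary function (1):
    integral over [a, b] of [|F(x)-alpha| - (F(x)-alpha)]^m dx."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        d = F(a + (i + 0.5) * h) - alpha
        total += (abs(d) - d) ** m * h
    return total

# Toy objective with two equal wells; its global minimum value is alpha_hat = 0.
F = lambda x: (x * x - 1.0) ** 2

print(g_m(F, -0.5, 3, -3.0, 3.0))  # alpha below alpha_hat -> exactly 0.0
print(g_m(F, 0.5, 3, -3.0, 3.0))   # alpha above alpha_hat -> positive
```

For $\alpha$ below the global minimum the integrand is identically zero, so $g_m$ vanishes exactly; above it, $g_m$ becomes positive, which is precisely the dichotomy (3) exploits.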

2. Computational Experiment

We use test functions based on hyperbolic and exponential potentials [15, 16], which, in our opinion, are more representative for optimization experiments than the well-known test functions (power, trigonometric, and others) considered in our previous works [17, 18]. In addition, unlike the previous articles, this one uses objective functions of three variables with a more complex structure.

Example 1. Based on the hyperbolic potentials of 2 variables, we construct a test function:

$\begin{aligned} F_1(x, y)=-\frac{1}{5\left(|x-3|^2+|y-4|^2\right)+1}- & \frac{1}{2\left(|x+5|^5+|y+8|^5\right)+1.8}-\frac{1}{10\left(|x+5|^{2.2}+|y-6|^{2.2}\right)+1.5} \\ & -\frac{1}{2\left(|x-7|^{2.5}+|y+8|^{2.5}\right)+2.5}-\frac{1}{3\left(|x|^3+|y|^3\right)+2}\end{aligned}$              (4)

The graph of continuous function (4) is illustrated in Figure 1.

Figure 1. Graph of a function (4)

It has five local minima, at the points (3;4), (-5;-8), (-5;6), (7;-8) and (0;0). One of them is the global minimum point. Direct evaluation shows that the global minimum is attained at (3;4), with value $F_1(3,4) \approx-1.0055606411$.
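Each term of (4) is a potential well centered at one of the listed points, so the global minimum can be checked by evaluating $F_1$ at the well centers and picking the deepest. The following short check is our illustrative script, not part of the paper's C++ code:

```python
def F1(x, y):
    """Test function (4): a sum of five hyperbolic potential wells."""
    return (-1.0 / (5 * (abs(x - 3) ** 2 + abs(y - 4) ** 2) + 1)
            - 1.0 / (2 * (abs(x + 5) ** 5 + abs(y + 8) ** 5) + 1.8)
            - 1.0 / (10 * (abs(x + 5) ** 2.2 + abs(y - 6) ** 2.2) + 1.5)
            - 1.0 / (2 * (abs(x - 7) ** 2.5 + abs(y + 8) ** 2.5) + 2.5)
            - 1.0 / (3 * (abs(x) ** 3 + abs(y) ** 3) + 2))

wells = [(3, 4), (-5, -8), (-5, 6), (7, -8), (0, 0)]
values = {p: F1(*p) for p in wells}
best = min(values, key=values.get)
print(best, values[best])  # (3, 4) is the deepest well
```

At (3;4) the first term contributes exactly $-1$ and the remaining four wells add about $-0.0056$, which reproduces the stated value $-1.0055606411$.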

Now, to find the same minimum, we apply the above-mentioned method of the auxiliary function. Consider it in the square [-10;10]×[-10;10]. In this case, for Eq. (4) the auxiliary function will have the form:

$g_m\left(F_1, \alpha\right)=\int_{-10}^{10} \int_{-10}^{10}\left[\left|F_1(x, y)-\alpha\right|-\left(F_1(x, y)-\alpha\right)\right]^m d x d y$              (5)

From our earlier article [14], it is known that if the conditions $g_m\left(F, c_0\right)=0$ and $g_m\left(F, d_0\right)>0$ are satisfied, then $\widehat{\alpha} \in$ $\left[c_0 ; d_0\right)$.

Next, we apply the half division (bisection) method to the auxiliary function to find its highest zero $\hat{\alpha}$. We take the middle of the interval, $\alpha_0=\frac{c_0+d_0}{2}$, and calculate the value of the auxiliary function $g_m\left(F, \alpha_0\right)$ at this point. Depending on this value, we select one of two intervals:

if $g_m\left(F, \alpha_0\right)=0$, then $\widehat{\alpha} \in\left[\alpha_0 ; d_0\right)$;

if $g_m\left(F, \alpha_0\right)>0$, then $\widehat{\alpha} \in\left[c_0 ; \alpha_0\right)$.

We repeat this halving of the interval until the required accuracy is achieved. The results of each iteration are shown in Tables 1-3. A detailed description of the algorithm is given in our earlier works [14, 17, 18].
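The halving scheme above can be sketched as follows. This is our simplified illustration on a one-dimensional toy objective, with a midpoint-rule quadrature standing in for the Sobolev cubature used in the paper; because of the finite quadrature grid, the recovered "highest zero" is accurate only up to the grid resolution. The names `g_m` and `highest_zero` are ours:

```python
def g_m(F, alpha, m, a, b, n=2000):
    # Auxiliary function (1) on [a, b], midpoint-rule quadrature.
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        d = F(a + (i + 0.5) * h) - alpha
        s += (abs(d) - d) ** m * h
    return s

def highest_zero(F, c0, d0, m=3, a=-3.0, b=3.0, tol=1e-8):
    """Bisection for alpha_hat = sup{alpha : g_m(F, alpha) = 0},
    assuming g_m(F, c0) = 0 and g_m(F, d0) > 0."""
    while d0 - c0 > tol:
        mid = 0.5 * (c0 + d0)
        if g_m(F, mid, m, a, b) == 0.0:
            c0 = mid          # still at or below the global minimum
        else:
            d0 = mid          # already above it
    return 0.5 * (c0 + d0)

F = lambda x: (x * x - 1.0) ** 2       # global minimum value 0 at x = ±1
alpha_hat = highest_zero(F, -1.0, 1.0)
print(alpha_hat)                        # ≈ 0, up to quadrature resolution
```

Note that the interval test `g_m(...) == 0.0` is exact here because the integrand vanishes identically for $\alpha$ below the minimum; in a production code a small threshold would replace the exact comparison.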

Table 1. Values of auxiliary function (5) at m=6, 4, 3

| $\alpha$ | $g_6\left(F_1, \alpha\right)$ | $g_4\left(F_1, \alpha\right)$ | $g_3\left(F_1, \alpha\right)$ |
|---|---|---|---|
| 0 | 13.6008 | 9.14783 | 1.9 |
| -10 | 0 | 0 | 0 |
| -5 | 0 | 0 | 0 |
| -2.5 | 0 | 0 | 0 |
| -1.25 | 0 | 0 | 0 |
| -0.625 | 0.008649 | 0.0189936 | 0.0312887 |
| -0.9375 | 2.5·10^-7 | 1.4·10^-5 | 0.000100888 |
| -1.09375 | 0 | 0 | 0 |
| -1.015625 | 0 | 0 | 0 |
| -0.9765625 | 1.5·10^-9 | 4.5·10^-7 | 7.8·10^-6 |
| -0.99609375 | 1.8·10^-12 | 5.1·10^-9 | 2.7·10^-7 |
| -1.005859375 | 0 | 0 | 0 |
| -1.0009765625 | 2.4·10^-14 | 2.8·10^-10 | 3.1·10^-8 |
| -1.0034179688 | 2.4·10^-16 | 1.3·10^-11 | 3.1·10^-7 |
| -1.0046386719 | 1.6·10^-18 | 4.6·10^-13 | 2.5·10^-10 |
| -1.0052490235 | 2.3·10^-21 | 6·10^-15 | 9.7·10^-12 |
| -1.0055541993 | 1.8·10^-31 | 1.1·10^-21 | 8.5·10^-17 |
| -1.0057067872 | 0 | 0 | 0 |
| -1.0056304933 | 0 | 0 | 0 |
| -1.0055923463 | 0 | 0 | 0 |
| -1.0055732728 | 0 | 0 | 0 |
| -1.0055637361 | 0 | 0 | 0 |
| -1.0055589677 | 5.6·10^-35 | 5·10^-24 | 1.5·10^-18 |
| -1.0055613519 | 0 | 0 | 0 |
| -1.0055601598 | 3.2·10^-38 | 3.4·10^-26 | 3.6·10^-20 |
| -1.0055607559 | 0 | 0 | 0 |
| -1.0055604579 | 1.0·10^-40 | 7.2·10^-28 | 1.0·10^-20 |
| -1.0055606069 | 4.1·10^-45 | 8.7·10^-31 | 1.3·10^-23 |
| -1.0055606814 | 0 | 0 | 0 |
| -1.0055606441 | 0 | 0 | 0 |
| -1.0055606255 | 3.6·10^-47 | 3.8·10^-32 | 1.2·10^-24 |
| -1.0055606348 | 1.5·10^-49 | 9.9·10^-34 | 7.9·10^-26 |
| -1.0055606394 | 5.5·10^-53 | 5·10^-36 | 1.5·10^-27 |
| -1.0055606417 | 0 | 0 | 0 |
| -1.0055606405 | 8.3·10^-56 | 6.5·10^-38 | 5.8·10^-29 |
| -1.0055606411 | 0 | 0 | 0 |
| -1.0055606408 | 9·10^-58 | 3.2·10^-39 | 6·10^-30 |
| -1.0055606409 | 5.2·10^-59 | 4.7·10^-40 | 1.4·10^-30 |
| -1.0055606410 | 2·10^-61 | 1.2·10^-41 | 8.9·10^-32 |
| -1.0055606410625 | 1.0·10^-69 | 3.4·10^-47 | 6.3·10^-36 |

Table 2. Values of auxiliary function (7) at m=6, 4, 3

| $\alpha$ | $g_6\left(F_2, \alpha\right)$ | $g_4\left(F_2, \alpha\right)$ | $g_3\left(F_2, \alpha\right)$ |
|---|---|---|---|
| 1 | 467228 | 120272 | 61522 |
| 0 | 7.6 | 7.17809 | 8.73493 |
| -10 | 0 | 0 | 0 |
| -5 | 0 | 0 | 0 |
| -2.5 | 0 | 0 | 0 |
| -1.25 | 0 | 0 | 0 |
| -0.625 | 0.0013 | 0.0039362 | 0.00755927 |
| -0.9375 | 4.3·10^-9 | 2.7·10^-7 | 2.3·10^-6 |
| -1.09375 | 0 | 0 | 0 |
| -1.015625 | 0 | 0 | 0 |
| -0.9765625 | 1.5·10^-11 | 6·10^-9 | 1.2·10^-7 |
| -0.99609375 | 1.3·10^-15 | 1.2·10^-11 | 1.2·10^-9 |
| -1.005859375 | 0 | 0 | 0 |
| -1.0009765625 | 1.1·10^-22 | 2.4·10^-16 | 3.4·10^-13 |
| -1.0034179688 | 0 | 0 | 0 |
| -1.0021972657 | 0 | 0 | 0 |
| -1.0015869141 | 0 | 0 | 0 |
| -1.0012817383 | 4.5·10^-28 | 5.9·10^-20 | 6.7·10^-16 |
| -1.0014343262 | 0 | 0 | 0 |
| -1.0013580323 | 0 | 0 | 0 |
| -1.0013198853 | 2.1·10^-33 | 1.6·10^-23 | 1.4·10^-18 |
| -1.0013389588 | 0 | 0 | 0 |
| -1.0013294221 | 0 | 0 | 0 |
| -1.0013246537 | 2.7·10^-38 | 9·10^-27 | 5.2·10^-21 |
| -1.0013270379 | 0 | 0 | 0 |
| -1.0013258458 | 0 | 0 | 0 |
| -1.0013252498 | 2.5·10^-41 | 8.6·10^-29 | 1.6·10^-22 |
| -1.0013255478 | 0 | 0 | 0 |
| -1.0013253988 | 2.1·10^-43 | 3.5·10^-30 | 1.4·10^-23 |
| -1.0013254733 | 7.2·10^-46 | 8·10^-32 | 8.5·10^-25 |
| -1.0013255106 | 6.3·10^-50 | 1.6·10^-34 | 8·10^-27 |
| -1.0013255292 | 0 | 0 | 0 |
| -1.0013255199 | 7.2·10^-57 | 3.7·10^-39 | 2.7·10^-30 |
| -1.0013255246 | 0 | 0 | 0 |
| -1.0013255223 | 0 | 0 | 0 |
| -1.0013255211 | 0 | 0 | 0 |
| -1.0013255205 | 4.6·10^-62 | 1.3·10^-42 | 6.8·10^-33 |
| -1.0013255208 | 0 | 0 | 0 |
| -1.0013255207 | 0 | 0 | 0 |
| -1.0013255206 | 0 | 0 | 0 |
| -1.00132552055 | 5·10^-64 | 6.3·10^-44 | 7.1·10^-34 |
| -1.00132552058 | 6.1·10^-67 | 7.2·10^-46 | 2.5·10^-35 |

Table 3. Values of auxiliary function (9) at m=6, 4, 3

| $\alpha$ | $g_6\left(F_3, \alpha\right)$ | $g_4\left(F_3, \alpha\right)$ | $g_3\left(F_3, \alpha\right)$ |
|---|---|---|---|
| 0 | 1.8·10^6 | 12620.1 | 1132.9 |
| -10 | 0 | 0 | 0 |
| -5 | 356.491 | 29.211 | 8.75615 |
| -7.5 | 0 | 0 | 0 |
| -6.25 | 0.469733 | 0.27444 | 0.21661 |
| -6.875 | 3·10^-6 | 6.1·10^-5 | 0.00027718 |
| -7.1875 | 0 | 0 | 0 |
| -7.03125 | 0 | 0 | 0 |
| -6.953125 | 2.3·10^-9 | 4.7·10^-7 | 6.8·10^-6 |
| -6.9921875 | 1.5·10^-14 | 6·10^-11 | 3.8·10^-9 |
| -7.01171875 | 0 | 0 | 0 |
| -7.001953125 | 0 | 0 | 0 |
| -6.9970703125 | 4.1·10^-17 | 1.2·10^-12 | 2·10^-10 |
| -6.99951171875 | 9.8·10^-22 | 9.93·10^-16 | 9.9·10^-13 |
| -7.000732421875 | 0 | 0 | 0 |
| -7.0001220703125 | 0 | 0 | 0 |
| -6.99981689453125 | 3.3·10^-24 | 2.2·10^-17 | 5.8·10^-14 |
| -6.999969482421875 | 2.8·10^-28 | 4.3·10^-20 | 5.3·10^-16 |
| -7.000045776367187 | 0 | 0 | 0 |
| -7.000007629394531 | 1.1·10^-35 | 5.1·10^-25 | 1.1·10^-19 |
| -7.000026702880859 | 0 | 0 | 0 |
| -7.00001716613769 | 0 | 0 | 0 |
| -7.00001239776611 | 0 | 0 | 0 |
| -7.00001001358032 | 0 | 0 | 0 |
| -7.00000882148742 | 1.7·10^-37 | 3.1·10^-26 | 1.3·10^-20 |
| -7.00000941753387 | 2.6·10^-39 | 1.9·10^-27 | 1.6·10^-21 |
| -7.00000971555709 | 3.6·10^-41 | 1.1·10^-28 | 1.9·10^-22 |
| -7.00000986456871 | 4.4·10^-43 | 5.8·10^-30 | 2.1·10^-23 |
| -7.00000993907451 | 4.1·10^-45 | 2.6·10^-31 | 2·10^-24 |
| -7.00000997632741 | 2·10^-47 | 7.4·10^-33 | 1.4·10^-25 |
| -7.00000999495387 | 1.1·10^-50 | 4.9·10^-35 | 3.3·10^-27 |
| -7.00001000426709 | 0 | 0 | 0 |
| -7.00000999961048 | 3·10^-53 | 9.8·10^-37 | 1.7·10^-28 |
| -7.00001000193879 | 6.8·10^-58 | 7.7·10^-40 | 8.2·10^-31 |
| -7.00001000310294 | 0 | 0 | 0 |
| -7.00001000252087 | 0 | 0 | 0 |
| -7.00001000222983 | 2·10^-60 | 1.6·10^-41 | 4.5·10^-32 |
| -7.00001000237535 | 6.6·10^-65 | 1.6·10^-44 | 2.6·10^-34 |
| -7.00001000244811 | 0 | 0 | 0 |
| -7.00001000241173 | 0 | 0 | 0 |
| -7.00001000239354 | 4.1·10^-67 | 5.5·10^-46 | 2·10^-35 |
| -7.00001000240263 | 5.7·10^-70 | 6.9·10^-48 | 7.6·10^-37 |

We solve the problem of finding the value $\hat{\alpha}=\sup \left\{\alpha \in R: g_m\left(F_1, \alpha\right)=0\right\}$ for the cases $m=3,4,6$.

Figure 2. Graph of a function (5)

Thus, the global minimum is determined with accuracy $\varepsilon=10^{-10}$: $\hat{\alpha} \approx-1.00556064108125$. The number $m$ is responsible for the smoothness of the auxiliary function. Figure 2 shows graphs of the auxiliary function for different $m$: the smaller $m$ is, the faster the function grows, and vice versa. For any natural $m$, the graph is that of a non-decreasing function, strictly increasing for $\alpha>\hat{\alpha}$.

Note that the values of $g_m\left(F_1, \alpha\right)$ were obtained using Sobolev's cubature formulas [19, 20], implemented in C++.

Example 2. Based on the hyperbolic potentials of 3 variables, we construct a test function:

$\begin{aligned} F_2(x, y, z)=-\frac{1}{5 \cdot\left(|x-3|^2+|y-4|^2+|z+1|^2\right)+1}- & \frac{1}{2 \cdot\left(|x+5|^5+|y+8|^5+|z-2|^5\right)+1.8}-\frac{1}{10 \cdot\left(|x+5|^{2.2}+|y-6|^{2.2}+|z-9|^{2.2}\right)+3} \\ & -\frac{1}{2 \cdot\left(|x-7|^{2.5}+|y+8|^{2.5}+|z+1|^{2.5}\right)+2.5}\end{aligned}$              (6)

It is not difficult to see that this function has four local minima: $(3 ; 4 ;-1)$, $(-5 ;-8 ; 2)$, $(-5 ; 6 ; 9)$, $(7 ;-8 ;-1)$, and that the global minimum is reached at the point $(3 ; 4 ;-1)$. Preliminary calculations gave the value of the global minimum $\hat{\alpha}=-1.0013255206$.
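As in Example 1, this claim can be checked by evaluating (6) at the four well centers; the following illustrative snippet (ours, not part of the paper's code) confirms that $(3;4;-1)$ gives the smallest value:

```python
def F2(x, y, z):
    """Test function (6): four hyperbolic potential wells in R^3."""
    return (-1.0 / (5 * (abs(x - 3) ** 2 + abs(y - 4) ** 2 + abs(z + 1) ** 2) + 1)
            - 1.0 / (2 * (abs(x + 5) ** 5 + abs(y + 8) ** 5 + abs(z - 2) ** 5) + 1.8)
            - 1.0 / (10 * (abs(x + 5) ** 2.2 + abs(y - 6) ** 2.2 + abs(z - 9) ** 2.2) + 3)
            - 1.0 / (2 * (abs(x - 7) ** 2.5 + abs(y + 8) ** 2.5 + abs(z + 1) ** 2.5) + 2.5))

wells = [(3, 4, -1), (-5, -8, 2), (-5, 6, 9), (7, -8, -1)]
deepest = min(wells, key=lambda p: F2(*p))
print(deepest, F2(*deepest))  # (3, 4, -1) is the deepest well
```

At $(3;4;-1)$ the first term contributes exactly $-1$ and the other three wells add about $-0.0013$, reproducing the stated value $-1.0013255206$.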

Now, to find the global minimum $F_2(x, y, z)$, we apply the auxiliary function method.

Considering the function $F_2(x, y, z)$ in the cube $[-10 ; 10] \times[-10 ; 10] \times[-10 ; 10]$, we construct an auxiliary function for it:

$g_m\left(F_2, \alpha\right)=\int_{-10}^{10} \int_{-10}^{10} \int_{-10}^{10}\left[\left|F_2(x, y, z)-\alpha\right|-\left(F_2(x, y, z)-\alpha\right)\right]^m d x d y \, d z$,                   (7)

where, $m \in N$.

Thus, the transition from the objective function $F_2(x, y, z)$ to the auxiliary function $g_m\left(F_2, \alpha\right)$ is carried out. As noted above, the auxiliary function depends on one variable $\alpha$ and has "advantageous" properties: non-negativity, strict convexity, and uniform continuity [14]. We are now dealing not with a multi-extremal function, but with a convex function of one variable. The value of the global minimum $\hat{\alpha}$ of the objective function $F_2(x, y, z)$ corresponds to the "oldest" zero of the auxiliary function $g_m\left(F_2, \alpha\right)$, which is found by the half division method.

The three-dimensional integral is calculated by Sobolev's cubature formulas with a boundary layer [19, 20]. We carry out a computational experiment with the function $g_m\left(F_2, \alpha\right)$ for several values of $m$. For any natural $m$, the auxiliary function is increasing in $\alpha$; its graph is illustrated in Figure 3.

Figure 3. Graph of the auxiliary function (7)

Results of finding the value $\hat{\alpha}=\sup \left\{\alpha \in R: g_m\left(F_2, \alpha\right)=0\right\}$ for the cases $m=3,4,6$ are reflected in Table 2.

Table 2 shows the results of calculating the global minimum of the function $F_2(x, y, z)$ by applying the half division method to the function $g_m\left(F_2, \alpha\right)$ with an accuracy of $\varepsilon=10^{-10}$.

Special attention was paid to the change in the behavior of the auxiliary function $g_m(F, \alpha)$ depending on the exponent $m$; the table shows the results for $m=3,4,6$.

As can be seen, when finding $\hat{\alpha}$ by the half division method, the required accuracy is achieved in the same number of iterations regardless of the value of $m$. However, when finding $\hat{\alpha}$ by methods involving the derivative of the auxiliary function, the efficiency of the process can be improved by varying the exponent $m$.

Using the auxiliary function method, the global minimum is determined with accuracy $\varepsilon=10^{-10}$: $\hat{\alpha} \approx-1.0013255206$ for all three values of $m$. The found value coincides with the previously known one, and the specified accuracy is achieved in 42 iterations.

Example 3. We will construct a multi-extremal objective function (8) of 3 variables based on exponential potentials.

$\begin{aligned} & F_3(x, y, z)=-2 \cdot e^{-3 \cdot\left(|x-1|^{1.5}+|y-2|^{1.5}+|z+2|^{1.5}\right)} \\ & -3 \cdot e^{-7 \cdot\left(|x+3|^2+|y+1|^2+|z+5|^2\right)}-7 \cdot e^{-12 \cdot\left(|x-2|^4+|y|^4+|z-1|^4\right)} \\ & -4 \cdot e^{-2 \cdot\left(|x+4|^{0.5}+|y-4|^{0.5}+|z+3|^{0.5}\right)}-2 \cdot e^{-5 \cdot\left(|x|^{1.3}+|y|^{1.4}+|z+6|^{1.4}\right)} .\end{aligned}$                    (8)

It is not difficult to verify that function (8) has five local minima: (1;2;-2), (-3;-1;-5), (2;0;1), (-4;4;-3), (0;0;-6), of which (2;0;1) is the global minimum point. According to preliminary calculations, the value of the global minimum is $\hat{\alpha}=-7.0000100024$.
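Again, the claim is easy to verify by evaluating (8) at the five well centers: at $(2;0;1)$ the third term contributes exactly $-7$ and the remaining terms add about $-10^{-5}$, matching the stated value. An illustrative check (ours):

```python
import math

def F3(x, y, z):
    """Test function (8): five exponential potential wells in R^3."""
    return (-2 * math.exp(-3 * (abs(x - 1) ** 1.5 + abs(y - 2) ** 1.5 + abs(z + 2) ** 1.5))
            - 3 * math.exp(-7 * (abs(x + 3) ** 2 + abs(y + 1) ** 2 + abs(z + 5) ** 2))
            - 7 * math.exp(-12 * (abs(x - 2) ** 4 + abs(y) ** 4 + abs(z - 1) ** 4))
            - 4 * math.exp(-2 * (abs(x + 4) ** 0.5 + abs(y - 4) ** 0.5 + abs(z + 3) ** 0.5))
            - 2 * math.exp(-5 * (abs(x) ** 1.3 + abs(y) ** 1.4 + abs(z + 6) ** 1.4)))

wells = [(1, 2, -2), (-3, -1, -5), (2, 0, 1), (-4, 4, -3), (0, 0, -6)]
deepest = min(wells, key=lambda p: F3(*p))
print(deepest, F3(*deepest))  # (2, 0, 1), value ≈ -7.00001
```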

To calculate the global minimum $F_3(x, y, z)$ we use the auxiliary function method.

Consider the function $F_3(x, y, z)$ in the cube [-10;10]×[-10;10]×[-10;10]. The auxiliary function for (8) has the form:

$g_m\left(F_3, \alpha\right)=\int_{-10}^{10} \int_{-10}^{10} \int_{-10}^{10}\left[\left|F_3(x, y, z)-\alpha\right|-\left(F_3(x, y, z)-\alpha\right)\right]^m d x d y \, d z$,               (9)

where, $m \in N$.

The transition from the objective function $F_3(x, y, z)$ to the auxiliary function $g_m\left(F_3, \alpha\right)$ is performed. Since the value of the global minimum $\hat{\alpha}$ of the objective function $F_3(x, y, z)$ corresponds to the "oldest" zero of the auxiliary function $g_m\left(F_3, \alpha\right)$, we work with the auxiliary function (9). We perform a computational experiment for several values of $m$; the behavior of the auxiliary function for different values of $m$ is illustrated in Figure 4.

Similarly to the first example, the method of half division is used to search for the "oldest" zero of the auxiliary function $g_m\left(F_3, \alpha\right)$. Approximations to the "oldest" zero are shown in Table 3.

Figure 4. Graph of auxiliary function (9) 

As can be seen, the auxiliary function method determined the global minimum in this case in a small number of iterations with accuracy $\varepsilon=10^{-10}$; moreover, $\hat{\alpha} \approx-7.00001000239808$ for all three values of $m$. The resulting value of the global minimum fully agrees with the value stated above.

In order to study the behavior of the auxiliary function depending on the degree $m \in N$, we prove the following statement.

3. Result

Theorem. Let $r>s$, $r, s \in N$, $r>1$, $s>1$. Then:

1) if $\alpha-\hat{\alpha} \leq 0$, then $g_r(F, \alpha)=0$ and $g_s(F, \alpha)=0$;

2) if $0<\alpha-\hat{\alpha} \leq \frac{1}{2}$, then $g_r(F, \alpha) \leq g_s(F, \alpha)$.

Proof. 1) Since $\hat{\alpha}=F(\hat{x}) \leq F(x)$, for any $\alpha \in(-\infty, \hat{\alpha}]$ we have $\alpha \leq F(x)$, so $F(x)-\alpha \geq 0$ and the absolute value opens with a plus sign:

$g_r(F, \alpha)=\int_E[|F(x)-\alpha|-(F(x)-\alpha)]^r d \mu=\int_E[F(x)-\alpha-F(x)+\alpha]^r d \mu=0$.

By analogy $g_s(F, \alpha)=0$. That is, for $\alpha \leq \hat{\alpha}$, the auxiliary function is zero.

Now let $\alpha>\hat{\alpha}$. On the set where $\alpha-F(x)>0$ (which has positive measure, since $\alpha>F(\hat{x})$), the absolute value opens with a minus sign:

$g_r(F, \alpha)=\int_E[|F(x)-\alpha|-(F(x)-\alpha)]^r d \mu=\int_E[-F(x)+\alpha-F(x)+\alpha]^r d \mu=\int_E[2(\alpha-F(x))]^r d \mu=2^r \int_E[\alpha-F(x)]^r d \mu>0$.

By analogy $g_s(F, \alpha)>0$. Therefore, the value of the function is positive for $\alpha>\hat{\alpha}$.

2) Consider the difference of the auxiliary functions with exponents $r$ and $s$, $r>s$, for $\alpha>\hat{\alpha}$. Denote $E(F, \alpha)=\{x \in E: F(x)<\alpha\}$; on the complement $E \backslash E(F, \alpha)$ the integrand vanishes. Then

$\begin{aligned} g_r(F, \alpha)-g_s(F, \alpha) &=\int_E[|F(x)-\alpha|-(F(x)-\alpha)]^r d \mu-\int_E[|F(x)-\alpha|-(F(x)-\alpha)]^s d \mu \\ &=\int_{E(F, \alpha)}[|F(x)-\alpha|-(F(x)-\alpha)]^r d \mu-\int_{E(F, \alpha)}[|F(x)-\alpha|-(F(x)-\alpha)]^s d \mu \\ &=\int_{E(F, \alpha)} 2^r \cdot[\alpha-F(x)]^r d \mu-\int_{E(F, \alpha)} 2^s \cdot[\alpha-F(x)]^s d \mu \\ &=\int_{E(F, \alpha)}\left(2^r \cdot[\alpha-F(x)]^r-2^s \cdot[\alpha-F(x)]^s\right) d \mu .\end{aligned}$

If $0<\alpha-\hat{\alpha} \leq \frac{1}{2}$, i.e. $\hat{\alpha}<\alpha \leq \hat{\alpha}+\frac{1}{2}$, then for $x \in E(F, \alpha)$ we have $F(x) \geq \hat{\alpha}$, and hence $0<\alpha-F(x) \leq \frac{1}{2}$.

Then $0<2(\alpha-F(x)) \leq 1$, and since $r>s$, $[2(\alpha-F(x))]^r \leq[2(\alpha-F(x))]^s$, i.e. $2^r \cdot[\alpha-F(x)]^r-2^s \cdot[\alpha-F(x)]^s \leq 0$.

Therefore, $g_r(F, \alpha) \leq g_s(F, \alpha)$.

The theorem is proved.
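Both parts of the theorem can be observed numerically. The sketch below (ours) uses a one-dimensional toy objective and a midpoint-rule quadrature in place of the paper's cubature; the toy function has $\hat{\alpha}=0$, so below $\hat{\alpha}$ all auxiliary functions vanish, and for $0<\alpha-\hat{\alpha} \leq \frac{1}{2}$ the values decrease as the exponent grows:

```python
def g_m(F, alpha, m, a=-3.0, b=3.0, n=4000):
    # Auxiliary function (1) on [a, b] by the midpoint rule.
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        d = F(a + (i + 0.5) * h) - alpha
        s += (abs(d) - d) ** m * h
    return s

F = lambda x: (x * x - 1.0) ** 2   # global minimum value alpha_hat = 0

# Part 1: below alpha_hat both auxiliary functions vanish.
assert g_m(F, -0.2, 6) == 0.0 and g_m(F, -0.2, 3) == 0.0

# Part 2: for 0 < alpha - alpha_hat <= 1/2 the larger exponent gives
# the smaller value, g_6 <= g_4 <= g_3, since 2(alpha - F(x)) <= 1.
for alpha in (0.05, 0.2, 0.5):
    g6, g4, g3 = (g_m(F, alpha, m) for m in (6, 4, 3))
    assert g6 <= g4 <= g3
    print(alpha, g6, g4, g3)
```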

4. Conclusion

From the reasoning in the theorem, it follows that for $0<\alpha-\hat{\alpha} \leq \frac{1}{2}$, the smaller the value of $m$, the faster the auxiliary function (1) grows away from zero near its "oldest" zero.

The result obtained in this paper provides even more opportunities to improve the efficiency of the auxiliary function method. In the future, we can consider finding the value of $\hat{\alpha}$ using methods related to the derivative of the auxiliary function and improve the efficiency of the process by varying the values of the exponent $m$.

References

[1] Horst, R., Pardalos, P.M., Thoai, N.V. (2000). Introduction to Global Optimization. Springer Science & Business Media, New York, p. 354.

[2] Neumaier, A. (2004). Complete search in continuous global optimization and constraint satisfaction. Acta Numerica, 13: 271-369. Cambridge University Press, UK.

[3] Locatelli, M., Schoen, F. (Eds.). (2013). Global Optimization: Theory, Algorithms, and Applications. Philadelphia. In MOS-SIAM Series on Optimization, Society for Industrial and Applied Mathematics, p. 666. 

[4] Floudas, C.A., Pardalos, P.M. (2014). Recent Advances in Global Optimization. In Princeton Series in Computer Science, Princeton University Press, USA, p. 644.

[5] Liberti, L., Maculan, N. (Eds.). (2006). Global Optimization: From Theory to Implementation. New York: Springer Science & Business Media, p. 428.

[6] Polyak, B.T. (1983). Introduction to Optimization. Optimization Software, Inc., Publications Division, New York, p. 384.

[7] Polyak, B.T. (1963). Gradient methods for minimizing functionals, solving equations and inequalities. USSR Computational Mathematics and Mathematical Physics, 4(6): 17-32. https://doi.org/10.1016/0041-5553(64)90079-5

[8] Ali, M., Törn, A., Viitanen, S. (1999). Stochastic global optimization: problem, classes and solution techniques. Journal of Global Optimization, 14(4): 437-447. http://doi.org/10.1023/A:1008395408187

[9] Mockus, J. (1994). Application of Bayesian approach to numerical methods of global and stochastic optimization. Journal of Global Optimization, 4: 347-365. https://doi.org/10.1007/BF01099263

[10] Rinnooy Kan, A.H.G., Timmer, G.T. (1987). Stochastic global optimization methods part I: Clustering methods. Mathematical Programming 39: 27-56. https://doi.org/10.1007/BF02592070

[11] Kalos, M.H., Whitlock, P.A. (2008). Monte Carlo Methods. Wiley-VCH Verlag GmbH & Co. KGaA.

[12] Chew, S.H., Zheng, Q. (2012). Integral Global Optimization: Theory, Implementation and Applications. Springer, Berlin, Heidelberg.

[13] Zheng, Q., Zhuang, D.M. (1995). Integral global minimization: Algorithms, implementations and numerical tests. Journal of Global Optimization, 7: 421-454. https://doi.org/10.1007/BF01099651

[14] Tutkusheva, J.S. (2022). Application of the method of dividing a segment in half in global optimization based on an auxiliary function. Abai Kazakh National Pedagogical University Bulletin, Series of Physics & Mathematical Sciences, 79(3): 39-45. https://doi.org/10.51889/2022-3.1728-7901.05

[15] Kuznetsov, A.V., Ruban A.I. (2010). Search for the main minima of multiextremal functions with active consideration of inequality constraints. Journal of Siberian Federal University, Engineering & Technologies, 3: 335-346.

[16] Rouban, A. (2013). Global optimization method based on the selective averaging coordinate with restrictions. Vestnik Tomskogo Gosudarstvennogo Universiteta-Upravlenie Vychislitelnaja Tehnika I Informatika-Tomsk State University Journal of Control and Computer Science, 22(1): 114-123. https://doi.org/10.17223/19988605/50/6

[17] Tutkusheva, Zh.S., Ramazanov, M.D. (2020). Calculation of the coordinates of the global minimum of an arbitrary smooth function of several variables. Bulletin of KazNITU. Physical and Mathematical Sciences, 3(139): 662-666.

[18] Kaidassov, Z., Tutkusheva, Z.S. (2021). Algorithm for calculating the global minimum of a smooth function of several variables. Mathematical Modeling of Engineering Problems, 8(4): 591-596. https://doi.org/10.18280/mmep.080412 

[19] Sobolev, S.L., Vaskevich, V.L. (1996). Cubature Formula. Novosibirsk: Sobolev Institute of Mathematics, p. 484. (in Russian). 

[20] Ramazanov, M.D. (2010). Theory of lattice cubature formulas with a bounded boundary layer. Ufa Mathematical Journal, 2(3): 63-82.