Algorithm for Calculating the Global Minimum of a Smooth Function of Several Variables

Zhetkerbay Kaidassov, Zhailan S. Tutkusheva

Department of Mathematics, K. Zhubanov Aktobe Regional University, 34 A. Moldagulova Ave., Aktobe 030000, Republic of Kazakhstan

Corresponding Author Email: kaidassov6686@kaiost.cn

Pages: 591-596 | DOI: https://doi.org/10.18280/mmep.080412

Received: 14 July 2021 | Revised: 28 July 2021 | Accepted: 10 August 2021 | Available online: 31 August 2021

© 2021 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Every year the interest of theorists and practitioners in optimisation problems grows, and extreme problems arise in all branches of science. Local optimisation problems are well studied, and constructive methods exist for their solution. Methods for global optimisation, however, do not yet meet practical requirements; the search for the global minimum therefore remains one of the major challenges of computational and applied mathematics. This study discusses the high-precision search for the global minimum of multidimensional and multiextremal problems. Mechanical quadrature formulas, that is, formulas for approximate integration, were applied to calculate the integrals. Of all the approximate integration formulas, the Sobolev lattice cubature formulas with a regular boundary layer were chosen; in multidimensional examples, the Sobolev formulas are optimal. Computational experiments were carried out in the C++ programming language. Based on these experiments, a new algorithm is proposed, and calculations of the global minimum in three-dimensional space are described on specific examples. The experiments show that the proposed algorithm handles multiextremal problems in roughly the same time as convex ones.

Keywords: 

cubature formulas, absolute minimum, global minimum, extreme problem, optimisation problem

1. Introduction

Nowadays, global optimisation algorithms are widely used in various fields of science and technology, in particular in situations where the best value of an objective function evaluating the quality of a decision is required. Due to the importance of the problem, a great number of algorithms and methods for solving multiextremal optimisation problems have been developed. Today the development of global optimisation methods is driven by advances in electronic computing and is largely associated with the availability of high-performance parallel computer systems; many different approaches to parallelising global optimisation algorithms have appeared. However, in publications devoted to this topic, the problem of overcoming the “curse of dimensionality” in multidimensional global optimisation is rarely discussed [1].

The search and definition of the algorithm for calculating the global minimum of a smooth function of many variables suggests the need to solve one of the most common problems in applied mathematics, namely, the problem of the multiextremal nature of the behaviour of objective functions. Interest in solving such problems is determined both by a significant number of extreme problems that require timely resolution, and by the accelerated development of computing power that can provide timely and high-quality resolution of such problems.

In this context, the objective functions, with the constraints imposed on them, are presented in the form of closed systems; they are difficult to differentiate and too complex to compute. In addition, the constraints imposed on the objective function are often a significant problem in themselves; in particular, this concerns constraints that define a feasible region of complex shape or a narrow region in the form of a bundle. For this reason, the development of effective algorithms for finding the global minimum of a function of many variables, and of their software implementations, is an urgent area of modern scientific research [2].

In general, many studies are devoted to the problems of finding the extreme values of functions of one and many variables, covering various theoretical and applied aspects of the issues under consideration [3]. This study proposes a method for calculating the global minimum of a smooth function of many variables. The method consists of two stages: the first finds the value of the global minimum, and the second determines the coordinates of the global minimum [4, 5]. A combination of sequences of actions within these two stages provides the desired result. The results of this study can find wide practical use in various fields of science and technology, wherever an accurate calculation of the global minimum of a function of many variables is required.

2. Materials and Methods

In modern mathematics, various methods for finding the global minimum have been proposed. If the function is given on a grid of nodes $\left\{\varepsilon k \mid k \in Z^{n}\right\}$, it would be possible to enumerate the values of the function at these nodes. But for a small error $\varepsilon$ and a large dimension $n$ of the arguments, such an enumeration requires at least $\varepsilon^{-n}$ operations and is therefore infeasible (here $\varepsilon$ is the calculation accuracy). For example, $\varepsilon=10^{-8}$ and $n=5$ lead to a computation volume of the order of $10^{40}$, which is impossible on any existing multiprocessor system [6, 7]. If the function is convex, it suffices to apply the well-known steepest descent method. But there are certain difficulties here as well: if the step is chosen small (to ensure convergence), the method converges slowly, while increasing the step (to accelerate convergence) can lead to divergence of the method. A constructive algorithm operating in a more general case is shown in the works [8, 9].

The algorithm calculates multiple integrals [10]. Since ancient times, the rectangle, trapezium and Simpson formulas have been used to calculate integrals. These formulas give a good approximation for integrals of smooth functions with bounded derivatives of the 2nd or 4th order. More complex formulas (Gregory, Gauss, Chebyshev) use the boundedness of higher-order derivatives to obtain a more accurate result. For large-scale numerical calculations it becomes useful to optimise the process of approximate integration itself; if the calculation is performed using approximate integration formulas, this means optimising the formulas themselves. This problem is especially important for functions of several variables, where each decrease in the lattice spacing increases the number of nodes used [11, 12].

Next, the study formulates the global optimisation problem. Let $f: Q \rightarrow R$ be a continuously differentiable function (1):

$f(x)=f\left(x_{1}, \ldots, x_{n}\right), \quad x \in Q=\left\{x \in R^{n} \mid a_{j} \leq x_{j} \leq b_{j},\ 1 \leq j \leq n\right\}$,                   (1)

The set of feasible solutions $Q$ is an $n$-dimensional cube. It is required to find the coordinates and value of the global minimum point $(\hat{x} ; \hat{y})$, where (2)-(3):

$\hat{y}=\operatorname{globmin}_{x \in Q} f(x)$           (2)

$\hat{x}=\operatorname{argglobmin}_{x \in Q} f(x)=\arg \hat{y}$              (3)

The algorithm for finding the global minimum point consists of two stages: the first finds the value of the global minimum (2), and the second finds the coordinates of the global minimum (3). To this end, an indicator (defining) function is constructed (4):

$g(\alpha)=\int_{Q}[|f(x)-\alpha|-(f(x)-\alpha)]^{m} d x, g \in W_{p}^{m}$                  (4)

that is:

$g(\alpha)=\int_{a_{1}}^{b_{1}} \ldots \int_{a_{n}}^{b_{n}}\left[\left|f\left(x_{1}, \ldots, x_{n}\right)-\alpha\right|-\left(f\left(x_{1}, \ldots, x_{n}\right)-\alpha\right)\right]^{m} d x_{1} \ldots d x_{n}$                   (5)

Eq. (5) detects the first touching of the hyperplane $y=\alpha$ with the given function $f(x)=f(x_1,\ldots,x_n)$ from below. The level of the first contact of the plane with this function is the minimum value of the function; for clarity, note $\hat{y} \approx \hat{\alpha}$. Depending on the value of $\alpha$, the indicator function takes a positive value or zero. Indeed (written for one variable for brevity), from

$-|f(x)-\alpha| \leq f(x)-\alpha \leq|f(x)-\alpha|$

it follows that

$-\int_{a}^{b}|f(x)-\alpha| d x \leq \int_{a}^{b}(f(x)-\alpha) d x \leq \int_{a}^{b}|f(x)-\alpha| d x$.

Adding $-\int_{a}^{b}(f(x)-\alpha) d x$ to each part of the inequality gives

$0 \leq \int_{a}^{b}|f(x)-\alpha| d x-\int_{a}^{b}(f(x)-\alpha) d x$.

Hence the function (4) takes a positive value or zero depending on $\alpha$: the indicator function (4) is positive if $f(x)<\alpha$ somewhere in $Q$, and equals zero if $f(x) \geq \alpha$ everywhere in $Q$.
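To make the construction concrete, below is a minimal C++ sketch of the indicator function (4) for a function of two variables. It deliberately replaces the Sobolev lattice cubature used in the paper with a plain midpoint rule on a uniform grid; the function name, the grid size N, the default exponent m and all other parameters are illustrative assumptions, not the paper's code.

```cpp
#include <cmath>
#include <functional>

// Indicator function (4) for a function of two variables on [a1,b1]x[a2,b2],
// approximated here with a plain midpoint rule on an N x N grid (the paper
// itself uses Sobolev lattice cubature formulas with a regular boundary layer).
double indicator(const std::function<double(double, double)>& f,
                 double alpha, double a1, double b1, double a2, double b2,
                 int N = 200, int m = 2) {
    const double hx = (b1 - a1) / N, hy = (b2 - a2) / N;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            const double x = a1 + (i + 0.5) * hx;
            const double y = a2 + (j + 0.5) * hy;
            const double d = f(x, y) - alpha;
            // Integrand [|f - alpha| - (f - alpha)]^m: zero where f >= alpha,
            // strictly positive where f < alpha.
            sum += std::pow(std::fabs(d) - d, m);
        }
    }
    return sum * hx * hy;
}
```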

3. Results and Discussion

For clarity, the study analyses the operation of the indicator function using the graph of a function of a single variable (Figure 1). The graph of the objective function $y=f(x)$ is shown in red. By choosing values of $\alpha$, it is necessary to find the tangent at the point of the global minimum (Figure 1).

Figure 1. Single variable function graph

Initial values $\alpha_{1}$ and $\alpha_{2}$ are chosen arbitrarily such that $g\left(\alpha_{1}\right)=0$ and $g\left(\alpha_{2}\right)>0$; therefore, $\hat{\alpha} \in\left[\alpha_{1} ; \alpha_{2}\right]$.

If the indicator function takes the value zero, $g\left(\alpha_{1}\right)=0$, then the blue line $y=\alpha_{1}$ passes below the given function or touches it from below. Indeed, in Eq. (5) the modulus is then expanded with a positive sign and the expression becomes zero. If the indicator function takes a positive value, $g\left(\alpha_{2}\right)>0$, then the green line $y=\alpha_{2}$ intersects the given function or passes above it. The next value of $\alpha$ is chosen as the arithmetic mean of the numbers $\alpha_{1}$ and $\alpha_{2}$ (6):

$\alpha_{3}=\frac{\alpha_{1}+\alpha_{2}}{2}$,        (6)

If $g\left(\alpha_{3}\right)>0$, then the purple line $y=\alpha_{3}$ intersects the given function, and the value $\hat{\alpha}$ is then sought in the interval $\left[\alpha_{1} ; \alpha_{3}\right]$. Thus, the search for the value of $\alpha$ continues in intervals $\left[\alpha_{k} ; \alpha_{k+1}\right]$, where $g\left(\alpha_{k}\right)=0$ and $g\left(\alpha_{k+1}\right)>0$. The iteration continues until (7) holds:

$\varepsilon_{f} \geq \alpha_{k+1}-\alpha_{k}$,            (7)

Then, the right end of the interval is taken as the value of the global minimum (8):

$\hat{\alpha}=\alpha_{k+1} \approx \hat{y}$,        (8)
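A minimal C++ sketch of this bisection on $\alpha$, Eqs. (6)-(8), is given below. It reuses the illustrative indicator() function from the previous sketch; the threshold tol, which stands in for the exact test $g(\alpha)=0$ in floating point, is an assumption of the sketch.

```cpp
#include <functional>

// Bisection on alpha, Eqs. (6)-(8): shrink [lo, hi], where g(lo) = 0 and
// g(hi) > 0, until hi - lo <= eps_f; hi then approximates the global minimum
// value. Assumes the indicator() sketch above.
double minimum_value(const std::function<double(double, double)>& f,
                     double lo, double hi, double a1, double b1,
                     double a2, double b2,
                     double eps_f = 1e-10, double tol = 1e-14) {
    while (hi - lo > eps_f) {
        const double mid = 0.5 * (lo + hi);              // Eq. (6)
        if (indicator(f, mid, a1, b1, a2, b2) > tol)
            hi = mid;   // plane y = mid cuts the graph: the minimum is below mid
        else
            lo = mid;   // plane y = mid lies below the graph: the minimum is above
    }
    return hi;          // right end of the interval, Eq. (8)
}
```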

The first part of the task is now complete: the value of the global minimum has been found. We proceed to the second part, where the coordinates $x_{1}, \ldots, x_{n}$ of the point at which the global minimum value $\hat{\alpha}$ is attained must be found.

For the function $f(x)=f\left(x_{1}, \ldots, x_{n}\right)$, the coordinates are sought in the $n$-dimensional cube $Q$. Divide $Q$ into $2^{n}$ parts, that is, in half along each coordinate; $2^{n}$ $n$-dimensional cubes are obtained. The coordinates of the global minimum are also sought using the indicator function, only the integral is now calculated over the small cubes obtained by partitioning $Q$. In at least one of the cubes, the indicator function, now computed over that sub-cube, will be positive for $\alpha=\hat{\alpha}$. This cube is taken as the new search region, and the algorithm described above is applied again, until a cube is obtained whose edge does not exceed $\varepsilon_{x}$ (9):

$\varepsilon_{x} \geq x_{k+1}-x_{k}$,               (9)

The centre of this small cube gives the coordinates of the global minimum (10):

$\hat{x} \approx \frac{x_{k+1}+x_{k}}{2}$,                   (10)
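The halving of the search cube can be sketched in the same illustrative style, again for n = 2 and reusing the hypothetical indicator(); the function and parameter names are assumptions of the sketch.

```cpp
#include <array>
#include <functional>

// Stage two, Eqs. (9)-(10), sketched for n = 2: repeatedly split the current
// square into 2^n = 4 halves and keep one in which the indicator, computed
// over that sub-square with alpha = alpha_hat, stays positive; the centre of
// the final small square approximates the coordinates of the global minimum.
std::array<double, 2> minimum_point(const std::function<double(double, double)>& f,
                                    double alpha_hat,
                                    double a1, double b1, double a2, double b2,
                                    double eps_x = 1e-6, double tol = 1e-14) {
    while (b1 - a1 > eps_x || b2 - a2 > eps_x) {
        const double mx = 0.5 * (a1 + b1), my = 0.5 * (a2 + b2);
        // The four sub-squares produced by halving each coordinate.
        const double xs[2][2] = {{a1, mx}, {mx, b1}};
        const double ys[2][2] = {{a2, my}, {my, b2}};
        bool found = false;
        for (int i = 0; i < 2 && !found; ++i)
            for (int j = 0; j < 2 && !found; ++j)
                if (indicator(f, alpha_hat, xs[i][0], xs[i][1],
                              ys[j][0], ys[j][1]) > tol) {
                    a1 = xs[i][0]; b1 = xs[i][1];   // minimiser lies here
                    a2 = ys[j][0]; b2 = ys[j][1];
                    found = true;
                }
        if (!found) break;                          // numerical tolerance exhausted
    }
    return {0.5 * (a1 + b1), 0.5 * (a2 + b2)};      // Eq. (10)
}
```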

The value and the coordinates of the global minimum have now been found. Next, the study estimates how many operations this requires. To find the value of the global minimum on the interval $\left[\alpha_{1} ; \alpha_{2}\right]$ by the described algorithm, $k_{f}$ evaluations of the indicator function are required, where (11):

$k_{f} \approx\left[\log _{2}\left(\frac{\alpha_{2}-\alpha_{1}}{\varepsilon_{f}}\right)\right]+2$,                (11)
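As a worked check of Eq. (11), reading the brackets as rounding up (an interpretation): for the segment [0; 1], so that $\alpha_{2}-\alpha_{1} \leq 1$, and $\varepsilon_{f}=10^{-6}$,

$k_{f} \approx\left\lceil\log _{2} \frac{1}{10^{-6}}\right\rceil+2=\lceil 19.93\rceil+2=22$.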

The number of evaluations grows logarithmically with $\frac{1}{\varepsilon}$; such growth is much slower than in the enumeration method. To calculate the value of the global minimum on the segment [0; 1] with accuracy $\varepsilon \geq 10^{-6}$ and with two variables, the 22 evaluations of $g(\alpha)$ obtained above are enough to reach the required accuracy. To find the coordinates of the global minimum with accuracy $\varepsilon_{x}$ in the $n$-dimensional cube $Q$ by the described algorithm, the required number of evaluations of the indicator function is given by Eq. (12):

$k_{x} \approx 2^{n} \cdot \log _{2}\left(\frac{b-a}{\varepsilon_{\bar{x}}}\right)$                   (12)

Thus, the number of evaluations grows logarithmically with increasing $\frac{1}{\varepsilon}$. Figure 2 compares the growth of the number of calculations in the enumeration method ($\varepsilon^{-n}$, blue graph) with the growth in the proposed method (Eq. (13), red graph) as $\frac{1}{\varepsilon}$ grows, for functions of two variables.

$k=k_{f}+k_{x} \approx \log _{2}\left(\frac{\alpha_{2}-\alpha_{1}}{\varepsilon_{f}}\right)+2+2^{n} \cdot \log _{2}\left(\frac{b-a}{\varepsilon_{\bar{x}}}\right)$,     (13)

Here $\varepsilon_{f}=\varepsilon_{\bar{x}} \geq 10^{-6}$. Such growth in the number of calculations is much slower than in the enumeration method, where the more variables there are, the faster the number of calculations tends to infinity.

In optimisation theory, there are test functions that serve as an artificial landscape [1]. They are useful for evaluating the performance of optimisation algorithms. The new optimisation method was tested on dozens of test functions; the computational experiments show high precision in a small number of iterations. The study considers three of them and compares the reference values with the results of the computational experiments. Figure 3 shows a graph of the multiextremal Ackley function [8] in three-dimensional space. The objective function is (14):

$f(x, y)=-20 e^{-0.2 \sqrt{0.5\left(x^{2}+y^{2}\right)}}-e^{0.5(\cos (2 \pi x)+\cos (2 \pi y))}+e+20, \quad-5 \leq x, y \leq 5$     (14)

It has one global minimum point. The reference value of the global minimum point of the Ackley function f(0;0)=0 (Figure 3).
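For reproducibility, a direct C++ transcription of (14) could read as follows (the function name is illustrative):

```cpp
#include <cmath>

// Ackley test function (14); reference global minimum: f(0, 0) = 0.
double ackley(double x, double y) {
    const double pi = std::acos(-1.0);
    return -20.0 * std::exp(-0.2 * std::sqrt(0.5 * (x * x + y * y)))
           - std::exp(0.5 * (std::cos(2.0 * pi * x) + std::cos(2.0 * pi * y)))
           + std::exp(1.0) + 20.0;
}
```

With the earlier sketches, the experiment amounts to calls such as minimum_value(ackley, -1.0, 25.0, -5, 5, -5, 5) followed by minimum_point(), assuming the bracket [-1; 25] contains the minimum value.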

Figure 2. Growth in the number of calculations in the enumeration method (blue graph) and in the proposed method (red graph)

Figure 3. Ackley function graph

The computational experiments show the following results: $\hat{f}=10^{-10} \approx 0$; $\hat{x} \in[-0.0000000001 ; 0]$, $\hat{x}=\frac{-0.0000000001+0}{2}=-0.00000000005 \approx 0$; $\hat{y} \in[-0.0000000001 ; 0]$, $\hat{y}=\frac{-0.0000000001+0}{2}=-0.00000000005 \approx 0$.

The accuracy of the value of the global minimum reaches $\varepsilon_{f}=10^{-10}$; the accuracy of the coordinates of the global minimum, here $\varepsilon_{x}=10^{-10}$, depends on the shape of the “pit” in which the global minimum point is located. If the “pit” is narrow, the coordinate accuracy is high; if it is wide, the accuracy of the coordinates is lower. The study next considers an example in which the global minimum lies in a wide, almost flat “pit”.

The second test function is the Booth function (Figure 4). Its global minimum lies on an almost flat surface, which complicates the search for some methods [13-17]. The proposed method finds the global minimum of such functions with fairly good accuracy. The Booth function [18] is given by (15):

$f(x, y)=(x+2 y-7)^{2}+(2 x+y-5)^{2}, \quad-10 \leq x, y \leq 10$           (15)

The reference value of the global minimum point f(1;3)=0.
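A transcription of (15) in the same style (the name booth is illustrative):

```cpp
// Booth test function (15); reference global minimum: f(1, 3) = 0.
double booth(double x, double y) {
    const double u = x + 2.0 * y - 7.0;
    const double v = 2.0 * x + y - 5.0;
    return u * u + v * v;
}
```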

Figure 4. Booth function graph

The above method gives the following results: $\hat{f}=10^{-10} \approx 0$; $\hat{x} \in[0.9999847413 ; 1.0000228883]$, $\hat{x} \approx \frac{0.9999847413+1.0000228883}{2} \approx 1.0000038148 \approx 1$; $\hat{y} \in[2.9999923707 ; 3.0000305177]$, $\hat{y} \approx \frac{2.9999923707+3.0000305177}{2} \approx 3.0000114442 \approx 3$. The accuracy of the value of the global minimum reaches $\varepsilon_{f}=10^{-10}$, while the accuracy of the coordinates is $\varepsilon_{x}=10^{-5}$; $\varepsilon_{f}$ is higher than $\varepsilon_{x}$ because of the behaviour of the objective function in the vicinity of the minimum point.

Next, the study considers the four-extremal function from Kuznetsov and Ruban [19] (16):

$\begin{aligned} f(x, y) &=-3 e^{-3\left(|x-3|^{1.5}+|y|^{1.5}\right)} \\ &-5 e^{-2.5\left(|x+3|^{2.5}+|y|^{2.5}\right)} \\ &-7 e^{-\left(|x|^{1.2}+|y-3|^{1.2}\right)} \\ &-10 e^{-2\left(|x|^{2}+|y+3|^{2}\right)} \\ &-4 \leq x, y \leq 4, \end{aligned}$                  (16)

Kuznetsov and Ruban [19] search for the global minimum point by dividing the original region into subregions. Figure 5 shows the spatial view of function (16). The function has four local minima: $(-3 ; 0)$, $(0 ; 3)$, $(3 ; 0)$ and $(0 ;-3)$; of these, $(0 ;-3)$ is global.
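Function (16) can be transcribed in the same way (the name four_extremal is illustrative):

```cpp
#include <cmath>

// Four-extremal test function (16) from [19]; local minima near (-3,0),
// (0,3), (3,0) and (0,-3), of which (0,-3) is the global one.
double four_extremal(double x, double y) {
    using std::exp; using std::fabs; using std::pow;
    return -3.0  * exp(-3.0 * (pow(fabs(x - 3.0), 1.5) + pow(fabs(y), 1.5)))
           - 5.0 * exp(-2.5 * (pow(fabs(x + 3.0), 2.5) + pow(fabs(y), 2.5)))
           - 7.0 * exp(-(pow(fabs(x), 1.2) + pow(fabs(y - 3.0), 1.2)))
           - 10.0 * exp(-2.0 * (x * x + (y + 3.0) * (y + 3.0)));
}
```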

Figure 5. Function graph

Next, the study finds the global minimum using the indicator function (5). The calculations show the following result: $\hat{f} \approx-10.0013071522832$; $\hat{x} \in[-0.0000610352 ; 0]$, $\hat{x} \approx \frac{-0.0000610352+0}{2} \approx-0.0000305176 \approx 0$; $\hat{y} \in[-3 ;-2.9999389648]$, $\hat{y} \approx \frac{-3+(-2.9999389648)}{2} \approx-2.9999694824 \approx-3$.

Indeed, substituting the coordinates of the global minimum into function (16) gives $f(0 ;-3) \approx-10.001307152283$, a value equal to the one calculated using the indicator function. This further confirms that the proposed method works with sufficiently high precision: the global minimum value is calculated with high precision.

The arguments of the global minimum are obtained with relatively lower accuracy, since near the minimum the function is shaped like a parabolic surface [20-22]. In this algorithm, the main difficulty is the high-precision calculation of the indicator function (5): the integral must be calculated “in the best way”. To calculate $g(\alpha)$, the Sobolev lattice cubature formulas with a regular boundary layer [10] were applied. The approximate value of the integral (17) of a function of $n$ real variables, $x \in \Omega \subset R^{n}$, is usually given in the form of cubature formulas (here $x$ is an $n$-dimensional coordinate vector).

$I(\varphi)=\int_{\Omega} \varphi(x) d x$                 (17)

The approximate value of integral (17) is sought as a linear combination of the values of the function $\varphi(x)$ at $N$ points $x^{(1)}, x^{(2)}, \ldots, x^{(N)}$, called nodes; $N$ is the number of intervals into which each component of the vector $x$ is split within the limits of its variation (18):

$K_{N}(\varphi)=\sum_{k=1}^{N} C_{k} \varphi\left(x^{(k)}\right)$                 (18)

where $x^{(k)}$ are the nodes and $C_{k}$ the coefficients of the cubature formula.

The main task is to choose a law for determining the coefficients $\left\{C_{k}\right\}$ that provides the fastest approximation of the integral $I(\varphi)$ by the cubature formula $K_{N}(\varphi)$ as $N \rightarrow \infty$.

To do this, it is necessary to find a polynomial $P(x)$ of degree not higher than $m$ that coincides with the function $\varphi(x)$ at the given points (19):

$P\left(x^{(k)}\right)=\varphi\left(x^{(k)}\right), k=1,2, \ldots, N$                  (19)

The coefficients of the Sobolev formulas are found by solving the linear algebraic system (20):

$\left(a_{1}\; a_{2}\; \ldots\; a_{M}\right) \cdot\left(\begin{array}{cccc}1 & 1 & \ldots & 1 \\ x_{1} & x_{2} & \ldots & x_{M} \\ \vdots & \vdots & & \vdots \\ x_{1}^{M-1} & x_{2}^{M-1} & \ldots & x_{M}^{M-1}\end{array}\right)=\left(\varphi\left(x_{1}\right)\; \varphi\left(x_{2}\right)\; \ldots\; \varphi\left(x_{M}\right)\right)$                 (20)

The determinant of this matrix is the Vandermonde determinant (21) and is nonzero [2]:

$\Delta=\left|\begin{array}{cccc}1 & 1 & \ldots & 1 \\ x_{1} & x_{2} & \ldots & x_{M} \\ \ldots & \ldots & \ldots & \ldots \\ x_{1}^{M-1} & x_{2}^{M-1} & \ldots & x_{M}^{M-1}\end{array}\right|=\prod_{i>j}\left(x_{i}-x_{j}\right)$,              (21)

This matrix is poorly conditioned, so a 6×6 matrix was used in the calculations, for which the poor conditioning does not yet greatly affect the results (22):

$\left(\begin{array}{cccccc}1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 & 6 \\ 1 & 4 & 9 & 16 & 25 & 36 \\ 1 & 8 & 27 & 64 & 125 & 216 \\ 1 & 16 & 81 & 256 & 625 & 1296 \\ 1 & 32 & 243 & 1024 & 3125 & 7776\end{array}\right) \cdot\left(\begin{array}{c}a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \\ a_{5} \\ a_{6}\end{array}\right)=\left(\begin{array}{c}1 \\ 1 / 2 \\ 1 / 3 \\ 1 / 4 \\ 1 / 5 \\ 1 / 6\end{array}\right)$,            (22)

From here the coefficients $a_{k}$ are found (23):

$\left(\begin{array}{c}a_{1} \\ a_{2} \\ \vdots \\ a_{6}\end{array}\right)=W^{-1} \cdot\left(\begin{array}{c}1 \\ 1 / 2 \\ \vdots \\ 1 / 6\end{array}\right)$,                   (23)

where $W$ denotes the matrix of system (22).

To achieve good accuracy, it is necessary to find the inverse matrix in (23) in rational form. When the inverse matrix is sought using computer programmes, a matrix with truncated decimal entries is obtained [23]. Therefore, the required matrix was calculated manually (24):

$W^{-1}=\left(\begin{array}{cccccc}6 & -\frac{87}{10} & \frac{29}{6} & -\frac{31}{24} & \frac{1}{6} & -\frac{1}{120} \\ -15 & \frac{117}{4} & -\frac{461}{24} & \frac{137}{24} & -\frac{19}{24} & \frac{1}{24} \\ 20 & -\frac{127}{3} & 31 & -\frac{121}{12} & \frac{3}{2} & -\frac{1}{12} \\ -15 & 33 & -\frac{307}{12} & \frac{107}{12} & -\frac{17}{12} & \frac{1}{12} \\ 6 & -\frac{27}{2} & \frac{65}{6} & -\frac{95}{24} & \frac{2}{3} & -\frac{1}{24} \\ -1 & \frac{137}{60} & -\frac{15}{8} & \frac{17}{24} & -\frac{1}{8} & \frac{1}{120}\end{array}\right)$,                   (24)

The coefficients of the cubature formula are equal to (25):

$\left\{\begin{array}{l}C_{1}=a_{1}+a_{2}+a_{3}+a_{4}+a_{5}+a_{6}=1 \\ C_{2}=a_{2}+a_{3}+a_{4}+a_{5}+a_{6}=-\frac{2837}{1440} \\ C_{3}=a_{3}+a_{4}+a_{5}+a_{6}=\frac{5086}{1440} \\ C_{4}=a_{4}+a_{5}+a_{6}=-\frac{4896}{1440} \\ C_{5}=a_{5}+a_{6}=\frac{2402}{1440} \\ C_{6}=a_{6}=-\frac{475}{1440}\end{array}\right.$,                     (25)
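The coefficients in (25) can be cross-checked numerically. The sketch below is an illustration, not the paper's code: it uses double precision instead of the exact rational arithmetic described above, solves system (22) by Gaussian elimination, and prints the partial sums of Eq. (25).

```cpp
#include <cmath>
#include <cstdio>

// Cross-check of Eqs. (22)-(25): solve W a = b for the Vandermonde matrix W
// with nodes 1..6 and right-hand side b = (1, 1/2, ..., 1/6)^T, then form the
// partial sums C_k = a_k + ... + a_6.
int main() {
    const int M = 6;
    double A[M][M + 1];                        // augmented matrix [W | b]
    for (int i = 0; i < M; ++i) {
        for (int j = 0; j < M; ++j) A[i][j] = std::pow(j + 1.0, i);
        A[i][M] = 1.0 / (i + 1.0);
    }
    for (int p = 0; p < M; ++p)                // forward elimination
        for (int i = p + 1; i < M; ++i) {
            const double r = A[i][p] / A[p][p];
            for (int j = p; j <= M; ++j) A[i][j] -= r * A[p][j];
        }
    double a[M];
    for (int i = M - 1; i >= 0; --i) {         // back substitution
        a[i] = A[i][M];
        for (int j = i + 1; j < M; ++j) a[i] -= A[i][j] * a[j];
        a[i] /= A[i][i];
    }
    double C = 0.0, Cs[M];
    for (int k = M - 1; k >= 0; --k) { C += a[k]; Cs[k] = C; }  // suffix sums
    for (int k = 0; k < M; ++k)
        std::printf("C_%d = %+.10f\n", k + 1, Cs[k]);
    return 0;
}
```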

In this case, the formula for calculating the integral of a smooth function of several variables takes the form (26):

$\int_{Q} f(x) d x \approx h^{n} \sum_{k} C_{k} f(h k)$                       (26)

A computational experiment shows that calculating $g(\alpha)$ using the Sobolev formulas gives good accuracy. All computational experiments were carried out in the C++ programming language [24]. The above method is very economical and calculates with high precision. Efficiency of algorithms and high precision of calculations are of decisive importance for their use in solving problems of global optimisation of functions of many variables. The algorithms presented in this study are distinguished by their comparative simplicity of implementation, high precision and high performance on multiextremal objective functions. In addition, the method is easy for specialists in different fields to use. A promising direction for the further development of the considered method is the modification of the cubature formulas.

4. Conclusions

The study on the creation of an algorithm for calculating the global minimum of a smooth function of many variables has led to the following conclusions. The problem of developing effective algorithms for the global optimisation of functions of many variables is of particular importance for science and technology. Considerable experience has been accumulated in this area, and a large number of algorithms and methods for solving multiextremal optimisation problems have been developed. At the same time, overcoming the fundamental difficulty of global optimisation problems, the rapid growth of the number of calculations with the dimension of the optimised function, remains an open challenge.

The findings of this study are qualitative in terms of the effectiveness of the global optimisation method. The proposed method avoids the problems of dimensionality growth and multiextremality, and the experiments conducted clearly show that the algorithm overcomes them.

References

[1] Kovartsev, A.N., Popova-Kovartseva, D.A. (2011). On the question of the efficiency of parallel algorithms for global optimization of functions of many variables. Computer Optics, 2(35): 256-261.

[2] Kuznetsov, A.V., Ruban, A.I., Kuznetsov, A.V. (2010). Algorithms for global optimization of functions in the presence of inequality constraints. Siberian Federal University Bulletin, 8: 72-80.

[3] Ramazanov, M.D., Kaidasov, J., Tutkusheva, Z.S. (2020). Studying the effectiveness of a new algorithm with a defining function for finding the global minimum of a smooth function. Izvestiya NAS RK. Physics and Mathematics Series, 4(332): 95-102. https://doi.org/10.32014/2020.2518-1726.70

[4] Zhaba, V.I. (2017). Deuteron: Analytical forms of wave function and density distribution. Scientific Herald of Uzhhorod University. Series “Physics”, 42: 191-196. https://doi.org/10.24144/2415-8038.2017.42.191-195

[5] Mustafin, A. (2015). Coupling-induced oscillations in two intrinsically quiescent populations. Communications in Nonlinear Science and Numerical Simulation, 29(1-3): 391-399. https://doi.org/10.1016/j.cnsns.2015.05.019 

[6] Aizstrauta, D., Celmina, A., Ginters, E., Mazza, R. (2013). Validation of integrated acceptance and sustainability assessment methodology. Procedia Computer Science, 26: 33-40. https://doi.org/10.1016/j.procs.2013.12.005

[7] Kantarbayeva, A., Mustafin, A. (2020). A biologically inspired fluid model of the cyclic service system. Mathematical Modelling and Analysis, 25(4): 505-521. https://doi.org/10.3846/mma.2020.10801

[8] Ramazanov, M.D., Tutkusheva, Z.S. (2020). Calculation of the coordinates of the global minimum of an arbitrary smooth function of several variables. Bulletin of KazNTU, 3(139): 662-666.

[9] Tutkusheva, Z.S. (2018). Determination of the coordinates of the global minimum of an arbitrary smooth function. Modern Innovations, 4(32): 5-7. https://cyberleninka.ru/article/n/opredelenie-koordinat-globalnogo-minimuma-proizvolnoy-gladkoy-funktsii/viewer.

[10] Sobolev, S.L. (1974). Introduction to the Theory of Cubature Formulas. Nauka, Moscow.

[11] Kovartsev, A.N., Popova-Kovartseva, D.A., Abolmasov, P.V. (2013). Investigation of the efficiency of global parallel optimization of functions of several variables. Bulletin of the Nizhny Novgorod University named after N.I. Lobachevsky, 3(1): 252-261. http://www.vestnik.unn.ru/ru/nomera?anum=6023.

[12] Ackley’s Function. (2021). http://www.cs.unm.edu/~neal.holts/dga/benchmarkFunction/ackley.html.

[13] Wang, Y., Balakrishnan, S., Singh, A. (2019). Optimization of smooth functions with noisy observations: Local minimax rates. IEEE Transactions on Information Theory, 65(11): 7350-7366. https://doi.org/10.1109/TIT.2019.2921985

[14] Kyrylenko, V.K., Durkot, M.O., Pop, M.M., Pisak, R.P., Rubish, V.M. (2020). Research of chalcogenide materials for memory devices with random access based on phase transitions. Scientific Herald of Uzhhorod University. Series “Physics”, 47: 7-20. https://doi.org/10.24144/2415-8038.2020.47.7-20

[15] Swenson, B., Sridhar, A., Poor, H.V. (2020). On distributed stochastic gradient algorithms for global optimization. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8594-8598. https://doi.org/10.1109/ICASSP40776.2020.9054279

[16] Aragón Artacho, F.J., Vuong, P.T. (2020). The boosted difference of convex functions algorithm for nonsmooth functions. SIAM Journal on Optimization, 30(1): 980-1006. https://doi.org/10.1137/18M123339X

[17] Li, S., Li, Q.W., Zhu, Z.H., Tang, G.G., Wakin, M.B. (2020). The global geometry of centralized and distributed low-rank matrix recovery without regularization. IEEE Signal Processing Letters, 27: 1400-1404. https://doi.org/10.1109/LSP.2020.3008876

[18] Chen, Y., Sun, Y., Yin, W. (2019). Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers. Mathematical Programming, 176(1-2): 39-67. https://doi.org/10.1007/s10107-019-01397-w

[19] Kuznetsov, A.V., Ruban, A.I. (2010). Search for the main minima of multiextremal functions with active consideration of inequality constraints. Bulletin of the Siberian Federal University, 3: 335-346. http://elib.sfu-kras.ru/bitstream/handle/2311/1781/09_Kuznetsov.pdf;jsessionid=893E6DED6E5B680C3EF982EDF6DF0CB5?sequence=1.

[20] Zhao, M.H., Alimo, S.R., Beyhaghi, P., Bewley, T.R. (2019). Delaunay-based derivative-free optimization via global surrogates with safe and exact function evaluations. Proceedings of the IEEE Conference on Decision and Control, pp. 4636-4641. https://doi.org/10.1109/CDC40024.2019.9029996

[21] Zhang, X., Wang, L.X., Yu, Y.D., Gu, Q.Q. (2018). A primal-dual analysis of global optimality in nonconvex low-rank matrix recovery. 35th International Conference on Machine Learning, 80: 5862-5871. 

[22] Liang, J.T., Xia, W.J. (2020). Global exponential stabilization of nonlinear systems via width time-dependent periodically intermittent smooth control. Advances in Difference Equations, 2020: 576. https://doi.org/10.1186/s13662-020-02969-3

[23] Endres, S.C., Sandrock, C., Focke, W.W. (2018). A simplicial homology algorithm for Lipschitz optimization. Journal of Global Optimization, 72(1): 181-217. https://doi.org/10.1007/s10898-018-0645-y

[24] Sergienko, A.B. (2015). Test functions for global optimization. Reshetnev Siberian State University of Science and Technology, Krasnoyarsk. https://raw.githubusercontent.com/Harrix/HarrixTestFunctions/master/_HarrixTestFunctions.pdf.