OPEN ACCESS
The present study proposes an improved secant method for solving nonlinear equations f(x) = 0, where f is continuous. Starting from the basic secant method, additional conditions on the iterates are introduced, yielding a more efficient scheme. The modified method converges faster than the standard secant method, requiring fewer iterations. Numerical results confirm that the proposed method is effective compared to the standard secant method. The improvement rests on additional conditions derived from the standard secant method, which are responsible for reducing the number of iterations.
Keywords: nonlinear equation, secant method, modified secant method, numerical solution
The present work aims to improve solutions of nonlinear equations that require many iterations. The secant method is among the best, given its faster convergence rate than the Regula Falsi and Bisection methods. One formulation uses three successive iterates instead of two and achieves an order of convergence of 1.839 [1]. An improved Newton-Raphson method has been developed in which a definite integral is approximated by a trapezoidal area in place of a rectangular one [2]. A zero-finding method for nonlinear equations improves the order of convergence of the traditional iterative method [3]. The leapfrog Newton's method uses Newton's method as an intermediate step; at a simple root its order of convergence is cubic, and its computational efficiency is lower but still comparable to Newton's approach [4]. To approximate a locally unique solution of a nonlinear equation in Banach spaces, Lipschitz and center-Lipschitz conditions have been used in the convergence analysis instead of Lipschitz conditions alone [5]. An improved Regula Falsi method with an order of convergence of 3 combines the usual Regula Falsi method and a Newton-like method to solve f(x) = 0 [6]. A novel approach similar to the Secant method has also been proposed; its convergence analysis shows an asymptotic order of convergence of $(1+\sqrt{3})$ [7]. A modified secant method has been analysed, with an error analysis, under the hypothesis of Lipschitz-continuous second-order derivatives [8]. An improved Regula Falsi (IRF) method based on the classic Regula Falsi (RF) method has been tested on many numerical examples, and the results confirm that it performs well compared to the traditional Regula Falsi method [9].
Another novel class of spectral conjugate gradient algorithms has been studied to achieve high-order accuracy in predicting the second-order curvature [10]. To obtain convergence similar to Newton's approach without evaluating any derivatives, a generalization of the Secant method to semismooth equations has been suggested [11]. The Newton-Raphson method makes it clear that the correction needed to obtain the correct root value decreases as the derivative f'(x) increases [12]. To widen the Secant method's applicability, modifications have been made that replace the multiple matrix multiplications required at each step with the resolution of a linear system [13]. Another modification of the Secant method develops the inverse of the first-order divided differences of a function of several variables at two points [14]. Although the Newton-Raphson method converges quickly close to the root, its global characteristics are poor [15]. Several researchers have tried to reduce the number of iterations, and thereby the computational cost, of the Secant method. The present paper aims to find an improved method that is faster than the standard Secant method. To this end, some changes to the Secant method's standard conditions are required. These conditions are as follows:
Condition 1:  Either $f\left(x_a\right) * f\left(x_b\right)<0$ or $f\left(x_a\right) * f\left(x_b\right)>0$
where x_{a} and x_{b} are two initial guesses, and f(x_{a}) and f(x_{b}) are the corresponding function values. The signs of f(x_{a}) and f(x_{b}) may be opposite or identical. The Modified Secant method also satisfies this requirement.
Condition 2:  After making two initial estimates, x_{a} and x_{b}, the first root approximation is calculated using the standard Secant formula. If the value obtained is x_{1}, then x_{1} is a root only if f(x_{1}) = 0; otherwise, f(x_{1}) > 0 or f(x_{1}) < 0. The formula for the following iteration then uses two values: the new iterate, x_{1}, and the value immediately before it, i.e., x_{b} but not x_{a}.
The Modified Secant approach no longer enforces condition 2; indeed, the entire development rests on relaxing it. This paper develops the Modified Secant approach and shows the transformation of the standard Secant method into the Modified Secant method; the reader needs only an understanding of similar-triangle properties. Consider Figure 1, where x_{a} and x_{b} are the two initial guesses. Draw the chord joining the points f(x_{a}) and f(x_{b}); this chord produces two triangles, $\Delta\left(f\left(x_a\right), x_1, x_a\right)$ and $\Delta\left(f\left(x_b\right), x_1, x_b\right)$, whose similarity yields the standard Secant formula.
Figure 1. Function curve with two initial guesses i.e., x_{a} and x_{b}
Figure 1 shows that the two function values, f(x_{a}) and f(x_{b}), are of opposite signs.
The two most recent root approximations are used to find the next approximation in the standard Secant technique [16].
The standard Secant method uses the roots in succession to check the function value, always taking the two most recent values, x_{a} and x_{b}, to calculate a new approximation x_{1}. If, instead, the function value is emphasized by checking it after each new iteration using the Modified Secant method conditions (A and B), the number of iterations reduces significantly. The standard Secant formula for the 2^{nd} iteration, after the first iteration, is given below.
$x_2=\frac{x_b * f\left(x_1\right)-x_1 * f\left(x_b\right)}{f\left(x_1\right)-f\left(x_b\right)}$ (1)
The Modified Secant formula for the 2^{nd} iteration uses conditions (A and B), as follows:
If $\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{a} with x_{1} for the 2^{nd} iteration (A).
If $\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{b} with x_{1} for the 2^{nd} iteration (B).
The Modified Secant method is applied immediately after the first iteration, and the starting formulation remains the same for Regula Falsi, the Secant method, and the Modified Secant method. Let the two initial guesses be $x_a$ and $x_b$ (whether both function values are negative, both positive, or one of each has no effect on the initial formula) [17]. Employing the similar-triangle property of Figure 1, the first iteration formula is derived as follows:
Figure 2. Function curve with three roots, i.e., x_{a}, x_{b} and x_{1 }for finding x_{2}
$\begin{gathered}\frac{x_1-x_a}{-f\left(x_a\right)}=\frac{x_b-x_1}{f\left(x_b\right)} \\ f\left(x_b\right) * x_1-f\left(x_b\right) * x_a=-f\left(x_a\right) * x_b+f\left(x_a\right) * x_1 \\ f\left(x_b\right) * x_1-f\left(x_a\right) * x_1=-f\left(x_a\right) * x_b+f\left(x_b\right) * x_a \\ \left(f\left(x_b\right)-f\left(x_a\right)\right) * x_1=f\left(x_b\right) * x_a-f\left(x_a\right) * x_b \\ x_1=\frac{x_a * f\left(x_b\right)-x_b * f\left(x_a\right)}{f\left(x_b\right)-f\left(x_a\right)}\end{gathered}$ (2)
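The first-iteration formula of Eq. (2) can be checked numerically. Below is a minimal Python sketch (the function name `secant_first_iteration` is illustrative, not from the paper), applied to the test function f(x) = cos(x) - x*exp(x) used later in the paper:

```python
import math

def secant_first_iteration(f, x_a, x_b):
    """First approximate root from two initial guesses, per Eq. (2)."""
    return (x_a * f(x_b) - x_b * f(x_a)) / (f(x_b) - f(x_a))

# Test function used in the worked examples of this paper
f = lambda x: math.cos(x) - x * math.exp(x)
x1 = secant_first_iteration(f, 0.5, 1.0)
print(round(x1, 4))  # 0.5119
```

This reproduces the first iterate x_1 = 0.5119 reported in the worked example.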
For the second iteration, conditions (A and B) apply. Since the modified approach takes effect after the first iteration, it is important to illustrate the second iteration formula; refer to Figure 2.
If $\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{a} with x_{1} for the new iteration (A).
$\begin{gathered}\frac{x_2-x_a}{-f\left(x_a\right)}=\frac{x_1-x_2}{f\left(x_1\right)} \\ f\left(x_1\right) * x_2-f\left(x_1\right) * x_a=-f\left(x_a\right) * x_1+f\left(x_a\right) * x_2 \\ f\left(x_1\right) * x_2-f\left(x_a\right) * x_2=-x_1 * f\left(x_a\right)+f\left(x_1\right) * x_a \\ \left(f\left(x_1\right)-f\left(x_a\right)\right) * x_2=f\left(x_1\right) * x_a-x_1 * f\left(x_a\right) \\ x_2=\frac{x_a * f\left(x_1\right)-x_1 * f\left(x_a\right)}{f\left(x_1\right)-f\left(x_a\right)}\end{gathered}$ (3)
If $\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{b} with x_{1} for the new iteration (B).
$\begin{gathered}\frac{x_b-x_2}{f\left(x_b\right)}=\frac{x_1-x_2}{f\left(x_1\right)} \\ x_b * f\left(x_1\right)-x_2 * f\left(x_1\right)=f\left(x_b\right) * x_1-f\left(x_b\right) * x_2 \\ x_b * f\left(x_1\right)-f\left(x_b\right) * x_1=x_2 * f\left(x_1\right)-f\left(x_b\right) * x_2 \\ x_b * f\left(x_1\right)-f\left(x_b\right) * x_1=\left(f\left(x_1\right)-f\left(x_b\right)\right) * x_2 \\ x_2=\frac{x_b * f\left(x_1\right)-x_1 * f\left(x_b\right)}{f\left(x_1\right)-f\left(x_b\right)}\end{gathered}$ (4)
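Conditions (A) and (B) amount to keeping, of the two previous points, the one with the smaller absolute function value (subtracting abs(f(x_1)) from both sides leaves abs(f(x_a)) < abs(f(x_b))). A small Python sketch with illustrative naming:

```python
def choose_partner(x_a, x_b, f_a, f_b, f_1):
    """Pick which old point accompanies the new iterate x_1.
    Condition (A): |f(x_a)| - |f(x_1)| < |f(x_b)| - |f(x_1)|  -> keep x_a.
    Condition (B): otherwise                                  -> keep x_b.
    """
    if abs(f_a) - abs(f_1) < abs(f_b) - abs(f_1):
        return x_a, f_a  # condition (A)
    return x_b, f_b      # condition (B)

# Function values from the paper's worked example:
# f(x_a) = 0.0532, f(x_b) = -2.1779, f(x_1) = 0.01773
x_old, f_old = choose_partner(0.5, 1.0, 0.0532, -2.1779, 0.01773)
print(x_old)  # 0.5, i.e., condition (A) holds
```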
The standard Secant technique requires only one formula for the second iteration; the Modified Secant method, however, applies the two conditions (A and B) at the second iteration.
3.1 Proof of the modified secant method
The Regula Falsi, Secant, and Modified Secant methods all require two initial guesses to solve nonlinear equations. The modified procedure starts after the first iteration since it requires three function values to evaluate conditions (A and B).
The Secant method has the disadvantage that it does not always approximate the root more accurately with each iteration, i.e., drive the function value close to zero. Introducing conditions (A and B) eliminates this problem at a bare minimum of additional computational cost. Consider the second condition (mentioned in the introduction): for the second approximate root, the Secant approach uses x_{b} with x_{1}, but not x_{a} with x_{1}, and it provides no information about pairing x_{a} with x_{1}. The two conditions (A and B) are introduced to clarify this choice: they connect the new approximate root with whichever older root best fits the situation. Under condition (A), the left-hand side difference is less than the right-hand side difference; this distinction is significant because it reveals which roots are best selected after each iteration.
A straightforward illustration: consider the initial guesses x_{a} and x_{b}, and let f(x_{a}) = 2, f(x_{b}) = 3, and f(x_{1}) = 1.8. The Modified Secant technique then evaluates conditions (A and B) using f(x_{a}), f(x_{b}) and f(x_{1}) to select the pair for the next iteration, x_{2}, which helps drive the function value toward zero. The Modified Secant method stops once the difference between the left-hand and right-hand sides in either condition (A or B) is less than the prescribed error, which depends on the problem setting; in this way the conditions also serve as a stopping criterion.
Conditions (A and B), restated below, follow the Secant method requirement and define the Modified Secant method.
If $\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{a} with x_{1} for the 2^{nd} iteration. (A)
If $\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{b} with x_{1} for the 2^{nd} iteration. (B)
x_{a} = 1^{st} initial guess
x_{b} = 2^{nd} initial guess
x_{1} = 1^{st} approximate root after the first iteration using formula 2, given above.
f(x_{a}) = Function value of 1^{st} initial guess
f(x_{b}) = Function value of 2^{nd} initial guess
f(x_{1}) = Function value of 1^{st} approximate root
According to the Secant (or chord) method, the Modified Secant method meets the standard Secant method's criteria and adds the two conditions (A and B). The approach is more accurate than the standard Secant method, and the number of iterations is reduced. Because three roots are available, the information acquired from conditions (A and B) allows a choice between the two older approximate roots and the new approximate root when determining the subsequent approximate root.
3.2 Procedure for the modified secant method
The following are the steps to be considered in the proposed modified Secant method.
Step 1:  Consider two initial guesses, let's say, x_{-1} and x_{0}
Step 2:  Substitute x_{-1}, x_{0}, f(x_{-1}), f(x_{0}) into the general formula given below.
$x_{i+1}=\frac{x_{i-1} * f\left(x_i\right)-x_i * f\left(x_{i-1}\right)}{f\left(x_i\right)-f\left(x_{i-1}\right)}$ (5)
For the 1^{st} iteration, i = 0.
Step 3:  Use Conditions (A and B) for the 2^{nd} iteration.
If $\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{a} with x_{1} for the 2^{nd} iteration (A):
here, $x_a=x_{-1}$ and $x_b=x_0$:
$x_2=\frac{x_{-1} * f\left(x_1\right)-x_1 * f\left(x_{-1}\right)}{f\left(x_1\right)-f\left(x_{-1}\right)}$ (6)
If $\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{b} with x_{1} for the 2^{nd} iteration (B).
$x_2=\frac{x_b * f\left(x_1\right)-x_1 * f\left(x_b\right)}{f\left(x_1\right)-f\left(x_b\right)}$ (7)
Step 4:  Stop executing once the left-hand and right-hand side difference in either condition is less than the prescribed error; conditions (A and B) thus serve as the stopping criterion.
Step 5:  Repeat step 3 for the 3^{rd} iteration.
Whether condition (A) or condition (B) is met, use the following formula pattern:
$x_{\text {next}}=\frac{x_{\text {old}} * f\left(x_{\text {new}}\right)-x_{\text {new}} * f\left(x_{\text {old}}\right)}{f\left(x_{\text {new}}\right)-f\left(x_{\text {old}}\right)}$ (8)
where, $x_{\text {next}}=x_{i+1}$, $i=2$;
$x_{\text {old}}=x_{i-1}$, $i=2$ or $i=1$;
$x_{\text {new}}=x_i$, $i=2$.
The two values of $x_{\text {old}}$ at different $i$ represent the two old approximate roots, as per the applicability of conditions (A and B), while $x_{\text {new}}$ represents the new approximate root.
Subsequent iterations follow with $i = 3, 4, 5, \ldots, n$, where $n$ is a positive integer.
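The steps above can be sketched as a complete routine. Below is a minimal Python implementation under the paper's conditions (A and B); the function name, tolerance value, iteration cap, and the simple stopping test |f| < tol (a stand-in for the condition-difference criterion of Step 4) are illustrative choices, not prescribed by the paper:

```python
import math

def modified_secant(f, x_a, x_b, tol=1e-4, max_iter=50):
    """Modified Secant method (Steps 1-5 above).

    After the first iterate, conditions (A)/(B) pick, among the two
    previous points, the one with smaller |f| to pair with the newest
    iterate. Returns (root, number_of_iterations).
    """
    # Step 2: first iteration with the standard Secant formula, Eq. (5)
    x_new = (x_a * f(x_b) - x_b * f(x_a)) / (f(x_b) - f(x_a))
    n = 1
    while abs(f(x_new)) >= tol and n < max_iter:
        # Step 3: conditions (A)/(B) -- keep the old point with smaller |f|
        if abs(f(x_a)) - abs(f(x_new)) < abs(f(x_b)) - abs(f(x_new)):
            x_old = x_a              # condition (A)
        else:
            x_old = x_b              # condition (B)
        # Eq. (8): next approximate root from (x_old, x_new)
        x_next = (x_old * f(x_new) - x_new * f(x_old)) / (f(x_new) - f(x_old))
        x_a, x_b = x_old, x_new      # the two points actually used
        x_new = x_next
        n += 1
    return x_new, n

root, n = modified_secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(round(root, 4), n)  # root close to 2.0946 after a handful of iterations
```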
3.3 Flow chart for steps involved in the proposed modified method
Figure 3 shows the flow chart of the Modified Secant method.
Figure 3. Flow chart for modified secant method
Consider f(x) = cos(x) - x*exp(x); use the Modified Secant method to solve the problem.
Step 1: Consider two initial guesses, let's say, x_{-1} and x_{0}.
Let x_{-1} = 0.5, 1^{st} initial guess
x_{0} = 1, 2^{nd} initial guess
f(x_{-1}) = 0.0532
f(x_{0}) = -2.1779
Step 2: Substitute x_{-1}, x_{0}, f(x_{-1}), f(x_{0}) into the general formula given below.
$x_{i+1}=\frac{x_{i-1} * f\left(x_i\right)-x_i * f\left(x_{i-1}\right)}{f\left(x_i\right)-f\left(x_{i-1}\right)}$
For 1^{st} iteration, i=0
$\begin{aligned} x_1 & =\frac{x_{-1} * f\left(x_0\right)-x_0 * f\left(x_{-1}\right)}{f\left(x_0\right)-f\left(x_{-1}\right)} \\ x_1 & =0.5119 \text { and } f\left(x_1\right)=0.01773\end{aligned}$
Step 3:  Use Conditions (A and B) for the 2^{nd} iteration.
If $\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{a} with x_{1} for the 2^{nd} iteration (A).
With $x_a=x_{-1}$ and $x_b=x_0$, put $f\left(x_{-1}\right)=0.0532$ and $f\left(x_0\right)=-2.1779$ into condition (A) as follows:
$0.03547<2.16017$ (A) (Satisfied)
If $\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{b} with x_{1} for the 2^{nd} iteration (B).
Put $f\left(x_{-1}\right)=0.0532$ and $f(x_{0})=-2.1779$ into condition (B) as follows:
$2.16017<0.03547$ (B) (Not satisfied)
Apply condition (A) with the following formula:
$x_2=\frac{x_{-1} * f\left(x_1\right)-x_1 * f\left(x_{-1}\right)}{f\left(x_1\right)-f\left(x_{-1}\right)}$
x_{2} = 0.5178 and f(x_{2}) = -0.000129
Step 4:  Stop executing once the left-hand and right-hand side difference in either condition is less than the error; conditions (A and B) act as the stopping criterion.
If $\operatorname{abs}\left(f\left(x_a\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)<\operatorname{abs}\left(f\left(x_b\right)\right)-\operatorname{abs}\left(f\left(x_1\right)\right)$, use x_{a} with x_{1} for the new iteration (A).
Here, $x_a=x_0$, $x_b=x_1$, and $x_1$ is replaced by $x_2$:
$2.1779-0.000129<0.01773-0.000129$
$2.1777<0.01760$ (not satisfied, but $|f(x_2)|$ is negligible, so the iteration stops)
x_{2}=0.5178 is the equation's root.
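The two iterations of this worked example can be reproduced directly from Eqs. (2) and (3); variable names below are illustrative:

```python
import math

f = lambda x: math.cos(x) - x * math.exp(x)
xm1, x0 = 0.5, 1.0  # the two initial guesses, x_{-1} and x_0

# 1st iteration, Eq. (2)
x1 = (xm1 * f(x0) - x0 * f(xm1)) / (f(x0) - f(xm1))

# Condition (A) holds (|f(x_{-1})| < |f(x_0)|), so pair x_{-1} with x_1, Eq. (3)
x2 = (xm1 * f(x1) - x1 * f(xm1)) / (f(x1) - f(xm1))
print(round(x1, 4), round(x2, 4))  # 0.5119 0.5178
```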
Consider f(x) = cos(x) - x*exp(x); the same problem is now solved by the standard Secant method.
Let x_{-1} = 0.5, 1^{st} initial guess
x_{0} = 1, 2^{nd} initial guess
f(x_{-1}) = 0.0532
f(x_{0}) = -2.1779
i = 0 for the 1^{st} iteration, and the formula becomes as follows:
$x_1=\frac{x_{-1} * f\left(x_0\right)-x_0 * f\left(x_{-1}\right)}{f\left(x_0\right)-f\left(x_{-1}\right)}$
x_{1}=0.5119 and f(x_{1})=0.01773
i = 1 for the 2^{nd} iteration, and the formula becomes as follows:
$x_2=\frac{x_0 * f\left(x_1\right)-x_1 * f\left(x_0\right)}{f\left(x_1\right)-f\left(x_0\right)}$
$\begin{aligned} & x_0=1, f\left(x_0\right)=-2.1779, x_1=0.5119, f\left(x_1\right)=0.01773 \\ & x_2=0.5158 \text { and } f\left(x_2\right)=0.0059\end{aligned}$
i = 2 for the 3^{rd} iteration, and the formula becomes as follows:
$x_3=\frac{x_1 * f\left(x_2\right)-x_2 * f\left(x_1\right)}{f\left(x_2\right)-f\left(x_1\right)}$
$\begin{aligned} & x_3=0.5178 \text { and } f\left(x_3\right)=-0.000129 \\ & x_3=0.5178 \text { is the equation's root }\end{aligned}$
The number of iterations can be lowered by using the Modified Secant technique.
5.1 Comparison and validation 1
Consider f(x) = x^{3} - 2*x - 5
1^{st} initial guess = 2, 2^{nd} initial guess = 3
Final root = 2.0946
Modified Secant Method = 4 iterations
Secant Method = 5 iterations
Regula Falsi Method = 10 iterations
Bisection Method = 15 iterations
Newton Raphson Method = 2 iterations
Values obtained using the different root-finding methods are tabulated in Tables 1 and 2.
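The Newton Raphson column of Table 1 can be reproduced under the assumption that its starting guess is x_0 = 2 (the paper does not state it, but this choice matches the tabulated values):

```python
def newton_raphson(f, df, x, n_iter=2):
    """Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    iterates = []
    for _ in range(n_iter):
        x = x - f(x) / df(x)
        iterates.append(x)
    return iterates

f  = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2
print([round(x, 4) for x in newton_raphson(f, df, 2.0)])  # [2.1, 2.0946]
```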
Table 1. Comparison of various methods for a lower number of iterations

Iteration | Modified Secant | Secant | Regula Falsi | Bisection | Newton Raphson
1         | 2.0588          | 2.0588 | 2.05882      | 2.5000    | 2.1000
2         | 2.0966          | 2.0813 | 2.08126      | 2.2500    | 2.0946
3         | 2.0945          | 2.0948 | 2.08964      | 2.1250    |
4         | 2.0946          | 2.0945 | 2.09274      | 2.0625    |
5         |                 | 2.0946 | 2.09388      | 2.0938    |
6         |                 |        | 2.09431      | 2.1094    |
7         |                 |        | 2.09466      | 2.1016    |
8         |                 |        | 2.09452      | 2.0977    |
9         |                 |        | 2.09454      | 2.0957    |
10        |                 |        | 2.09455      | 2.0947    |
11        |                 |        |              | 2.0942    |
12        |                 |        |              | 2.0945    |
13        |                 |        |              | 2.0946    |
14        |                 |        |              | 2.0945    |
15        |                 |        |              | 2.0946    |
Figure 4. (a)-(d) Function vs root value for the compared methods
The figures above show the comparison of the root-finding methods.
Figure 4 (a) compares the Modified Secant method with the standard Secant, Regula Falsi, Newton Raphson and Bisection methods for a lower number of iterations.
Figure 4 (b) compares the Modified Secant method with the standard Secant, Regula Falsi and Newton Raphson methods.
Figure 4 (c) compares the Modified Secant method with the standard Secant and Regula Falsi methods.
Figure 4 (d) compares the Modified Secant and standard Secant methods.
From the present study, it is concluded that the Modified Secant method requires only four iterations, as can be seen in Figures 4 (a), (b), (c) and (d), compared with the standard Secant, Regula Falsi, and Bisection methods. As Figure 4 (a) shows, the Modified Secant method requires fewer iterations than those methods, though not fewer than the Newton Raphson method.
5.2 Comparison and validation 2
Consider f(x) = cos(x) - x*exp(x)
1^{st} initial guess = 0.5, 2^{nd} initial guess = 1
Final root = 0.5178
Modified Secant method = 2 iterations
Secant method = 3 iterations
Regula Falsi method = 8 iterations
Bisection method = 13 iterations
Newton Raphson method = 2 iterations
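The Bisection column of Table 2 can be reproduced assuming the bracket [0.5, 1] formed by the two initial guesses; a minimal sketch with illustrative naming (note that the third midpoint of this bracket is 0.5625):

```python
import math

def bisection(f, a, b, n_iter=4):
    """Bisection: halve the bracketing interval [a, b] at each step."""
    mids = []
    for _ in range(n_iter):
        m = (a + b) / 2
        mids.append(m)
        if f(a) * f(m) < 0:
            b = m  # sign change in [a, m]
        else:
            a = m  # sign change in [m, b]
    return mids

f = lambda x: math.cos(x) - x * math.exp(x)
print([round(m, 4) for m in bisection(f, 0.5, 1.0)])  # [0.75, 0.625, 0.5625, 0.5312]
```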
Table 2. Comparison of various methods for a lower number of iterations

Iteration | Modified Secant | Secant | Regula Falsi | Bisection | Newton Raphson
1         | 0.5119          | 0.5119 | 0.51193      | 0.7500    | 0.5180
2         | 0.5178          | 0.5159 | 0.51585      | 0.6250    | 0.5178
3         |                 | 0.5178 | 0.51713      | 0.5625    |
4         |                 |        | 0.51755      | 0.5312    |
5         |                 |        | 0.51769      | 0.5156    |
6         |                 |        | 0.51774      | 0.5234    |
7         |                 |        | 0.51775      | 0.5195    |
8         |                 |        | 0.51776      | 0.5176    |
9         |                 |        |              | 0.5186    |
10        |                 |        |              | 0.5181    |
11        |                 |        |              | 0.5178    |
12        |                 |        |              | 0.5177    |
13        |                 |        |              | 0.5178    |
The figures below show the comparison of the root-finding methods.
Figure 4 (e) shows the function value against the root value for the Modified Secant, standard Secant, Regula Falsi, Newton Raphson and Bisection methods for a lower number of iterations.
Figure 4 (f) shows the function value against the root value for the Modified Secant, standard Secant, Regula Falsi and Newton Raphson methods.
Figure 4 (g) shows the function value against the root value for the Modified Secant, standard Secant and Regula Falsi methods.
Figure 4 (h) shows the function value against the root value for the Modified Secant and standard Secant methods.
It can be seen from the figures below that the Modified Secant method takes two iterations, whereas the standard Secant method takes three; the same can be seen in Figures 4 (e), 4 (f) and 4 (g). The present study concludes that the Modified Secant method's conditions (A and B) select roots that yield the solution faster than the standard Secant method.
Figure 4. (e)-(h) Function vs root value for the compared methods
The present work shows that the Modified Secant approach locates roots in fewer iterations than the Secant method. The approach can be used to solve any nonlinear equation, whether algebraic, transcendental, or otherwise. Compared to the Secant, Regula Falsi, and Bisection methods, the approximate root is computed in fewer iterations. Conditions (A and B) are crucial to the Modified Secant method: the present work identifies these two conditions, shows that they satisfy the Secant method's requirements, and clarifies how, from the three roots available after the first iteration, two are chosen for the second and subsequent iterations. It is also concluded that the Modified Secant method requires less computational cost than the Bisection, Regula Falsi and standard Secant methods, but not less than the Newton-Raphson method.
[1] Tiruneh, A.T. (2019). A modified three-point secant method with improved rate and characteristics of convergence. https://doi.org/10.48550/arXiv.1902.09058
[2] Weerakoon, S., Fernando, T.G.I. (2000). A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters, 13(8): 87-93. https://doi.org/10.1016/S0893-9659(00)00100-2
[3] Grau-Sánchez, M., Díaz-Barrero, J.L. (2011). A technique to composite a modified Newton's method for solving nonlinear equations. http://arxiv.org/abs/1106.0996
[4] Kasturiarachi, A.B. (2002). Leap-frogging Newton's method. International Journal of Mathematical Education in Science and Technology, 33(4): 521-527. https://doi.org/10.1080/00207390210131786
[5] Magreñán, A., Argyros, I.K. (2016). New improved convergence analysis for the secant method. Mathematics and Computers in Simulation, 119: 161-170. https://doi.org/10.1016/j.matcom.2015.08.002
[6] Parida, P.K., Gupta, D.K. (2006). An improved Regula Falsi method for enclosing simple zeros of nonlinear equations. Applied Mathematics and Computation, 177(2): 769-776. https://doi.org/10.1016/j.amc.2005.11.034
[7] Wang, X., Kou, J., Gu, C. (2010). A new modified secant-like method for solving nonlinear equations. Computers and Mathematics with Applications, 60(6): 1633-1638. https://doi.org/10.1016/j.camwa.2010.06.045
[8] Wu, Q., Ren, H. (2006). Convergence ball of a modified secant method for finding zero of derivatives. Applied Mathematics and Computation, 174(1): 24-33. https://doi.org/10.1016/j.amc.2005.05.007
[9] Naghipoor, J., Ahmadian, S.A., Soheili, A.R. (2008). An improved Regula Falsi method for finding simple zeros of nonlinear equations. Applied Mathematical Sciences, 2(8): 381-386. https://profdoc.um.ac.ir/articles/a/1016057.pdf
[10] Livieris, I.E., Pintelas, P. (2013). A new class of spectral conjugate gradient methods based on a modified secant equation for unconstrained optimization. Journal of Computational and Applied Mathematics, 239(1): 396-405. https://doi.org/10.1016/j.cam.2012.09.007
[11] Amat, S., Busquier, S. (2003). A modified secant method for semismooth equations. Applied Mathematics Letters, 16(6): 877-881. https://doi.org/10.1016/S0893-9659(03)90011-5
[12] Longley, W.R. (1932). Review: J.B. Scarborough, Numerical Mathematical Analysis. https://projecteuclid.org/journals/bulletin-of-the-american-mathematical-society/volume-38/issue-5/Review-J-B-Scarborough-Numerical-Mathematical-Analysis/bams/1183495919.pdf, accessed on Jul. 20, 2022.
[13] Amat, S., Hernández-Verón, M.A., Rubio, M.J. (2014). Improving the applicability of the secant method to solve nonlinear systems of equations. Applied Mathematics and Computation, 247: 741-752. https://doi.org/10.1016/j.amc.2014.09.066
[14] Ezquerro, J.A., Grau, A., Grau-Sánchez, M., Hernández, M.A., Noguera, M. (2012). Analysing the efficiency of some modifications of the secant method. Computers and Mathematics with Applications, 64(6): 2066-2073. https://doi.org/10.1016/j.camwa.2012.03.105
[15] Kiusalaas, J. (2015). Numerical Methods in Engineering with MATLAB®. https://doi.org/10.1017/CBO9781316341599
[16] Grewal, B.S. (2019). Numerical Methods in Engineering and Science.
[17] Chapra, S.C., Canale, R.P. (2010). Numerical Methods for Engineers. McGraw-Hill, USA.