A Block Solver of Variable Step Variable Order for Stiff ODEs


Jimevwo Godwin Oghonyon, Temitope Abodunrin, Peter Oluwatomi Ogunniyi

Department of Mathematics, Covenant University, Ota 112104, Nigeria

Department of Physics, Covenant University, Ota 112104, Nigeria

Corresponding Author Email: godwin.oghonyon@covenantuniveristy.edu.ng

Pages: 675-682 | DOI: https://doi.org/10.18280/mmep.090314

Received: 4 September 2021 | Revised: 4 January 2022 | Accepted: 13 January 2022 | Available online: 30 June 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

A block solver of variable step variable order (BSVSVO) is proposed for stiff ordinary differential equations (ODEs). The block solver employs a variable step, variable order strategy to enhance performance on stiff ODEs, achieving faster convergence with better accuracy and a smaller maximum error. It is formulated via interpolation and collocation, with a power series as the basis function approximation. The principal local truncation error (PLTE) of the block solver is used to generate the convergence criteria. Selected theoretical properties are stated and analyzed. The block solver is implemented on selected test problems and compared with existing methods to demonstrate its convergence, high efficiency and accuracy, thereby ensuring a better maximum error.

Keywords: 

block solver, variable step, variable order, predictor-corrector algorithm program, stiff ODEs, principal local truncation error

1. Introduction

Consider a vector differential equation involving several dependent variables with respect to an independent variable t. Whenever the orders of magnitude of the derivatives of the dependent variables with respect to t (corresponding to their varying decay rates) differ greatly, the differential equation is called stiff, since it is difficult to solve numerically. For a stiff differential equation, the step size must be chosen with great care in order to avert numerical instability and obtain a reasonably accurate solution within a reasonable computation time. Stiff differential equations often arise in physical systems of major significance with widely different time constants. Time constant is the term used by engineers and physicists to denote the rate of decay. Stiff equations occur in large numbers in the fields of oscillations, chemical reactions, and electrical circuits [1-3].

We consider the stiff ordinary differential equations of the form [4]:

$y^{\prime}=f(x, y), y\left(x_{0}\right)=y_{0}$     (1)

We look for a method to solve problem (1) on the interval $a \leq x \leq b$, where $a$ and $b$ are finite, and we assume that $f$ satisfies the conditions of Theorem 1.1, which guarantee that (1) possesses a unique, continuously differentiable solution [5-8].

The numerical solution to (1) in the form of the BSVSVO is [2]:

$\sum_{i=1}^{3} y_{n+i}=y_{n}+h \sum_{i=0}^{4} \beta_{i} f_{n-i}$    (2)

 $\sum_{i=1}^{3} y_{n+i}=y_{n-2}+h \sum_{i=1}^{3} \beta_{i} f_{n+i}$    (3)

Eqns. (2) and (3) define the newly proposed BSVSVO, with the variable step and order of the block predictor-corrector method [2].

The system (1) is called linear with constant coefficients whenever [9, 10]:

$f(x, y)=A y+\varphi(x), \quad y, \varphi \in R^{m}$     (4)

where $A$ is a constant $m \times m$ matrix with assumed distinct eigenvalues $\lambda_{i} \in \mathbb{C}, i=1,2, \ldots, m$, and corresponding eigenvectors $c_{i} \in \mathbb{C}^{m}, i=1,2, \ldots, m$.

The general solution of (4) takes the form:

$y(x)=\sum_{i=1}^{m} k_{i} \exp \left(\lambda_{i} x\right) c_{i}+\psi(x)$     (5)

where the $k_{i}$ are arbitrary constants and $\psi(x)$ is a particular integral.

Whenever the eigenvalues of $A$ are $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{m}$, the system is called stiff if:

$$
\Re\left(\lambda_{i}\right)<0, \quad i=1,2, \ldots, m ; \qquad \max _{i}\left|\Re\left(\lambda_{i}\right)\right| \gg \min _{i}\left|\Re\left(\lambda_{i}\right)\right|
$$

The ratio $S=\frac{\max _{i}\left|\Re\left(\lambda_{i}\right)\right|}{\min _{i}\left|\Re\left(\lambda_{i}\right)\right|}$ is called the stiffness ratio of the system of equations [1, 11-13].
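As a small illustration of this definition, the stiffness ratio of a hypothetical 2×2 constant-coefficient system can be computed directly from the eigenvalues of A. The matrix entries below are illustrative and not taken from the paper:

```python
import math

# Illustrative sketch: stiffness ratio S for a 2x2 linear system y' = A y,
# with eigenvalues obtained from the characteristic polynomial
# lambda^2 - tr(A)*lambda + det(A) = 0.  The matrix below is hypothetical.
a11, a12, a21, a22 = -500.5, 499.5, 499.5, -500.5

tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = math.sqrt(tr * tr - 4.0 * det)   # real for this symmetric example
lam1 = 0.5 * (tr + disc)                # eigenvalue -1
lam2 = 0.5 * (tr - disc)                # eigenvalue -1000

# Stiff: both real parts negative, with widely separated magnitudes.
assert lam1 < 0 and lam2 < 0
S = max(abs(lam1), abs(lam2)) / min(abs(lam1), abs(lam2))
print(S)   # stiffness ratio: 1000.0
```

For this matrix the eigenvalues are −1 and −1000, so S = 1000 and the system is stiff by the criterion above.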

Authors have successfully introduced different methods to handle (4). Ibrahim et al. [14] worked on fixed-coefficient A(α)-stable block backward differentiation formulas for stiff ODEs. Ijam et al. [15] designed a diagonally implicit block backward differentiation formula with optimal stability properties for stiff ODEs. Ibrahim et al. [10] analyzed the stability of fully implicit block backward differentiation formulae. The study [13] proposed an adaptive-order block backward differentiation formula for stiff ODEs. Akinfenwa et al. [2] proposed an eighth-order backward differentiation formula with continuous coefficients for stiff ODEs. The variable step block backward differentiation formula (BDF) was implemented in Ref. [11] for solving first-order stiff ordinary differential equations (ODEs). For the successful treatment of (4), the stability property of the backward differentiation formula and the block backward differentiation formula (BBDF) is the major tool for ensuring efficiency and accuracy. Again, Ibrahim et al. [11] adopted the idea of a variable step size block backward differentiation formula, yielding better efficiency and accuracy due to the variable step size technique. On the other hand, the block solver relies on error control procedures (variable step, variable order) rather than stability properties to determine the efficiency and accuracy and to control the maximum error. Moreover, most implementations of the BDF and BBDF utilize a uniform step size, whereas the BSVSVO is executed by finding a suitable step size [16-27]. Yatim et al. [28] developed the stability region of two-point variable step block backward differentiation formulae.

Theorem 1.1

Suppose $f(x, y)$ is defined and continuous for all points $(x, y)$ in the region $D$ defined by $c \leq x \leq d$, $-\infty<y<\infty$, with $c$ and $d$ finite, and suppose there exists a constant $L$ such that, for all $x, y, y^{*}$ such that $(x, y)$ and $\left(x, y^{*}\right)$ are both in $D$:

$\left|f(x, y)-f\left(x, y^{*}\right)\right| \leq L\left|y-y^{*}\right|$       (6)

Then, whenever ρ is any given initial value, there exists a unique solution y(x) of the initial value problem (1), where y(x) is continuous and differentiable for all (x, y) in D.

The requirement (6) is called a Lipschitz condition, and the constant L a Lipschitz constant. These preconditions are summarized as follows:

  • f(x, y) is continuously differentiable with respect to y for all (x, y) in D.
  • f(x, y) satisfies a Lipschitz condition with respect to y for all (x, y) in D.
  • f(x, y) is continuous with respect to y for all (x, y) in D [4, 18, 19].

Definition 1.1

Consider a b-block, r-point method, where b defines the block length and h is the step length; each block then covers a time interval of length rh. If m=0, 1, 2, … denotes the block number and n=mr, then the b-block, r-point method can be written in the general form:

$Y_{m}=\sum_{s=1}^{b} A_{s} Y_{m-s}+h \sum_{s=1}^{b} B_{s} F_{m-s}$    (7)

where, $Y_{m}=\left[y_{n+1}, \ldots, y_{n+i}, \ldots y_{n+r}\right]^{T}, F_{m}=\left[f_{n+1}, \ldots, f_{n+i}, \ldots f_{n+r}\right]^{T}$.

$A_{s}$ and $B_{s}$ are $r \times r$ constant coefficient matrices [12].

The motivation of this study stems from [18, 19], which suggested the components of variable step, variable order with variable step size as essential to better efficiency and accuracy. Again, the bounded region of absolute stability (RAS) of the block solver, together with the extensive computational experience accumulated over a prolonged period of time, makes the block solver of variable step variable order (BSVSVO) central to high efficiency and accuracy. Thus, the BSVSVO possesses the advantage of satisfying the bound of convergence by finding a suitable step size for every loop carried out [8, 18, 19].

Over the years, the studies [10, 11, 13, 15, 28] suggested possible ideas to resolve (4). These ideas involve the stability of fully implicit block backward differentiation formulae; variable step size block backward differentiation; adaptive order of block backward differentiation formulas; a diagonally implicit block backward differentiation formula with optimal stability properties; and the stability region of two-point variable step block backward differentiation formulae. All of these ideas are fully developed to handle (4). Nevertheless, the BSVSVO, despite its bounded stability region, is built to solve (1), to compete with [10, 11, 13, 15, 28], and to yield better maximum errors. The BSVSVO is proposed to provide a better solution to (4). Again, the BSVSVO has the advantage of determining a suitable variable step size to satisfy every bound of convergence.

2. Formulation of the Method

The formulation of the proposed BSVSVO is carried out as a block predictor-corrector pair. In this process, the step and order (p*) of the block predictor method differ from the step and order (p) of the block corrector method; that is, p*>p for both step and order.

The BSVSVO is developed using yn as the point of interpolation and fn, fn-1, fn-2, fn-3 as the points of collocation for the block predictor method, while yn-2 is used as the point of interpolation and fn+1, fn+2, fn+3 as the points of collocation for the block corrector method. This is processed with the polynomial basis function of Eq. (8) [21, 23]:

$y(x)=\sum_{i=0}^{j} a_{i}\left(\frac{x-x_{n}}{h}\right)^{i}$     (8)

where the ai for i=0, 1, 2, 3, 4 are the unknown coefficients to be determined uniquely. Again, we assume that the function (8) satisfies (6). Thus, evaluating at the interpolation points xn and xn-2 yields:

$\begin{aligned} y\left(x_{n}\right) & \approx y_{n} \\ y\left(x_{n-2}\right) & \approx y_{n-2} \end{aligned}$      (9)

and the collocation points $x_{n-i}, i=0,1,2,3$ and $x_{n+i}, i=1,2,3$ give [21, 23, 25]:

$y^{\prime}\left(x_{n-i}\right) \approx f_{n-i}, \quad i=0,1,2,3$

$y^{\prime}\left(x_{n+i}\right) \approx f_{n+i}, \quad i=1,2,3$           (10)

We put together the interpolation and collocation conditions of Eqns. (9) and (10) to arrive at a system of equations of the form AX=B. We assume that the solution exists (the loop will converge) whenever the absolute value of each leading diagonal entry of the coefficient matrix A of the system AX=B is larger than the sum of the absolute values of the other entries in its row (diagonal dominance). Solving AX=B and substituting into (8) produces the continuous scheme of the BSVSVO:

$\begin{aligned} y(x)=y_{n}+\left[\left(\frac{\left(x-x_{n}\right)}{h}+\frac{11}{12}\left(\frac{x-x_{n}}{h}\right)^{2}\right.\right.\\ &+\frac{1}{3}\left(\frac{x-x_{n}}{h}\right)^{3} \\ &\left.+\frac{1}{24}\left(\frac{x-x_{n}}{h}\right)^{4}\right) h f_{n} \\ &+\left(-\frac{3}{2}\left(\frac{x-x_{n}}{h}\right)^{2}-\frac{5}{6}\left(\frac{x-x_{n}}{h}\right)^{3}\right.\\ &\left.-\frac{1}{8}\left(\frac{x-x_{n}}{h}\right)^{4}\right) h f_{n-1} \\ &+\left(\frac{3}{4}\left(\frac{x-x_{n}}{h}\right)^{2}+\frac{2}{3}\left(\frac{x-x_{n}}{h}\right)^{3}\right.\\ &\left.+\frac{1}{8}\left(\frac{x-x_{n}}{h}\right)^{4}\right) h f_{n-2} \\ &+\left(-\frac{1}{6}\left(\frac{x-x_{n}}{h}\right)^{2}-\frac{1}{6}\left(\frac{x-x_{n}}{h}\right)^{3}\right.\\ &\left.\left.-\frac{1}{24}\left(\frac{x-x_{n}}{h}\right)^{4}\right) h f_{n-3}\right] \end{aligned}$               (11)

$\begin{aligned} y(x)=y_{n-2}+\left[\left(\frac{37}{3}\right.\right.&+3 \frac{\left(x-x_{n}\right)}{h}-\frac{5}{4}\left(\frac{x-x_{n}}{h}\right)^{2} \\ &\left.+\frac{1}{6}\left(\frac{x-x_{n}}{h}\right)^{3}\right) h f_{n+1} \\ &+\left(-\frac{50}{3}-3 \frac{\left(x-x_{n}\right)}{h}\right.\\ &+2\left(\frac{x-x_{n}}{h}\right)^{2} \\ &\left.-\frac{1}{3}\left(\frac{x-x_{n}}{h}\right)^{3}\right) h f_{n+2} \\ &+\left(\frac{19}{3}+\frac{\left(x-x_{n}\right)}{h}-\frac{3}{4}\left(\frac{x-x_{n}}{h}\right)^{2}\right.\\ &\left.\left.+\frac{1}{6}\left(\frac{x-x_{n}}{h}\right)^{3}\right) h f_{n+3}\right] \end{aligned}$      (12)

Evaluating (11) and (12) at the selected points $x_{n+i}, i=1,2,3$ brings forth the BSVSVO as:

$y_{n+i}=y_{n}+h\left(\mu_{1} f_{n}+\mu_{2} f_{n-1}+\mu_{3} f_{n-2}+\mu_{4} f_{n-3}\right)$

$y_{n+i}=y_{n-2}+h\left(\beta_{1} f_{n+1}+\beta_{2} f_{n+2}+\beta_{3} f_{n+3}\right)$     (13)

where $\beta_{1}, \beta_{2}, \beta_{3}, \mu_{1}, \mu_{2}, \mu_{3}$ and $\mu_{4}$ are known coefficients [16, 21-25].

This section has dealt with the formulation of the method. The process is successfully carried out using a power series as the basis function approximation, together with interpolation and collocation, to formulate the BSVSVO. Furthermore, Eqns. (11) and (12) clearly demonstrate the introduction of variable step and variable order into the block solver, due to the different points of interpolation and collocation used.
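The construction above can be checked numerically. The following sketch (not the authors' code) sets up the interpolation/collocation system AX=B for the quartic predictor basis, solves it in exact rational arithmetic, and evaluates the resulting polynomial at x_{n+1}; the recovered weights of hf_n, …, hf_{n-3} agree with the coefficients of (11) evaluated there:

```python
from fractions import Fraction

def solve(A, b):
    # Plain fraction-exact Gauss-Jordan elimination with partial pivoting.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                fac = M[r][c] / M[c][c]
                M[r] = [x - fac * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def predictor_weights():
    # Build A X = B for the quartic basis y(x) = sum a_i t^i, t = (x - x_n)/h:
    # one interpolation row y(t=0) = y_n, and collocation rows
    # h*y'(t=-i) = h*f_{n-i}, i = 0..3, with y'(t) = (1/h) sum i a_i t^{i-1}.
    F = Fraction
    A = [[F(1), F(0), F(0), F(0), F(0)]]
    for i in range(4):
        t = F(-i)
        A.append([F(0)] + [F(j) * t ** (j - 1) for j in range(1, 5)])
    # Solve with unit right-hand side for each f_{n-j} (h = 1), then
    # evaluate y at t = 1: sum of the a_i is the weight of f_{n-j} in y_{n+1}.
    weights = []
    for j in range(4):
        b = [F(0)] * 5
        b[1 + j] = F(1)
        weights.append(sum(solve(A, b)))
    return weights

print(predictor_weights())
# Adams-Bashforth-type weights 55/24, -59/24, 37/24, -9/24, matching the
# coefficients of h f_n .. h f_{n-3} in (11) evaluated at x = x_{n+1}.
```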

2.1 Some investigation of the theoretical properties

Stability Polynomial of the Predictor-Corrector Methods.

The region of absolute stability of the predictor-corrector pair relies on two components.

(i) The linear multistep methods characterized by $(\rho_{1}, \sigma_{1})$ and $(\rho_{2}, \sigma_{2})$ for the predictor and corrector, respectively, and

(ii) The mode of implementation, i.e., either the $P(EC)^{u}$ mode or the $P(EC)^{u}E$ mode.

The stability polynomial for the predictor-corrector system in the $P(EC)^{u}E$ mode, $u \geq 1$, is given by Chase (1962) as:

$\begin{aligned} \Pi_{E}(r, z)=\rho_{2}(r) &-z \sigma_{2}(r)+M_{u}(z) \cdot\left[\rho_{1}(r)\right.\\ &\left.-z \sigma_{1}(r)\right] \end{aligned}$    (14)

where, 

$M_{u}(z)=\left(z \beta_{k}\right)^{u} \frac{1-z \beta_{k}}{1-\left(z \beta_{k}\right)^{u}}$       (15)

while that of the $P(EC)^{u}$ mode is obtained as [8]:

$\Pi_{C}(r, z)=\beta_{k} r^{k}\left[\rho_{2}(r)-z \sigma_{2}(r)\right]+M_{u}(z) \cdot\left[\rho_{1}(r) \sigma_{2}(r)-\rho_{2}(r) \sigma_{1}(r)\right]$     (16)

It is evident that if $\left|z \beta_{k}\right|<1$, then (15) implies that:

$\lim _{u \rightarrow \infty} M_{u}(z)=0$     (17)

and, thus, from (14) and (16),

$\Pi_{E}(r, z) \rightarrow \Pi(r, z)$      (18)

and

$\Pi_{C}(r, z) \rightarrow \beta_{k} r^{k} \Pi(r, z)$ [8]      (19)

where, $\Pi(r, z)$ is the stability multinomial for the corrector.

This shows that the stability polynomial of the predictor-corrector formula in either mode is essentially that of the corrector formula $\left(\rho_{2}, \sigma_{2}\right)$ whenever $u$ is sufficiently large and $h$ sufficiently small [8].
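This limiting behavior of (15) and (17) is easy to observe numerically; the value zβ_k = 0.5 below is a hypothetical choice satisfying |zβ_k| < 1:

```python
# Sketch illustrating (15) and (17): for |z*beta_k| < 1 the factor M_u(z)
# vanishes as the number of corrections u grows, so the stability
# polynomial (14) collapses to that of the corrector.  The value
# z*beta_k = 0.5 is hypothetical.
def M(u, zbk):
    return (zbk ** u) * (1.0 - zbk) / (1.0 - zbk ** u)

zbk = 0.5
values = [abs(M(u, zbk)) for u in (1, 2, 5, 10, 20)]
print(values)

# Each extra correction shrinks the predictor's influence geometrically.
assert all(a > b for a, b in zip(values, values[1:]))
assert values[-1] < 1e-5
```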

Theorem 2.1

The order of a convergent predictor-corrector method for first order equations must be greater than or equal to one [9].

Theorem 2.2

Let $\left\{y_{n+1}^{[m]}\right\}$ be a sequence of approximations of $y_{n+1}$ obtained by a $P E C E \ldots$ method. If:

$\left|\frac{\partial f}{\partial y}\left(X_{n+1}, y\right)\right| \leq L[17]$        (20)

(for all $y$ near $y_{n+1}$, including $y_{n+1}^{[0]}, y_{n+1}^{[1]}, \ldots$), where $L$ satisfies the condition $L<\frac{1}{\left|h \beta_{0}\right|}$, then the sequence $\left\{y_{n+1}^{[m]}\right\}$ converges to $y_{n+1}$ [17].

Proof: The exact numerical solution satisfies the equation [17]:

$y_{n+1}=\sum_{i=0}^{j-1} \alpha_{i} y_{n+i}+h \beta_{0} f\left(x_{n+1}, y_{n+1}\right)+h \sum_{i=0}^{j-1} \beta_{i} f_{n+i}$     (21)

The corrector satisfies the equation:

$y_{n+1}^{(m+1)}=\sum_{i=0}^{j-1} \alpha_{i} y_{n+i}+h \beta_{0} f\left(x_{n+1}, y_{n+1}^{(m)}\right)+h \sum_{i=0}^{j-1} \beta_{i} f_{n+i}$     (22)

Subtracting these two equations, we arrive at:

$y_{n+1}-y_{n+1}^{(m+1)}=h \beta_{0}\left[f\left(x_{n+1}, y_{n+1}\right)-f\left(x_{n+1}, y_{n+1}^{(m)}\right)\right]$     (23)

Applying the Lagrange mean value theorem, we arrive at:

$y_{n+1}-y_{n+1}^{(m+1)}=h \beta_{0}\left(y_{n+1}-y_{n+1}^{(m)}\right) \frac{\partial f}{\partial y}\left(x_{n+1}, y^{*}\right)$         (24)

where $y_{n+1}^{(m)} \leq y^{*} \leq y_{n+1}$. Thus,

$\left|y_{n+1}-y_{n+1}^{(m+1)}\right| \leq\left|h \beta_{0}\right|\left|y_{n+1}-y_{n+1}^{(m)}\right|\left|\frac{\partial f}{\partial y}\left(x_{n+1}, y^{*}\right)\right|$

$\leq h L\left|\beta_{0}\right|\left|y_{n+1}-y_{n+1}^{(m)}\right|$

$\leq\left[h L\left|\beta_{0}\right|\right]^{m+1}\left|y_{n+1}-y_{n+1}^{(0)}\right|$

Now, $\lim _{m \rightarrow \infty}\left|y_{n+1}-y_{n+1}^{(m+1)}\right|=0$ if

$h L\left|\beta_{0}\right|<1$, i.e., $L<\frac{1}{h\left|\beta_{0}\right|}$ [17].

This means that the conclusion of Theorem 2.2 holds, as seen in [17]. Clearly, some theoretical properties have been established in this section to support the use of the method.
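The contraction at the heart of Theorem 2.2 can be illustrated on a hypothetical scalar problem; here f(x, y) = −5y (so L = 5), β_0 = 1 and h = 0.1 are illustrative choices giving hL|β_0| = 0.5 < 1:

```python
# Sketch of the fixed-point corrector iteration behind Theorem 2.2 for a
# hypothetical scalar stiff problem f(x, y) = -5y (so L = 5).  With
# beta_0 = 1 and h = 0.1 we have h*L*|beta_0| = 0.5 < 1, so the corrections
# y^{(m+1)} = y_n + h*beta_0*f(x_{n+1}, y^{(m)}) contract geometrically.
def f(x, y):
    return -5.0 * y

h, beta0, y_n = 0.1, 1.0, 1.0
exact_fixed_point = y_n / (1.0 + 5.0 * h)   # y = y_n - 5*h*y solved exactly

y = y_n                                      # predictor guess y^{(0)}
errors = []
for m in range(30):
    y = y_n + h * beta0 * f(0.0, y)          # one correction sweep
    errors.append(abs(y - exact_fixed_point))

print(y)                                     # ~ 2/3
assert abs(y - exact_fixed_point) < 1e-8
# The error ratio equals h*L*|beta_0| = 0.5, as the proof predicts.
assert abs(errors[5] / errors[4] - 0.5) < 1e-6
```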

2.2 Prescribing the bound of the convergence criteria of BSVSVO

To set up the prescribed convergence criteria of the variable step variable order block predictor-corrector pair, we consider a k+1-step method of order p*=p+1 for the block predictor, and a k-step method of order p for the block corrector. This justifies the novelty of the BSVSVO predictor-corrector pair, with different step and order, under the assumption p*>p. Again, it becomes feasible to estimate the principal local truncation errors of the block predictor-corrector pair without approximating higher-order derivatives, whenever we assume that p*>p, where p* and p denote the orders of the block predictor and block corrector methods [3, 5, 7, 18, 19, 21-25].

Now, for the k+1-step method of order p*, the local errors of the variable step, variable order block predictor method are [5, 7, 18, 19, 21, 23, 25]:

$\begin{aligned} y\left(x_{n+1}\right)=y\left(x_{n}\right) &+\frac{h}{24}\left[55 f\left(x_{n}, y\left(x_{n}\right)\right)\right.\\ &-59 f\left(x_{n-1}, y\left(x_{n-1}\right)\right) \\ &+37 f\left(x_{n-2}, y\left(x_{n-2}\right)\right) \\ &\left.-9 f\left(x_{n-3}, y\left(x_{n-3}\right)\right)\right] \\ &+\frac{251}{720} y^{(5)}\left(\hat{\mu}_{n}\right) h^{5} \end{aligned}$

$\begin{aligned} y\left(x_{n+2}\right)=y\left(x_{n}\right) &+\frac{h}{3}\left[27 f\left(x_{n}, y\left(x_{n}\right)\right)\right.\\ &-44 f\left(x_{n-1}, y\left(x_{n-1}\right)\right) \\ &+31 f\left(x_{n-2}, y\left(x_{n-2}\right)\right) \\ &\left.-8 f\left(x_{n-3}, y\left(x_{n-3}\right)\right)\right] \\ &+\frac{269}{90} y^{(5)}\left(\hat{\mu}_{n}\right) h^{5} \end{aligned}$

$\begin{aligned} y\left(x_{n+3}\right)=y\left(x_{n}\right) &+\frac{h}{8}\left[189 f\left(x_{n}, y\left(x_{n}\right)\right)\right.\\ &-369 f\left(x_{n-1}, y\left(x_{n-1}\right)\right) \\ &+279 f\left(x_{n-2}, y\left(x_{n-2}\right)\right) \\ &\left.-75 f\left(x_{n-3}, y\left(x_{n-3}\right)\right)\right] \\ &+\frac{987}{80} y^{(5)}\left(\hat{\mu}_{n}\right) h^{5} \end{aligned}$      (25)

for some $\hat{\mu}_{n} \in\left(x_{n-3}, x_{n+3}\right)$. We presume that the estimates $\omega_{0}, \omega_{1}, \omega_{2}, \omega_{3}, \ldots, \omega_{n}$ are all exact, and let $z$ denote the solution of the differential equation satisfying the initial condition $z\left(x_{n}\right)=\omega_{n}$.

Subsequently,

$z\left(x_{n+1}\right)-\omega_{n+1}^{(0)}=\frac{251}{720} y^{(5)}\left(\hat{\mu}_{n}\right) h^{5}$

$z\left(x_{n+2}\right)-\omega_{n+2}^{(0)}=\frac{269}{90} y^{(5)}\left(\hat{\mu}_{n}\right) h^{5}$

$z\left(x_{n+3}\right)-\omega_{n+3}^{(0)}=\frac{987}{80} y^{(5)}\left(\hat{\mu}_{n}\right) h^{5}$        (26)

for some $\hat{\mu}_{n} \in\left(x_{n-3}, x_{n}\right)$ [5, 7, 18, 19, 21, 23, 25].
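The error constants in (25) and (26) can be verified by applying each predictor formula to y(x) = x^5, whose fifth derivative is the constant 120; x_n = 0 and h = 1 are assumed for convenience:

```python
from fractions import Fraction

# Sketch verifying the local error terms of the block predictor (25) on
# y(x) = x^5, for which y^(5) is the constant 120.  With x_n = 0 and h = 1,
# y(x_{n+j}) minus the predictor value must equal C_j * 120, where
# C_1 = 251/720, C_2 = 269/90, C_3 = 987/80.
F = Fraction
y = lambda x: F(x) ** 5
f = lambda x: 5 * F(x) ** 4            # y' = f

fn = [f(0), f(-1), f(-2), f(-3)]       # f_n, f_{n-1}, f_{n-2}, f_{n-3}

pred1 = y(0) + F(1, 24) * (55 * fn[0] - 59 * fn[1] + 37 * fn[2] - 9 * fn[3])
pred2 = y(0) + F(1, 3) * (27 * fn[0] - 44 * fn[1] + 31 * fn[2] - 8 * fn[3])
pred3 = y(0) + F(1, 8) * (189 * fn[0] - 369 * fn[1] + 279 * fn[2] - 75 * fn[3])

assert y(1) - pred1 == F(251, 720) * 120
assert y(2) - pred2 == F(269, 90) * 120
assert y(3) - pred3 == F(987, 80) * 120
print("predictor error constants 251/720, 269/90, 987/80 confirmed")
```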

In the same manner, the local errors of the block corrector method are found to be:

$\begin{aligned} y\left(x_{n+1}\right)=y\left(x_{n-2}\right.&) \\ &+\frac{h}{4}\left[57 f\left(x_{n+1}, y\left(x_{n+1}\right)\right)\right.\\ &-72 f\left(x_{n+2}, y\left(x_{n+2}\right)\right) \\ &\left.+27 f\left(x_{n+3}, y\left(x_{n+3}\right)\right)\right] \\ &-\frac{75}{8} y^{(4)}\left(\hat{\mu}_{n}\right) h^{4} \end{aligned}$

$\begin{aligned} y\left(x_{n+2}\right)=y\left(x_{n-2}\right.&) \\ &+\frac{h}{3}\left[44 f\left(x_{n+1}, y\left(x_{n+1}\right)\right)\right.\\ &-52 f\left(x_{n+2}, y\left(x_{n+2}\right)\right) \\ &\left.+20 f\left(x_{n+3}, y\left(x_{n+3}\right)\right)\right] \\ &-\frac{28}{3} y^{(4)}\left(\hat{\mu}_{n}\right) h^{4} \end{aligned}$      (27)

$\begin{aligned} y\left(x_{n+3}\right)=y\left(x_{n-2}\right.&) \\ &+\frac{h}{12}\left[175 f\left(x_{n+1}, y\left(x_{n+1}\right)\right)\right.\\ &-200 f\left(x_{n+2}, y\left(x_{n+2}\right)\right) \\ &\left.+85 f\left(x_{n+3}, y\left(x_{n+3}\right)\right)\right] \\ &-\frac{75}{8} y^{(4)}\left(\hat{\mu}_{n}\right) h^{4} \end{aligned}$

for some $\bar{\mu}_{n} \in\left(x_{n-2}, x_{n+3}\right)$. If we presume that the estimates $\omega_{0}, \omega_{1}, \omega_{2}, \omega_{3}, \ldots, \omega_{n}$ are all exact, and again let $z$ denote the solution of the differential equation satisfying the initial condition $z\left(x_{n}\right)=\omega_{n}$, then subtracting the corrected approximations from (27) yields:

$z\left(x_{n+1}\right)-\omega_{n+1}=-\frac{75}{8} y^{(4)}\left(\bar{\mu}_{n}\right) h^{4}$

$z\left(x_{n+2}\right)-\omega_{n+2}=-\frac{28}{3} y^{(4)}\left(\bar{\mu}_{n}\right) h^{4}$

$z\left(x_{n+3}\right)-\omega_{n+3}=-\frac{75}{8} y^{(4)}\left(\bar{\mu}_{n}\right) h^{4}$     (28)

for about $\bar{\mu}_{n} \in\left(x_{n-2}, x_{n+3}\right)$.
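In the same spirit, the corrector error constants in (27) and (28) can be verified on y(x) = x^4, whose fourth derivative is the constant 24; x_n = 0 and h = 1 are again assumed:

```python
from fractions import Fraction

# Sketch verifying the local error terms of the block corrector (27) on
# y(x) = x^4, for which y^(4) is the constant 24.  With x_n = 0 and h = 1,
# y(x_{n+j}) minus the corrector value must equal -C_j * 24, where
# C_1 = 75/8, C_2 = 28/3, C_3 = 75/8.
F = Fraction
y = lambda x: F(x) ** 4
f = lambda x: 4 * F(x) ** 3            # y' = f

fp = [f(1), f(2), f(3)]                # f_{n+1}, f_{n+2}, f_{n+3}

corr1 = y(-2) + F(1, 4) * (57 * fp[0] - 72 * fp[1] + 27 * fp[2])
corr2 = y(-2) + F(1, 3) * (44 * fp[0] - 52 * fp[1] + 20 * fp[2])
corr3 = y(-2) + F(1, 12) * (175 * fp[0] - 200 * fp[1] + 85 * fp[2])

assert y(1) - corr1 == -F(75, 8) * 24
assert y(2) - corr2 == -F(28, 3) * 24
assert y(3) - corr3 == -F(75, 8) * 24
print("corrector error constants -75/8, -28/3, -75/8 confirmed")
```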

In advancing, we must make the presumption that, for small values of h:

$z^{(5)}\left(\hat{\mu}_{n}\right) \approx z^{(4)}\left(\bar{\mu}_{n}\right)$         (29)

and the strength of the error-control procedure relies greatly on this presumption.

Furthermore, if we combine Eqns. (26) and (28), and presume that $z^{(5)}\left(\hat{\mu}_{n}\right) \approx z^{(4)}\left(\bar{\mu}_{n}\right)$, we arrive at:

$\begin{aligned} \omega_{n+1}^{(0)}-\omega_{n+1}=& h^{4}\left[\frac{251}{720} z^{(5)}\left(\hat{\mu}_{n}\right) h+\frac{75}{8} z^{(4)}\left(\bar{\mu}_{n}\right)\right] \\ & \approx \frac{7001}{720} h^{4} z^{(4)}\left(\bar{\mu}_{n}\right) \end{aligned}$

$\begin{aligned} \omega_{n+2}^{(0)}-\omega_{n+2}=& h^{4}\left[\frac{269}{90} z^{(5)}\left(\hat{\mu}_{n}\right) h+\frac{28}{3} z^{(4)}\left(\bar{\mu}_{n}\right)\right] \\ & \approx \frac{1109}{90} h^{4} z^{(4)}\left(\bar{\mu}_{n}\right) \end{aligned}$      (30)

$\begin{aligned} \omega_{n+3}^{(0)}-\omega_{n+3}=& h^{4}\left[\frac{987}{80} z^{(5)}\left(\hat{\mu}_{n}\right) h+\frac{75}{8} z^{(4)}\left(\bar{\mu}_{n}\right)\right] \\ & \approx \frac{1737}{80} h^{4} z^{(4)}\left(\bar{\mu}_{n}\right) \end{aligned}$

then:

$\begin{aligned} z^{(4)}\left(\bar{\mu}_{n}\right) & \approx \frac{720}{7001 h^{4}}\left(\omega_{n+1}-\omega_{n+1}^{(0)}\right) \\ z^{(4)}\left(\bar{\mu}_{n}\right) & \approx \frac{90}{1109 h^{4}}\left(\omega_{n+2}-\omega_{n+2}^{(0)}\right) \\ z^{(4)}\left(\bar{\mu}_{n}\right) & \approx \frac{80}{1737 h^{4}}\left(\omega_{n+3}-\omega_{n+3}^{(0)}\right) \end{aligned}$         (31)

Utilizing (31) to replace the terms $h^{4} z^{(4)}\left(\bar{\mu}_{n}\right)$ in (28) generates the estimates of the principal local truncation errors as:

$\left|\tau_{n+1}\right|=\left|z\left(x_{n+1}\right)-\omega_{n+1}\right| \approx \frac{6750}{7001}\left|\omega_{n+1}-\omega_{n+1}^{(0)}\right| \leq \delta$

$\left|\tau_{n+2}\right|=\left|z\left(x_{n+2}\right)-\omega_{n+2}\right| \approx \frac{840}{1109}\left|\omega_{n+2}-\omega_{n+2}^{(0)}\right| \leq \delta$

$\left|\tau_{n+3}\right|=\left|z\left(x_{n+3}\right)-\omega_{n+3}\right| \approx \frac{250}{579}\left|\omega_{n+3}-\omega_{n+3}^{(0)}\right| \leq \delta$         (32)

The expressions in (32) imply that $\omega_{n+1} \neq \omega_{n+1}^{(0)}, \omega_{n+2} \neq \omega_{n+2}^{(0)}$ and $\omega_{n+3} \neq \omega_{n+3}^{(0)}$, where $\omega_{n+i}^{(0)}$ and $\omega_{n+i}$ represent the predicted and corrected approximations, respectively. The essential quantities $\left|\tau_{n+1}\right|,\left|\tau_{n+2}\right|$ and $\left|\tau_{n+3}\right|$ are the principal local truncation errors, and $\delta$ defines the bound of the convergence criteria [3, 5, 7, 8, 18, 19, 21-25].

The execution of the BSVSVO is carried out using (32), in combination with the convergence criterion $\delta$, to ensure convergence at every loop. The primary function of (32) is to find a suitable step size, which is reduced whenever the norm of the PLTE estimates exceeds the established convergence criterion, and raised by a prescribed factor whenever the norm is less than or equal to it [18, 19]. Again, this idea is very important for determining a suitable step size [5]. Finally, Eq. (32) showcases the error-control procedure [7]. The computed principal local truncation error (32) is used to decide whether to accept the results of the selected step size or to redo the procedure with a different step size. This process establishes the convergence criteria of the method on each trial run [8, 18, 19, 21-25].
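The error-control loop described above can be sketched as follows. The routine, its halving/doubling factors, and the stand-in predictor and corrector are illustrative assumptions, not the authors' implementation; only the factor 6750/7001 comes from (32):

```python
# Illustrative sketch (not the authors' code) of variable step control:
# the PLTE estimate tau ~ c * |corrected - predicted| is tested against the
# bound delta; the step is rejected and halved when the test fails, and
# cautiously enlarged when it passes easily.  The constant 6750/7001 is the
# first-point factor from (32); predict/correct below are stand-ins.
def advance(predict, correct, x, y, h, delta, h_min=1e-12):
    while h > h_min:
        y_pred = predict(x, y, h)              # block predictor value
        y_corr = correct(x, y, y_pred, h)      # block corrector value
        tau = (6750.0 / 7001.0) * abs(y_corr - y_pred)
        if tau <= delta:                       # convergence test of (32)
            grow = 2.0 if tau <= delta / 100.0 else 1.0
            return x + h, y_corr, h * grow     # accept, maybe raise h
        h *= 0.5                               # reject: redo with smaller h
    raise RuntimeError("step size underflow")

# Hypothetical one-step demo on y' = -20y, with forward and backward
# Euler sweeps standing in for the block predictor-corrector pair.
pred = lambda x, y, h: y + h * (-20.0 * y)
corr = lambda x, y, yp, h: y + h * (-20.0 * yp)
x1, y1, h1 = advance(pred, corr, 0.0, 1.0, 0.5, 1e-4)
print(x1, y1, h1)   # a much smaller accepted step than the initial h = 0.5
```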

3. Results and Discussion

Three test problems of stiff ODEs were solved to demonstrate the efficiency and accuracy of the BSVSVO. The numerical results of the BSVSVO are compared with BBDF methods of various forms. The performance is assessed in terms of the convergence criteria, efficiency and maximum error.

Furthermore, block backward differentiation formulae have been effective in computing stiff ODEs, in terms of cutting down the computation time and enhancing the accuracy of the approximate solution [14]. This was the result of some changes made to the existing block BDF, extending the method to generate three approximate solutions in a single loop. Again, block backward differentiation formulae are suitable for resolving stiff ODEs based on the diagonally implicit form, which recent research anticipates to be faster than the fully implicit methods. Lastly, the variable step block backward differentiation formula (VS-BBDF) was implemented via a variable step size technique with the step size increased by a factor of 1.8 [28]. Although the bounded stability property of the block solver limits its ability to demonstrate more efficiency and accuracy in solving stiff ODEs, the BSVSVO has the capacity to find a suitable step size for every loop to satisfy the convergence criteria. This in turn complements the effort of, and bridges the gap left by, [14, 15, 28], thereby enhancing the efficiency, the error control procedure, and the maximum error.

We examine numerical solutions obtained by adopting these three methods [14, 15, 28]. The reasons behind this selection are that the A(α)-BBDF is executed with a fixed step size, the ρ-DIBBDF with a fixed step size, and the VS-BBDF with a variable step size:

i Implicit A($\alpha$)-BBDF of order five [14].

ii $\rho$-DIBBDF method with the best choice of the parameter $\rho$ that holds optimal stability properties [15].

iii Stability region of two-point variable step-block backward differentiation formulae method [28].

Test problem 3.1

$y^{\prime}(x)=-20\left(y-x^{2}\right)+2 x$ with initial condition $y(0)=\frac{1}{3}$ and $x \in[0,1]$.

Exact solution: $y(x)=x^{2}+\frac{1}{3} e^{-20 x}$.

Source: [14].
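As a point of reference for this problem, a simple backward Euler sweep (a stand-in for the implicit methods compared in the paper, not the BSVSVO itself) already resolves the stiff transient when the implicit update is solved exactly; h = 0.001 is an illustrative fixed step:

```python
import math

# Sketch: test problem 3.1, y' = -20(y - x^2) + 2x, y(0) = 1/3, solved with
# backward Euler (a stand-in for the implicit block methods compared in the
# paper; this is not the BSVSVO).  Because f is linear in y, the implicit
# update y_{k+1} = y_k + h f(x_{k+1}, y_{k+1}) can be solved in closed form.
h, N = 0.001, 1000
x, y = 0.0, 1.0 / 3.0
max_error = 0.0
for _ in range(N):
    x_next = x + h
    # y_next = y + h*(-20*(y_next - x_next^2) + 2*x_next), solved for y_next:
    y_next = (y + h * (20.0 * x_next ** 2 + 2.0 * x_next)) / (1.0 + 20.0 * h)
    x, y = x_next, y_next
    exact = x ** 2 + math.exp(-20.0 * x) / 3.0   # exact solution from the text
    max_error = max(max_error, abs(y - exact))

print(max_error)          # MAXE over [0, 1]
assert max_error < 1e-2   # first-order accuracy at h = 1e-3
```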

Test problem 3.2

$y^{\prime}(x)=5 e^{5 x}(y-x)^{2}+1$ with initial condition $y(0)=-1$ and $x \in[0,1]$.

Exact Solution: $y(x)=x-e^{-5 x}$.

Source: [15].

Test problem 3.3

$y^{\prime}(x)=-1000 y+3000-2000 e^{-x}$ with initial condition $y(0)=0$ and $x \in[0,20]$.

Exact Solution: $y(x)=3-0.998 e^{-1000 x}-2.002 e^{-x}$.

Eigenvalue: $\lambda=-1000$.

Source: [28].

Table 1. Results of test problem 1

| Method Used | MAXE | Convergence Criteria |
|---|---|---|
| $A(\alpha)$-BBDF | 9.80872(-03) | $10^{-3}$ |
| VSVOP-CAP | 7.98892(-05) | $10^{-3}$ |
| VSVOP-CAP | 7.99849(-05) | |
| VSVOP-CAP | 8.08499(-05) | |
| BBDF(5) | 3.61596(-06) | $10^{-6}$ |
| 3SBBDF | 2.78963(-06) | |
| $A(\alpha)$-BBDF | 2.10240(-06) | |
| VSVOP-CAP | 1.47862(-10) | $10^{-6}$ |
| VSVOP-CAP | 2.16175(-10) | |
| VSVOP-CAP | 2.24112(-10) | |
| BBDF(5) | 3.65378(-10) | $10^{-10}$ |
| 3SBBDF | 2.84503(-10) | |
| $A(\alpha)$-BBDF | 2.15115(-10) | |
| VSVOP-CAP | 0. | $10^{-10}$ |
| VSVOP-CAP | 0. | |
| VSVOP-CAP | 0. | |

Tables 1, 2 and 3 show the method used, the MAXE and the convergence criteria of the computational results. The BSVSVO process involves the use of (32), with the demanding computation of finding a suitable variable step size to guarantee the bound of the convergence criteria at every loop. When achieved, this leads to the effective actualization of better results. The BSVSVO is compared with existing methods on stiff ODEs solved with fixed and variable step sizes. The BBDF possesses a strong stability property, compared with the BSVSVO, whose stability property is bounded.

Table 2. Results of test problem 2

| Method Used | MAXE | Convergence Criteria |
|---|---|---|
| $\rho$-BBDF(-0.75) | 3.02746×10$^{-3}$ | $10^{-3}$ |
| $\rho$-BBDF(-0.60) | 3.08609×10$^{-3}$ | |
| $\rho$-BBDF(0.50) | 3.79190×10$^{-3}$ | |
| $\rho$-BBDF(0.95) | 6.39361×10$^{-3}$ | |
| VSVOP-CAP | 1.84187×10$^{-5}$ | $10^{-3}$ |
| VSVOP-CAP | 1.96091×10$^{-5}$ | |
| VSVOP-CAP | 1.97311×10$^{-5}$ | |
| $\rho$-BBDF(-0.75) | 3.97922×10$^{-7}$ | $10^{-7}$ |
| $\rho$-BBDF(-0.60) | 4.07670×10$^{-7}$ | |
| $\rho$-BBDF(0.50) | 5.95266×10$^{-7}$ | |
| VSVOP-CAP | 2.27374×10$^{-13}$ | $10^{-7}$ |
| VSVOP-CAP | 2.42695×10$^{-13}$ | |
| VSVOP-CAP | 2.4436×10$^{-13}$ | |
| $\rho$-BBDF(-0.75) | 3.99347×10$^{-11}$ | $10^{-11}$ |
| $\rho$-BBDF(-0.60) | 4.09109×10$^{-11}$ | |
| $\rho$-BBDF(0.50) | 6.00101×10$^{-11}$ | |
| VSVOP-CAP | 1.11022×10$^{-16}$ | $10^{-11}$ |
| VSVOP-CAP | 0. | |
| VSVOP-CAP | 0. | |

Table 3. Results of test problem 3

| Method Used | MAXE | Convergence Criteria |
|---|---|---|
| VS-BBDF | 1.1090e-004 | $10^{-2}$ |
| VSVOP-CAP | 4.20214e-003 | $10^{-2}$ |
| VSVOP-CAP | 4.2964e-003 | |
| VSVOP-CAP | 4.30589e-003 | |
| VS-BBDF | 1.5807e-006 | $10^{-4}$ |
| VSVOP-CAP | 2.61316e-007 | $10^{-4}$ |
| VSVOP-CAP | 2.61316e-007 | |
| VSVOP-CAP | 3.79881e-007 | |
| VS-BBDF | 3.93604e-007 | $10^{-6}$ |
| VSVOP-CAP | 2.78685e-011 | $10^{-6}$ |
| VSVOP-CAP | 4.07725e-011 | |
| VSVOP-CAP | 4.22723e-011 | |

4. Conclusions

Table 1, Table 2 and Table 3 display the method used, MAXE and bound of the convergence criteria of the computational results for test problems 1, 2 and 3. The (BSVSVO) is compared with existing methods of A(α)-BBDF, ρ-DIBBDF and VS-BBDF in terms of the method used, MAXE and bound of the convergence criteria. These results displayed by BSVSVO were implemented using the Eqns. (11), (12) and (24) to execute the process. This result of BSVSVO performs better than A(α)-BBDF as discussed in table 1 due to the need of finding a suited step size compare to A(α)-BBDF which is executed with fixed step size. Table 2 shows the computational results of BSVSVO compared with the results of the ρ-DIBBDF method with a better selection of the parametric quantity ρ that establishes optimum stability properties. This would have been better if a suited step size was used for the implementation rather than the fixed step size approach. The BSVSVO possesses the advantage of deciding the suited step size for better efficiency and accuracy compare to ρ-DIBBDF which depends on selecting a better valuate for the parametric quantity ρ. Table 3 results of BSVSVO is compared with VS-BBDF. This VS-BBDF method has an increment of the step size to an element 1.8 which performs better on the 10-2 bound of the convergence criteria compare to BSVSVO. This is due to its inability to find a suited step size. Again, the performance of VS-BBDF was not better for the bound of the convergence criteria of 10-4 and 10-6 compare to BSVSVO which banks on suited step size for better efficiency and accuracy as shown in Table 3. Again, the performance of BSVSVO is seen as an efficient method which possess the capacity to determine a suitable variable step size to ensure the bound of the convergence criteria is satisfied. These benefits distinguish the BSVSVO from other methods of A(α)-BBDF, ρ-DIBBDF and VS-BBDF which possess strong stability properties unlike BSVSVO with bounded stability properties. 
The principal contribution of the BSVSVO is the error control procedure of Eq. (24), built around the idea of finding a suitable variable step size. Nevertheless, the BSVSVO comes with some limitations: its stability properties are bounded, and its power-series basis function approximation is ill-suited to oscillatory and exponential solutions, for which specially designed exponentially fitted and trigonometrically fitted methods are preferable. Further work can therefore employ exponentially fitted and trigonometrically fitted methods to solve stiff systems that exhibit oscillatory behavior.
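The variable-step error control idea can be illustrated with a short sketch. Eqns. (11), (12) and (24) of the paper are not reproduced here, so the code below substitutes a standard second-order Adams-Bashforth predictor with a trapezoidal corrector and a Milne-type estimate proportional to the PLTE; the function name `solve_variable_step`, the tolerance handling, and the halving/doubling factors are illustrative assumptions, not the authors' block algorithm.

```python
import math

def solve_variable_step(f, t0, y0, t_end, tol, h0=1e-3):
    """March from t0 to t_end, halving h when the Milne-type error
    estimate |y_corr - y_pred| exceeds tol and doubling h when the
    estimate is comfortably below tol (hypothetical control policy)."""
    t, y = t0, y0
    h = h0
    f_prev = f(t, y)  # bootstrap: the first predictor step reduces to Euler
    while t < t_end:
        h = min(h, t_end - t)            # do not overshoot the endpoint
        fn = f(t, y)
        # Predictor: explicit second-order Adams-Bashforth
        y_pred = y + h * (1.5 * fn - 0.5 * f_prev)
        # Corrector: trapezoidal rule, one PECE sweep
        y_corr = y + 0.5 * h * (fn + f(t + h, y_pred))
        err = abs(y_corr - y_pred)       # proportional to the PLTE
        if err > tol and h > 1e-12:
            h *= 0.5                     # reject the step and retry
            continue
        f_prev = fn
        t, y = t + h, y_corr             # accept the step
        if err < tol / 32:
            h *= 2.0                     # estimate is small: grow the step
    return y

# Usage on a mildly stiff scalar test problem y' = -20y, y(0) = 1.
y_num = solve_variable_step(lambda t, y: -20.0 * y, 0.0, 1.0, 1.0, tol=1e-6)
print(abs(y_num - math.exp(-20.0)))      # small absolute error
```

The rejection branch retries the same point with a halved step, which mirrors the general strategy the Conclusions describe: the tolerance (the bound of the convergence criteria) drives the choice of step size rather than a fixed grid.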

Acknowledgment

The authors thank the anonymous reviewers for their contributions to this research study. This work is supported by the Covenant University Centre for Research, Innovation and Development (CUCRID).

Nomenclature

BSVSVO  block solver of variable step variable order
δ  bound of the convergence criteria
MAXE  maximum error(s)
A(α)-BBDF  new 3-point BBDF [14]
ρ-DIBBDF(ρi)  ρ-diagonally implicit block backward differentiation formula (ρ value) [15]
VS-BBDF  variable step-block backward differentiation formulae method [28]

References

[1] Abodunrin, T., Boyo, A., Usikalu, M., Emetere, M., Ajayi, O., Kotsedi, C., Nuru, Z., Malik, M., Oghonyon, G. (2018). Influence of n-Mosfet transistor on dye-sensitized solar cell efficiency. Heliyon, 4(12): 1-13. https://doi.org/10.1016/j.heliyon.2018.e01078

[2] Akinfenwa, O., Jator, S., Yoa, N. (2011). An eight order backward differentiation formula with continuous coefficients for stiff Ordinary differential equations. International Journal of Mathematical and Computational Sciences, 5(2): 1-6. https://doi.org/10.5281/zenodo.1085431

[3] Ascher, U.M., Petzold, L.R. (1998). Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, USA.

[4] Bishop, S.A., Ayoola, E.O., Oghonyon, G.J. (2017). Existence of mild solution of impulsive quantum stochastic differential equation with nonlocal conditions. Analysis and Mathematical Physics, 7(3): 255-265. https://doi.org/10.1007/s13324-016-0140-x

[5] Dormand, J.R. (1996). Numerical Methods for Differential Equations: A Computational Approach. CRC Press, New York. https://doi.org/10.1201/9781351075107

[6] Edeki, S.O., Ogundile, P.O., Osoba, B. (2018). Coupled FCT-HP for analytical solutions of the generalized time-fractional Newell-Whitehead-Segel equation. WSEAS Transactions on Mathematics, 13: 266-274. https://www.wseas.org/multimedia/journals/control/2018/a665903-030.pdf.

[7] Faires, J.D., Burden, R.L. (2012). Initial-value problems for ODEs. Brooks/Cole, USA. 

[8] Gear, C.W. (1971). Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-Hall, New Jersey, USA.

[9] Fatunla, S.O. (1988). Numerical Methods for Initial Value Problems in Ordinary Differential Equations. Academic Press, Inc., New York. 

[10] Ibrahim, Z.B., Johari, R., Ismail, F. (2003). On the stability of fully implicit block backward differentiation formula. Matematika, 19(2): 83-89. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.508.712&rep=rep1&type=pdf.

[11] Ibrahim, Z.B., Othman, K.I., Suleiman, M. (2007). Variable step block backward differentiation formulas for solving first order stiff ODEs. Proceedings of the World Congress on Engineering, II: 2-5. http://www.iaeng.org/publication/WCE2007/WCE2007_pp785-789.pdf.

[12] Ibrahim, Z.B., Othman, K.I., Suleiman, M. (2007). Implicit r-point block backward differentiation formula for solving first-order stiff ODEs. Applied Mathematics and Computation, 186(2): 558-565. https://doi.org/10.1016/j.amc.2006.07.116

[13] Ibrahim, Z.B., Mohd Nasir, N.A.A., Othman, K.I., Zainuddin, N. (2017). Adaptive order of block backward differentiation formulas for stiff ODEs. Numerical Algebra Control and Optimization, 7(1): 95-106. https://doi.org/10.3934/naco.2017006

[14] Ibrahim, Z.B., Noor, N.M., Othman, K.I. (2019). Fixed coefficient A(α) stable block backward differentiation formulas for stiff ordinary differential equations. Symmetry, 11: 1-12. https://doi.org/10.3390/sym11070846

[15] Ijam, H.M., Ibrahim, Z.B. (2019). Diagonally implicit block backward differentiation formula with optimal stability properties for stiff ordinary differential equations. Symmetry, 11: 1-18. https://doi.org/10.3390/sym11111342

[16] Iyengar, S.R.K., Jain, R.K. (2009). Numerical Methods. New Age International (P) Limited, New Delhi, India.

[17] Jain, M.K., Iyengar, S.R.K., Jain, R.K. (2007). Numerical methods for scientific and engineering computation. 5th Ed. New Age International (P), Limited, New Delhi, India.

[18] Lambert, J.D. (1973). Computational Methods in Ordinary Differential Equations. John Wiley and Sons, New York. 

[19] Lambert, J.D. (1991). Numerical Methods for Ordinary Differential Systems. John Wiley and Sons, New York. 

[20] Okagbue, H.I., Agboola, O.O., Opanuga, A.A., Oghonyon, J.G. (2017). Classes of ordinary differential equations obtained for the probability functions of Gumbel distribution. Proceedings of the World Congress on Engineering and Computer Science, 2: 871-875. http://www.iaeng.org/publication/WCECS2017/WCECS2017_pp871-875.pdf. 

[21] Oghonyon, J.G., Okunuga, S.A., Omoregbe, N.A., Agboola, O.O. (2015). A computational approach in estimating the amount of pond and determining the long time behavioural representation of pond pollution. Global Journal of Pure and Applied Mathematics, 11(5): 2773-2786. http://eprints.covenantuniversity.edu.ng/id/eprint/5475.

[22] Oghonyon, J.G., Ehigie, J., Eke, S.K. (2017). Investigating the convergence of some selected properties on block predictor-corrector methods and its applications. Journal of Engineering and Applied Sciences, 11(11): 2402-2408. https://doi.org/10.36478/jeasci.2016.2402.2408

[23] Oghonyon, J.G., Adesanya, O.A., Akewe, H., Okagbue, H.I. (2018). Softcode of multi-processing Milne’s device for estimating first-order ordinary differential equations. Asian Journal of Scientific Research, 11(4): 553-559. https://doi.org/10.3923/ajsr.2018.553.559

[24] Oghonyon, J.G., Imaga, O.F., Ogunniyi, P.O. (2018). The reversed estimation of variable step size implementation for solving nonstiff ordinary differential equations. International Journal of Civil Engineering and Technology, 9(8): 332-340. http://www.iaeme.com/ijciet/issues.asp?JType=IJCIET&VType=9&IType=8.

[25] Oghonyon, J.G., Okunuga, S.A., Okagbue, H.I. (2019). Expanded trigonometrically matched block variable-step-size technics for computing oscillating vibrations. Lecture Notes in Engineering and Computer Science, 2239: 552-557. http://www.iaeng.org/publication/IMECS2019/IMECS2019_pp552-557.pdf.

[26] Voss, D., Abbas, S. (1997). Block predictor-corrector schemes for the parallel solution of ODEs. Computers Math. Applica., 33(6): 65-72. https://doi.org/10.1016/S0898-1221(97)00032-1

[27] Yang, Y.W., Cao, W., Chung, T.S., Morris, J. (2005). Applied Numerical Methods Using Matlab®. John Wiley and Sons, New Jersey.

[28] Yatim, S., Asnor, A., Ibrahim, Z.B. (2017). Stability region of two-point variable step-block backward differentiation formulae. Journal of Algorithms and Computational Technology, 11(2): 192-198. https://doi.org/10.1177/1748301816680508