1. Introduction
In this article, our aim is to construct an efficient algorithm for integrating initial-value differential problems given by
$$y'(t) = f(t, y(t)), \qquad t \in [t_0, T], \qquad (1)$$
with $y(t_0) = y_0$, assuming that all prerequisites for the existence of a unique solution are fulfilled.
Differential equations are used to model continuous phenomena that frequently occur in real-world situations. Unfortunately, very few such equations can be solved analytically. In this scenario, the problem of interest is usually dealt with numerically, that is, an approximate solution is obtained on a discrete set of points. The Runge–Kutta and linear multistep classes of methods have commonly been used to obtain reliable approximations of the true solution of (1). For more details, one can see the monographs by Butcher [1], Hairer [2,3], Lambert [4], Brugnano [5], Rosser [6] and Milne [7]. Matlab and Mathematica incorporate many ODE solvers designed to carry out this task efficiently. For a detailed description of these codes, one can see the references by Shampine et al. [8,9] and Dormand and Prince [10]. Some of these built-in codes are specifically designed for solving stiff or non-stiff systems; for instance, the ode23s solver is a built-in function explicitly crafted to handle stiff systems, as noted by Shampine et al. [11], and it performs particularly well at coarse tolerances. This solver relies on a modified Rosenbrock formula of order two, employing a variable step-size strategy based on a pair of second- and third-order formulas to advance the solution and estimate the error. The development of efficient algorithms with good stability characteristics is a major problem of interest in the numerical analysis of differential systems.
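For illustration, built-in adaptive solvers of this kind can be invoked with only a few lines of code. The following sketch uses SciPy's implicit Radau solver on a stiff Van der Pol problem purely as an analogue of the MATLAB routines cited above; the test problem, tolerances, and solver choice are illustrative assumptions, not part of the original presentation.

```python
# Illustrative only: an adaptive implicit solver applied to a stiff test problem,
# playing a role analogous to MATLAB's ode23s discussed above.
from scipy.integrate import solve_ivp

def vdp(t, y, mu=1000.0):
    # Stiff Van der Pol oscillator written as a first-order system y' = f(t, y).
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

sol = solve_ivp(vdp, (0.0, 3000.0), [2.0, 0.0],
                method="Radau",        # implicit scheme suited to stiff systems
                rtol=1e-6, atol=1e-8)  # tolerances drive the adaptive step size

print(sol.success, sol.t.size, sol.y[:, -1])
```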
The first Dahlquist barrier, as is well known, limits the order that can be attained within the class of zero-stable linear multi-step methods. Dahlquist established that the order of accuracy, say p, of a zero-stable linear m-step method satisfies
$$p \le \begin{cases} m+2, & \text{if } m \text{ is even},\\ m+1, & \text{if } m \text{ is odd},\\ m, & \text{if the method is explicit}.\end{cases}$$
To overcome the limitations of linear multi-step methods, numerous researchers have proposed hybrid approaches, which incorporate information from the solution at off-step points within the interval of interest. These hybrid methods have also been referred to as linear multi-step methods of modified type [1]. For a more comprehensive understanding of these techniques, Lambert’s book [4] serves as a valuable resource. Block methods were initially designed to obtain starting values for implicit multi-step methods, but they have evolved into general-purpose codes [12]. These block methods provide the approximate solution at several grid points simultaneously (see [13]); a minimal sketch of this idea is given below. For an extensive list of references on these methods, interested readers may refer to Lambert’s book and Brugnano’s work [4,5].
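To make the block idea concrete, the following sketch advances the solution two grid points at a time by solving two implicit formulas simultaneously. It uses a classical two-point implicit block pair based on Newton–Cotes quadrature, not the method derived in this paper, and the fixed-point iteration is only one possible way of resolving the implicitness.

```python
# A classical two-point implicit block method (not the scheme of this paper):
#   y_{n+1} = y_n + h/12 * (5 f_n + 8 f_{n+1} - f_{n+2})
#   y_{n+2} = y_n + h/3  * (f_n + 4 f_{n+1} + f_{n+2})
# Both values of the block are obtained at once via fixed-point iteration.
import numpy as np

def block_step(f, t, y, h, iters=50, tol=1e-12):
    f0 = f(t, y)
    y1, y2 = y + h * f0, y + 2 * h * f0            # explicit Euler predictors
    for _ in range(iters):
        f1, f2 = f(t + h, y1), f(t + 2 * h, y2)
        y1_new = y + h / 12 * (5 * f0 + 8 * f1 - f2)
        y2_new = y + h / 3 * (f0 + 4 * f1 + f2)
        done = max(abs(y1_new - y1), abs(y2_new - y2)) < tol
        y1, y2 = y1_new, y2_new
        if done:
            break
    return y1, y2

# Example: y' = -y, y(0) = 1, whose exact solution is exp(-t).
f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    _, y2 = block_step(f, t, y, h)
    t, y = t + 2 * h, y2
print(y, np.exp(-t))   # the block solution closely tracks exp(-1)
```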
Here, we have used both approaches, namely hybrid and block, to develop a new method using interpolation and collocation techniques. An optimized version of the method is obtained by following an appropriate optimization strategy. The resulting algorithm can be considered an extended version of the one presented in [14]. Additionally, the technique is devised in an adaptive step-size version, employing an embedded-type approach.
The article’s content is organized into the following sections: Section 2 contains the derivation of the new algorithm. In Section 3, the primary characteristics of the proposed method are examined. The formulation of the new method in adaptive step-size mode is elaborated in Section 4. The performance of the proposed method, compared with some existing methods in the literature, is demonstrated through numerical experiments in Section 5. In Section 6, efficiency curves are plotted, which reflect the better performance of the proposed scheme. Finally, Section 7 presents the conclusions drawn from the study.
2. Construction of the Proposed Scheme
For the sake of simplicity, the method is derived for solving the differential system (1) in the scalar case, and then, by using the component-wise strategy, it can be applied to solve systems of equations. Let us consider a fixed step size h on a discrete grid of equally spaced points covering the integration interval. To derive the method, we begin by considering the theoretical solution of (1) in the form of an interpolating polynomial expressed as:
The numbers of interpolation and collocation points are chosen so that their total matches the number of unknown coefficients, with k denoting the number of steps in the block. The unknown constants in (2) are to be determined, and the basis consists of polynomial functions. To introduce a hybrid nature into the proposed method, three intra-step points are chosen inside the block interval, and the development of the present method involves specifying the corresponding three parameters. The derivation of the method is detailed below:
- Step 1:
The unknown coefficients in (2) are determined by imposing the following conditions:
- (i)
- (ii)
- (iii)
Hence, a system of nine equations in nine unknowns is obtained. This system can be solved using a computer algebra system (CAS). After substituting the obtained coefficients into (2), we obtain the continuous approximation (3).
- Step 2:
In order to obtain optimized values for the two free parameters, we evaluate expression (3) at the end point and at the midpoint of the block interval. This allows us to approximate the true solution at the final point and at the midpoint of the interval in terms of the two parameters, as shown in [15]. These evaluations can be readily obtained through a CAS, although the resulting expressions are quite lengthy, and so they are not presented here. To determine appropriate values of the unknown parameters, the following optimization strategy is employed:
- (i)
By expanding the formulas for these two approximations in Taylor series, we obtain their local truncation errors, which are given by (4) and (5).
- (ii)
By setting the leading terms of the truncation errors in (4) and (5) equal to zero, we arrive at a system of two nonlinear equations in the two unknown parameters.
- (iii)
The implicit system of equations above represents two curves in the parameter plane, exhibiting symmetry with respect to the diagonal. A unique solution satisfying the required condition is obtained.
Substituting the optimal values of the parameters into expressions (4) and (5), the leading error terms vanish by construction. Note that, by using the optimized values, we gain at least one order of accuracy in the formulas that approximate the solution at the final point and at the midpoint of the block. A toy illustration of this optimization strategy, applied to a simple one-parameter method, is sketched at the end of this derivation.
- Step 3:
Finally, we require approximations of the true solution at the remaining intra-step points. These are obtained by substituting the optimized values of the parameters into the expression in (3). The proposed hybrid method thus provides four approximations of the true solution per block. The coefficients of each of the formulas of the block method are listed in Table 1.
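As a toy illustration of the optimization strategy used in Step 2, the following SymPy sketch applies the same idea, namely Taylor-expanding the local truncation error and annihilating its leading term, to the simple one-parameter theta-method. It is not the paper's two-parameter nonlinear system; the symbols and the test formula are illustrative assumptions.

```python
# Toy illustration of the optimization strategy of Step 2 (NOT the paper's
# two-parameter system): for the one-parameter theta-method
#   y_{n+1} = y_n + h * ((1 - theta) * f_n + theta * f_{n+1}),
# expand the local truncation error and annihilate its leading term.
import sympy as sp

h, theta = sp.symbols('h theta')
y0, y1, y2, y3 = sp.symbols('y0 y1 y2 y3')   # y, y', y'', y''' at t_n

# Taylor data around t_n (truncated at the order needed here):
y_exact_next = y0 + h*y1 + h**2/2*y2 + h**3/6*y3   # y(t_n + h)
f_n, f_n1 = y1, y1 + h*y2 + h**2/2*y3              # y'(t_n), y'(t_n + h)

# Theta-method increment and its local truncation error:
y_num_next = y0 + h*((1 - theta)*f_n + theta*f_n1)
lte = sp.expand(y_exact_next - y_num_next)

leading = lte.coeff(h, 2)                    # coefficient of the h**2 error term
print(leading)                               # (1/2 - theta)*y2
print(sp.solve(sp.Eq(leading, 0), theta))    # [1/2]  -> trapezoidal rule, order 2
```

Setting the leading coefficient to zero raises the order of the formula from one to two, mirroring the gain of accuracy obtained above for the proposed method.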
This proposed method is implicit in nature, requiring the solution of a system of four equations at each iteration. The approach commonly employed to solve this system is Newton’s method or one of its variants, as outlined in [12].
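A minimal sketch of how such an implicit system can be tackled with Newton's method is given below. The residual used here is a simple backward Euler placeholder, not the paper's system of four coupled formulas, and all names are illustrative.

```python
# Generic Newton iteration for a nonlinear system F(Y) = 0 arising from an
# implicit formula; the Jacobian is approximated by finite differences.
import numpy as np

def newton_solve(F, Y0, tol=1e-12, max_iter=25, eps=1e-8):
    Y = np.asarray(Y0, dtype=float)
    for _ in range(max_iter):
        R = F(Y)
        if np.linalg.norm(R) < tol:
            break
        n = Y.size
        J = np.empty((n, n))
        for j in range(n):                 # finite-difference Jacobian column by column
            Yp = Y.copy()
            Yp[j] += eps
            J[:, j] = (F(Yp) - R) / eps
        Y = Y - np.linalg.solve(J, R)      # Newton update
    return Y

# Placeholder residual: one backward Euler step for y' = -50 y, with y_n = 1, h = 0.1.
h, yn = 0.1, 1.0
F = lambda Y: Y - yn - h * (-50.0 * Y)
print(newton_solve(F, np.array([yn])))     # ~ 1 / (1 + 50 h) = 0.1666...
```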
4. Adaptive Step-Size Formulation
The proposed method (8) can be transformed into an adaptive step-size algorithm by adopting a procedure similar to the one described in [11]. This procedure involves executing two methods of different orders, denoted as p and q with q < p, simultaneously. The lower-order formula is used to estimate the local error at each integration step, while the higher-order method advances the solution. The careful selection of this pair is crucial: to keep the computational cost at the same level, the lower-order formula is chosen so that it does not require new function evaluations, and hence there is no increase in the cost in terms of the number of function evaluations.
The detailed procedure is as follows:
In the present case, an implicit block formula of lower order, with known local truncation error, is considered. This lower-order formula is first used to compute the approximate solution of the system at the final point of the block interval. It is worth noting that this computation does not involve any additional computational effort in terms of new function evaluations. We have established that the local error obtained for this lower-order approximation is the one given in (18), in which the continuous (exact) solution of the problem appears.
Secondly, the solution of the problem is also approximated over the same block interval with the proposed higher-order block method of interest. The local error obtained in this case for the numerical solution at the final point is given by (19). In the next step, the unknown exact solution can be eliminated by taking the difference of Equations (18) and (19), and the resulting expression is the computable quantity given in (20). For sufficiently small values of h, the lower-order error term dominates in expression (20); hence, (20) provides a computable estimate of the local error of the lower-order formula. The integration is then advanced with the more accurate higher-order solution. Therefore, the procedure uses the lower-order formula to estimate the local error at each step of the integration, while the higher-order method advances the computation.
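In generic notation, the elimination argument can be sketched as follows; the symbols $\hat{y}_{n+k}$, $y_{n+k}$ and the error constants $C_q$, $C_p$ are introduced here only for illustration, since Equations (18)–(20) themselves are not reproduced.

```latex
% Generic sketch of the differencing argument; all constants are illustrative.
\begin{aligned}
y(t_{n+k}) - \hat{y}_{n+k} &= C_q\, h^{q+1}\, y^{(q+1)}(t_n) + \mathcal{O}(h^{q+2}),\\
y(t_{n+k}) - y_{n+k}       &= C_p\, h^{p+1}\, y^{(p+1)}(t_n) + \mathcal{O}(h^{p+2}), \qquad q < p,\\
\mathrm{EST} \;=\; y_{n+k} - \hat{y}_{n+k} &= C_q\, h^{q+1}\, y^{(q+1)}(t_n) + \mathcal{O}(h^{q+2}),
\end{aligned}
```

so that, for small h, the computable difference EST reproduces the dominant part of the lower-order local error, exactly as stated above.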
This local error estimate helps in predicting a suitable step size for the forthcoming block. The solver keeps changing the step size until the magnitude of the local error estimate is less than the predefined user tolerance
. The value of the new step size is given by
$$h_{\text{new}} = \delta\, h \left( \frac{\text{Tol}}{\lVert \text{EST} \rVert} \right)^{1/(q+1)}, \qquad (21)$$
where q denotes the order of the lower-order formula, Tol is the user tolerance, EST is the local error estimate, and the safety factor $\delta$, lying in $(0,1)$, is introduced to avoid the failure of the integration steps.
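As a minimal sketch, the update (21) can be coded in a few lines; the function name and the default safety factor 0.9 are illustrative assumptions, not values prescribed by the paper.

```python
# Hypothetical helper implementing the standard step-size update (cf. (21)).
def new_step_size(h, est, tol, q, delta=0.9):
    """Return the next step size from the local error estimate `est`
    of a formula of order q, with safety factor delta in (0, 1)."""
    return delta * h * (tol / abs(est)) ** (1.0 / (q + 1))
```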
To initiate the strategy, an initial step size is needed, which can be selected using existing techniques from the literature (see [9,16]), or simply by choosing a very small starting step size as described in Sedgwick [17]. Subsequently, the algorithm adjusts this value as required, based on the chosen step-size modification strategy. The entire process yields an embedded-type approach, which can be summarized in the following steps (a schematic driver is sketched after the list):
Firstly, assume a step size and solve the IVP (1) numerically with the developed higher-order method, obtaining a first approximate solution.
Secondly, use the lower-order formula to obtain a different approximate solution, without any extra function evaluations.
As a next step, obtain the local error estimate as the difference of the two approximate solutions computed in Step 1 and Step 2.
Here, the user has to predefine the tolerance level against which this estimate is compared:
- (i)
If the local error estimate is below the predefined tolerance, the step is accepted: the obtained values are kept and the computations proceed, and the step size for the next block may be enlarged, which makes the computation more efficient and allows the final stage to be reached in fewer steps.
- (ii)
If the local error estimate exceeds the tolerance, the current step size is revised to a new step size, as in (21), and the step is repeated until the estimate meets the tolerance; the computations are then carried out using the new step size.
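The whole embedded strategy can be condensed into a schematic driver. The sketch below uses a simple Euler/Heun pair (orders q = 1 and p = 2) as a stand-in for the actual pair of block formulas, so it illustrates only the control structure of the procedure, not the proposed method itself.

```python
# Schematic adaptive driver for the embedded strategy summarized above.
# The Euler/Heun pair is only a stand-in for the paper's block formulas.
def adaptive_drive(f, t0, tf, y0, h0, tol, delta=0.9, q=1):
    t, y, h = t0, y0, h0
    while t < tf - 1e-14:
        h = min(h, tf - t)                            # do not step past the end point
        k1 = f(t, y)
        y_low = y + h * k1                            # lower order (Euler, q = 1)
        y_high = y + h / 2 * (k1 + f(t + h, y_low))   # higher order (Heun, p = 2)
        est = abs(y_high - y_low)                     # computable local error estimate
        if est <= tol:
            t, y = t + h, y_high                      # accept and advance with y_high
        # accepted or not, propose the next step size as in (21)
        h = delta * h * (tol / max(est, 1e-16)) ** (1.0 / (q + 1))
    return t, y

# Example: y' = -y, y(0) = 1 on [0, 2]; the exact value is exp(-2) ≈ 0.135335.
print(adaptive_drive(lambda t, y: -y, 0.0, 2.0, 1.0, h0=0.1, tol=1e-6))
```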
7. Conclusions
This paper introduces an optimized hybrid technique specially crafted for the integration of first-order initial value problems (IVPs), employing three intra-step points. To enhance accuracy, the method incorporates second-order derivatives in its formulas. The approach combines two strategies, namely the hybrid and block approaches. The hybrid nature enables the method to overcome the Dahlquist barriers, while the block structure evaluates the numerical solution at several grid points, including off-step points, simultaneously, yielding computational efficiency. The intra-step point values are computed using an optimization strategy, resulting in an accurate scheme. Moreover, an enhanced version of the proposed approach is presented, incorporating an adaptive step-size capability that allows the method to dynamically adjust the step size as required. Numerical experiments are conducted to evaluate the new scheme’s performance, revealing it to be a promising and efficient option for the problems addressed.