
Strong convergence of an inertial extrapolation method for a split system of minimization problems

  • Anteneh Getachew Gebrie and Rabian Wangkeeree
From the journal Demonstratio Mathematica

Abstract

In this article, we propose an inertial extrapolation-type algorithm for solving a split system of minimization problems: find a point that is a common minimizer of a finite family of proper, lower semicontinuous convex functions and whose image under a linear transformation is also a common minimizer of another finite family of proper, lower semicontinuous convex functions. The strong convergence theorem is given in such a way that the step sizes of our algorithm are selected without the need for any prior information about the operator norm. The results obtained in this article improve and extend many recent ones in the literature. Finally, we give one numerical example to demonstrate the efficiency and implementation of our proposed algorithm.

MSC 2010: 65K10; 90C25; 49J52; 47H09

1 Introduction

Throughout this article, unless otherwise stated, we assume that $H_1$, $H_2$ and $H$ are real Hilbert spaces, $A : H_1 \to H_2$ is a nonzero bounded linear operator, and $I$ denotes the identity operator on a Hilbert space.

Assume $C_i$ $(i = 1, \ldots, N)$ and $Q_j$ $(j = 1, \ldots, M)$ are nonempty closed convex subsets of $H_1$ and $H_2$, respectively. The multiple-set split feasibility problem (MSSFP), which was introduced by Censor et al. [1], is formulated as finding a point

(1) $\bar{x} \in \bigcap_{i=1}^{N} C_i \quad \text{such that} \quad A\bar{x} \in \bigcap_{j=1}^{M} Q_j.$

In particular, if $N = M = 1$, then the MSSFP (1) reduces to the problem known as the split feasibility problem (SFP), which was first introduced by Censor and Elfving [2] for modeling inverse problems in finite-dimensional Hilbert spaces. The SFP and the MSSFP arise in many fields in the real world, and numerous methods have been proposed to solve the SFP, see for example [3,4,5] and references therein, and the MSSFP, see for example [6,7,8] and references therein. Moreover, there are some studies of fixed point problems in the framework of the MSSFP, see for example [9,10,11,12,13,14].

One of the most important problems in optimization theory and nonlinear analysis is the problem of approximating a solution of the unconstrained minimization problem. This can be stated as follows. Find $\bar{x} \in H$ such that

(2) $f(\bar{x}) = \min_{x \in H} f(x),$

where $f : H \to \mathbb{R} \cup \{+\infty\}$ is a proper, lower semicontinuous convex function. Our goal is to introduce a strongly convergent iterative algorithm with inertial effect solving the MSSFP (1), where the $C_i$ and $Q_j$ are the solution sets of minimization problems of the form (2) for proper, lower semicontinuous convex functions $f_i$ and $g_j$, respectively. We denote by $\operatorname{argmin} f$ the set of all minimizers of $f$ on $H$, i.e.,

$\operatorname{argmin} f = \{\bar{x} \in H : f(\bar{x}) \le f(x), \ \forall x \in H\} = \{\bar{x} \in H : f(\bar{x}) = \min_{x \in H} f(x)\}.$

If $f$ is a smooth function (for instance, twice continuously differentiable), one of the numerical methods for finding approximate solutions of (2) is Newton's method, see [15,16]. An analogous method for solving (2), with better properties in the non-smooth case, is based on the notion of the proximal mapping introduced by Moreau [17]: the proximal operator of the function $f$ with scaling parameter $\lambda > 0$ is the mapping $\operatorname{prox}_{\lambda f} : H \to H$ given by

$\operatorname{prox}_{\lambda f}(x) = \operatorname{argmin}_{y \in H} \left\{ f(y) + \frac{1}{2\lambda}\|x - y\|^2 \right\}.$

The minimizers of $f$ (the points solving problem (2)) are precisely the fixed points of the proximal operator of $f$. Thus, solving the optimization problem (2) can be interpreted as finding fixed points of the proximal operator of $f$, and proximal operators are firmly nonexpansive. This immediately suggests the most popular method,

$x_{n+1} = \operatorname{prox}_{\lambda f}(x_n),$

which is called the proximal minimization or the proximal point algorithm introduced by Martinet [18,19] and later by Rockafellar [20].
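To make the iteration concrete, the following is a minimal Python sketch (ours, not from the paper) of the proximal point algorithm applied to $f(x) = \|x\|_1$, whose proximal operator is the well-known soft-thresholding map; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of f(x) = ||x||_1 with scaling parameter lam
    (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_point(x0, lam=0.5, n_iter=50):
    """Proximal point algorithm: x_{n+1} = prox_{lam f}(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = prox_l1(x, lam)
    return x

print(proximal_point([3.0, -1.2, 0.4]))  # iterates approach 0, the minimizer of ||x||_1
```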

Let $f : H_1 \to \mathbb{R} \cup \{+\infty\}$ and $g : H_2 \to \mathbb{R} \cup \{+\infty\}$ be two proper, convex, lower semicontinuous functions, and let $g_\lambda$ be the Moreau–Yosida approximation [17] of the function $g$ of parameter $\lambda$, given by $g_\lambda(y) = \min_{u \in H_2} \left\{ g(u) + \frac{1}{2\lambda}\|y - u\|^2 \right\}$. In [21], Moudafi and Thakur introduced a weakly convergent algorithm solving the following minimization problem:

(3) $\min_{x \in H_1} \{ f(x) + g_\lambda(Ax) \},$

in the case $\operatorname{argmin} f \cap A^{-1}(\operatorname{argmin} g) \neq \emptyset$. It should be noted that (3) is equivalent to the split minimization problem (SMP): finding a point $\bar{x} \in H_1$ with the property

(4) $\bar{x} \in \operatorname{argmin} f \quad \text{such that} \quad A\bar{x} \in \operatorname{argmin} g.$

The operator norm is a global invariant and is often difficult to estimate; see, for example, the theorem of Hendrickx and Olshevsky in [22]. However, for several split inverse problem types in the literature, the implementation of the proposed iterative method requires prior knowledge of the operator norm to determine the step sizes. To overcome this difficulty, López et al. [4] introduced a new way of selecting the step sizes for solving the SFP such that knowledge of the operator norm is not necessary. Moudafi and Thakur [21] used the idea of López et al. [4] to introduce a new way of selecting the step sizes, given by

$\theta_{\lambda\mu}(x) = \sqrt{\|A^*(I - \operatorname{prox}_{\lambda g})Ax\|^2 + \|(I - \operatorname{prox}_{\lambda\mu f})x\|^2}$

with $h_\lambda(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda g})Ax\|^2$ and $l_{\lambda\mu}(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda\mu f})x\|^2$, such that the implementation of the iterative algorithm they proposed for solving (4) does not need any prior information about the operator norm. They proposed the following split proximal algorithm, which generates a sequence from an initial point $x_1 \in H_1$: assuming $x_n$ has been constructed and $\theta_{\lambda\mu_n}(x_n) \neq 0$, compute $x_{n+1}$ via the rule

(5) $x_{n+1} = \operatorname{prox}_{\lambda\mu_n f}\left(x_n - \mu_n A^*(I - \operatorname{prox}_{\lambda g})Ax_n\right),$

where the step size is $\mu_n = \rho_n \frac{h_\lambda(x_n) + l_{\lambda\mu_n}(x_n)}{\theta_{\lambda\mu_n}^2(x_n)}$ with $0 < \rho_n < 4$. If $\theta_{\lambda\mu_n}(x_n) = 0$, then $x_{n+1} = x_n$ is a solution of the SMP (4) and the iterative process stops; otherwise, we set $n \leftarrow n + 1$ and return to (5). Based on Moudafi and Thakur [21], many iterative algorithms have been proposed for solving the SMP (4); see, for example, those by Abbas et al. in [23], Shehu et al. in [24], Shehu and Iyiola in [25,26,27,28] and Shehu and Ogbuisi in [29].
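As an illustration of how the norm-free step size works, here is a hedged Python sketch (ours) of one iteration of the split proximal algorithm (5) for $H_1 = \mathbb{R}^p$, $H_2 = \mathbb{R}^q$; `prox_f` and `prox_g` are placeholder callables for the proximal maps, and for simplicity the prox scaling is kept fixed, whereas (5) uses $\operatorname{prox}_{\lambda\mu_n f}$ with a varying scaling.

```python
import numpy as np

def split_proximal_step(x, A, prox_f, prox_g, rho=1.0):
    """One step of the Moudafi-Thakur split proximal iteration (sketch)."""
    res_g = A @ x - prox_g(A @ x)                # (I - prox_{lam g})Ax
    res_f = x - prox_f(x)                        # (I - prox_{lam f})x
    grad_h = A.T @ res_g                         # A*(I - prox_{lam g})Ax
    theta_sq = grad_h @ grad_h + res_f @ res_f   # theta(x)^2: no operator norm needed
    if theta_sq == 0.0:
        return x                                 # x already solves the SMP (4)
    h = 0.5 * (res_g @ res_g)
    l = 0.5 * (res_f @ res_f)
    mu = rho * (h + l) / theta_sq                # step size, 0 < rho < 4
    return prox_f(x - mu * grad_h)               # the update rule (5)
```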

An inertial-type algorithm is a two-step iterative method in which the next iterate is defined by making use of the previous two iterates. An inertial extrapolation-type algorithm, i.e., an algorithm incorporating an inertial term, was first introduced by Polyak [30] as an acceleration process for solving a smooth convex minimization problem. It is well known that combining an algorithm with an inertial term speeds up, or accelerates, the rate of convergence of the sequence generated by the algorithm. Consequently, a lot of research interest is now devoted to inertial extrapolation-type algorithms, see [31,32,33,34] and references therein. Very recently, Shehu and Iyiola [25] proposed an inertial extrapolation-type algorithm for solving the SMP (4) using the setting

  1. $l(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda f})x\|^2$, $\nabla l(x) = (I - \operatorname{prox}_{\lambda f})x$,

  2. $h(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda g})Ax\|^2$, $\nabla h(x) = A^*(I - \operatorname{prox}_{\lambda g})Ax$ and $\theta(x) = \|\nabla l(x)\| + \|\nabla h(x)\|$.

They obtained the following weak convergence result.

Theorem 1.1

Suppose the real parameters $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\rho_n\}$ satisfy the following conditions:

(c1) $\{\alpha_n\}$ is a non-increasing sequence and $0 < \delta \le \alpha_n \le \frac{1}{2}$;

(c2) $\{\beta_n\}$ is a non-increasing sequence and $0 \le \beta_n \le \frac{1 - \kappa}{3} < \frac{1}{3}$ for some $\kappa \in (0, 1)$;

(c3) $0 < \rho_n < 4$ and $\liminf_{n\to\infty} \rho_n(4 - \rho_n) > 0$.

Then the sequence $\{x_n\}$ generated by the iterative algorithm

(6) $x_0, x_1 \in H_1, \quad z_n = x_n + \beta_n(x_n - x_{n-1}), \quad y_n = z_n - \rho_n \frac{h(z_n) + l(z_n)}{\theta^2(z_n)}\left(\nabla l(z_n) + \nabla h(z_n)\right), \quad x_{n+1} = (1 - \alpha_n)z_n + \alpha_n y_n,$

converges weakly to a point $\bar{x}$ solving the SMP (4).

Note that the proximal operator is a natural extension of the notion of the metric projection onto a closed convex set, i.e., $\operatorname{prox}_{\lambda f} = P_Q$ when $f = \delta_Q$ ($f$ is the indicator function of a closed convex subset $Q$ of $H$), and this perspective suggests various properties that we expect proximal operators to obey. However, there is a property that holds for projection operators but not for proximal operators in general. For example, consider the function $h$ defined on $H_1$ by $h(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda f})Ax\|^2$, where $H_1$ and $H_2$ are real Hilbert spaces and $f : H_2 \to \mathbb{R} \cup \{+\infty\}$ is a proper lower semicontinuous convex function. The function $h$ is not differentiable at $x = \pm\lambda$ in the case $H_1 = H_2 = \mathbb{R}$, $A = I$ and $f(x) = |x|$, see [26]. However, if $f$ is the indicator function of a closed convex subset $Q$ of $H_2$ ($f = \delta_Q$), then $h$ is convex and weakly lower semicontinuous on $H_1$, and $h$ is always differentiable with $\nabla h(x) = A^*(I - \operatorname{prox}_{\lambda f})Ax$, see [35].
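For the reader's convenience, the one-dimensional case just mentioned can be worked out explicitly (this computation is ours, but it follows directly from the definitions above). For $H_1 = H_2 = \mathbb{R}$, $A = I$ and $f(x) = |x|$, the proximal map is the soft-thresholding operator, so

$(I - \operatorname{prox}_{\lambda f})x = \begin{cases} x, & |x| \le \lambda, \\ \lambda\operatorname{sign}(x), & |x| > \lambda, \end{cases} \qquad h(x) = \frac{1}{2}|(I - \operatorname{prox}_{\lambda f})x|^2 = \begin{cases} \frac{x^2}{2}, & |x| \le \lambda, \\ \frac{\lambda^2}{2}, & |x| > \lambda, \end{cases}$

and at $x = \lambda$ the left derivative of $h$ is $\lambda$ while the right derivative is $0$, so $h$ indeed fails to be differentiable at $\pm\lambda$.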

Motivated by the above theoretical views, and inspired by the results in [1,21,25], in this article we introduce a strong convergence theorem for an inertial extrapolation-type algorithm that incorporates a proximal operator, a viscosity method and an inertial term to solve the so-called split system of minimization problems (SSMP): the task of finding a point $\bar{x} \in H_1$ with the property

(7) $\bar{x} \in \bigcap_{i \in \Phi}(\operatorname{argmin} f_i) \quad \text{such that} \quad A\bar{x} \in \bigcap_{j \in \Psi}(\operatorname{argmin} g_j),$

where $\Phi = \{1, \ldots, N\}$, $\Psi = \{1, \ldots, M\}$, and $f_i : H_1 \to \mathbb{R} \cup \{+\infty\}$ and $g_j : H_2 \to \mathbb{R} \cup \{+\infty\}$ are proper, lower semicontinuous convex functions for $i \in \Phi$, $j \in \Psi$.

Let $\Gamma$ be the solution set of the SSMP (7), i.e.,

$\Gamma = \left\{ \bar{x} \in \bigcap_{i \in \Phi}(\operatorname{argmin} f_i) : A\bar{x} \in \bigcap_{j \in \Psi}(\operatorname{argmin} g_j) \right\}.$

Note that if $f_i = f$ for all $i \in \Phi$ and $g_j = g$ for all $j \in \Psi$, then problem (7) reduces to the SMP (4), which is the problem considered in [21,23,24,25,26,27,28,29]. The aims of this study are twofold: to improve the weak convergence result for the inertial extrapolation-type algorithm proposed by Shehu and Iyiola [25] to a strong convergence result for the approximation of a solution of the SMP (4), and to accelerate and improve the results in [9,10] for solving the SSMP (7).

This article is organized in the following way. In Section 2, we collect some basic and useful definitions, lemmata, and theorems for further study. In Section 3, we propose an iterative method for the SSMP and analyze the strong convergence theorem of the proposed iterative method. In Section 4, we give a numerical example to discuss the performance of the proposed method. Finally, we give some conclusions.

2 Preliminaries

In this section, in order to prove our results, we collect some facts and tools in a real Hilbert space $H$. The symbols "$\rightharpoonup$" and "$\to$" denote weak and strong convergence, respectively. Let $C$ be a nonempty closed convex subset of $H$. The metric projection onto $C$ is the mapping $P_C : H \to C$ defined by

$P_C(x) = \operatorname{argmin}\{\|y - x\| : y \in C\}, \quad x \in H.$
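As a small illustration (ours, not from the paper), the metric projection onto the closed unit ball has a simple closed form, and the characterization of $P_C$ given in Lemma 2.1 below can be checked numerically:

```python
import numpy as np

def project_ball(x, r=1.0):
    """Metric projection P_C(x) onto C = {y : ||y|| <= r}."""
    norm_x = np.linalg.norm(x)
    return x if norm_x <= r else (r / norm_x) * x

x = np.array([3.0, 4.0])          # ||x|| = 5
z = project_ball(x, r=1.0)        # z = x / 5 = (0.6, 0.8)
y = np.array([0.6, -0.8])         # an arbitrary point of C
print(np.dot(x - z, y - z) <= 0)  # True: <x - z, y - z> <= 0, as in Lemma 2.1
```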

Lemma 2.1

Let $C$ be a closed convex subset of $H$. Given $x \in H$ and a point $z \in C$, then $z = P_C(x)$ if and only if $\langle x - z, y - z \rangle \le 0$ for all $y \in C$.

Definition 2.1

Let $T : H \to H$. Then,

  1. $T$ is $L$-Lipschitz if there exists $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in H$. If $L \in (0, 1)$, then we call $T$ a contraction with constant $L$. If $L = 1$, then $T$ is called a nonexpansive mapping.

  2. $T$ is firmly nonexpansive if

    $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2, \quad \text{for all } x, y \in H,$

    which is equivalent to $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$ for all $x, y \in H$.

    If $T$ is firmly nonexpansive, then $I - T$ is also firmly nonexpansive.

  3. $T$ is strongly monotone if there exists a constant $\alpha > 0$ such that

    $\langle Tx - Ty, x - y \rangle \ge \alpha\|x - y\|^2, \quad \text{for all } x, y \in H.$

  4. $T$ is inverse strongly monotone if there exists a constant $\alpha > 0$ such that

$\langle Tx - Ty, x - y \rangle \ge \alpha\|Tx - Ty\|^2, \quad \text{for all } x, y \in H.$

Lemma 2.2

For a real Hilbert space $H$, we have

  1. $\|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\langle x, y \rangle$ for all $x, y \in H$;

  2. $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$ for all $x, y \in H$;

  3. $\langle x, y \rangle = \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y\|^2 - \frac{1}{2}\|x - y\|^2$ for all $x, y \in H$.

A set-valued mapping $T : H \to 2^H$ is called monotone if, for all $x, y \in H$, $z \in Tx$ and $w \in Ty$ imply $\langle x - y, z - w \rangle \ge 0$. A monotone mapping $T : H \to 2^H$ is maximal if its graph $G(T) = \{(x, z) : z \in T(x), x \in H\}$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for all $(x, z) \in H \times H$, $\langle x - y, z - w \rangle \ge 0$ for all $(y, w) \in G(T)$ implies $z \in Tx$. If $T : H \to 2^H$ is a maximal monotone set-valued mapping, then we define the resolvent operator $J_\lambda^T$ associated with $T$ and $\lambda > 0$ as follows:

(8) $J_\lambda^T(x) = (I + \lambda T)^{-1}(x), \quad x \in H.$

It is well known that $J_\lambda^T$ is single-valued, nonexpansive (see, for example, [36,37]) and 1-inverse strongly monotone (firmly nonexpansive). Moreover, $0 \in T(\bar{x})$ if and only if $\bar{x}$ is a fixed point of the resolvent operator $J_\lambda^T$ for all $\lambda > 0$; see [38].

Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous convex function. The domain of $f$ is denoted by $\operatorname{dom} f$; that is, $\operatorname{dom} f = \{x \in H : f(x) < +\infty\}$. We denote the subdifferential of $f$ at $x \in H$ by $\partial f(x)$, given by $\partial f(x) = \{y \in H : f(z) \ge f(x) + \langle y, z - x \rangle, \ \forall z \in H\}$. If $\partial f(x) \neq \emptyset$, $f$ is said to be subdifferentiable at $x$. It is notable that a point $\bar{x} \in H$ minimizes $f$ if and only if $0 \in \partial f(\bar{x})$. It is a classical result in operator theory that the subdifferential $\partial f$ is a maximal monotone operator and $\operatorname{prox}_{\lambda f} = (I + \lambda \partial f)^{-1}$; namely, for $x \in H$ we have the following equivalence between the subdifferential and the proximal operator:

$\operatorname{prox}_{\lambda f}(x) = y \iff x - y \in \lambda \partial f(y).$

Consequently, a point $\bar{x}$ minimizes $f$ if and only if $\operatorname{prox}_{\lambda f}(\bar{x}) = \bar{x}$. Hence, the convex minimization problem (2) can be formulated as finding a fixed point of the proximal operator.
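Spelling out this equivalence (a standard one-line derivation, included here for completeness): $y = \operatorname{prox}_{\lambda f}(x)$ means that $y$ minimizes $u \mapsto f(u) + \frac{1}{2\lambda}\|x - u\|^2$, which by the optimality condition above is

$0 \in \partial\left(f + \frac{1}{2\lambda}\|x - \cdot\|^2\right)(y) = \partial f(y) + \frac{1}{\lambda}(y - x) \iff x - y \in \lambda\,\partial f(y).$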

Lemma 2.3

[39] Let $\{c_n\}$ and $\{\gamma_n\}$ be sequences of nonnegative real numbers and $\{\beta_n\}$ a sequence of real numbers such that

$c_{n+1} \le (1 - \alpha_n)c_n + \beta_n + \gamma_n, \quad n \ge 1,$

where $0 < \alpha_n < 1$ and $\sum_{n=1}^{\infty} \gamma_n < \infty$.

  1. If $\beta_n \le \alpha_n M$ for some $M \ge 0$, then $\{c_n\}$ is a bounded sequence.

  2. If $\sum_{n=1}^{\infty} \alpha_n = \infty$ and $\limsup_{n\to\infty} \frac{\beta_n}{\alpha_n} \le 0$, then $c_n \to 0$ as $n \to \infty$.

Definition 2.2

Let $\{\Gamma_n\}$ be a real sequence. We say that $\{\Gamma_n\}$ decreases at infinity if there exists $n_0 \in \mathbb{N}$ such that $\Gamma_{n+1} \le \Gamma_n$ for $n \ge n_0$. In other words, the sequence $\{\Gamma_n\}$ does not decrease at infinity if there exists a subsequence $\{\Gamma_{n_t}\}_{t \ge 1}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_t} < \Gamma_{n_t + 1}$ for all $t \ge 1$.

Lemma 2.4

[40] Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity. Also consider the sequence of integers $\{\varphi(n)\}_{n \ge n_0}$ defined by $\varphi(n) = \max\{k \in \mathbb{N} : k \le n, \ \Gamma_k \le \Gamma_{k+1}\}$. Then $\{\varphi(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying $\lim_{n\to\infty} \varphi(n) = \infty$, and for all $n \ge n_0$, the following two estimates hold:

$\Gamma_{\varphi(n)} \le \Gamma_{\varphi(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\varphi(n)+1}.$

Let $D$ be a nonempty closed convex subset of $H$. We say that a bifunction $h : D \times D \to \mathbb{R}$ satisfies Condition CO on $D$ if the following assumptions hold:

(A1) $h(u, u) = 0$ for all $u \in D$;

(A2) $h$ is monotone, i.e., $h(u, v) + h(v, u) \le 0$ for all $u, v \in D$;

(A3) for each $u, v, w \in D$, $\limsup_{\alpha \downarrow 0} h(\alpha w + (1 - \alpha)u, v) \le h(u, v)$;

(A4) $h(u, \cdot)$ is convex and lower semicontinuous on $D$ for each $u \in D$.

Lemma 2.5

[41] Let $D$ be a nonempty closed convex subset of $H$ and let the bifunction $h : D \times D \to \mathbb{R}$ satisfy Condition CO on $D$. Then, for each $r > 0$ and $u \in H$, there exists $w \in D$ such that

$h(w, v) + \frac{1}{r}\langle v - w, w - u \rangle \ge 0, \quad \text{for all } v \in D.$

The following lemma was given by Combettes and Hirstoaga in [42].

Lemma 2.6

[42] If $D$ is a nonempty closed convex subset of $H$ and $h : D \times D \to \mathbb{R}$ is a bifunction satisfying Condition CO on $D$, then for each $r > 0$ and $u \in H$, the mapping $T_r^h : H \to D$ (called the resolvent of $h$), given by

$T_r^h(u) = \left\{ w \in D : h(w, v) + \frac{1}{r}\langle v - w, w - u \rangle \ge 0, \ \forall v \in D \right\},$

satisfies the following conditions:

  1. $T_r^h$ is single-valued and firmly nonexpansive;

  2. $\operatorname{Fix}(T_r^h) = \{\bar{x} \in D : h(\bar{x}, y) \ge 0, \ \forall y \in D\}$, where $\operatorname{Fix}(T_r^h)$ is the set of fixed points of $T_r^h$;

  3. $\{\bar{x} \in D : h(\bar{x}, y) \ge 0, \ \forall y \in D\}$ is closed and convex.

3 Main result

First we extend the settings introduced by Moudafi and Thakur [21]. Let $\lambda > 0$. Then, for $x \in H_1$,

  (I) for each $i \in \Phi$, define $l_i(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda f_i})x\|^2$ and $\nabla l_i(x) = (I - \operatorname{prox}_{\lambda f_i})x$;

  (II) $l(x) = l_{i_x}(x)$ and $\nabla l(x) = \nabla l_{i_x}(x)$, where $i_x \in \operatorname{argmax}\{l_i(x) : i \in \Phi\}$, i.e., $l(x) = \max\{l_i(x) : i \in \Phi\}$;

  (III) for each $j \in \Psi$, define $h_j(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda g_j})Ax\|^2$ and $\nabla h_j(x) = A^*(I - \operatorname{prox}_{\lambda g_j})Ax$;

  (IV) for each $j \in \Psi$, define $\theta_j(x) = \max\{\|\nabla h_j(x)\|, \|\nabla l(x)\|\}$.

Remark

From (I)–(IV) given above, $l_i(x) \le l_{i_x}(x) = l(x)$, $l_i(x) = \frac{1}{2}\|\nabla l_i(x)\|^2$, $\|\nabla l(x)\| \le \theta_j(x)$ and $\|\nabla h_j(x)\| \le \theta_j(x)$ for all $i \in \Phi$ and all $j \in \Psi$.
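A hedged Python sketch (ours) of the settings (I)–(IV) for $H_1 = \mathbb{R}^p$, $H_2 = \mathbb{R}^q$ may help the reader; `prox_fs` and `prox_gs` are assumed lists of proximal-operator callables for the $f_i$ and $g_j$, with the scaling $\lambda$ absorbed into them.

```python
import numpy as np

def settings(x, A, prox_fs, prox_gs):
    """Compute l, grad_l, h_j, grad_h_j, theta_j of (I)-(IV) at x (sketch)."""
    res_f = [x - pf(x) for pf in prox_fs]            # grad l_i(x) = (I - prox_{lam f_i})x
    l_vals = [0.5 * (r @ r) for r in res_f]          # l_i(x) = ||grad l_i(x)||^2 / 2
    ix = int(np.argmax(l_vals))                      # i_x attains max_i l_i(x)
    l, grad_l = l_vals[ix], res_f[ix]                # setting (II)
    hs, grad_hs, thetas = [], [], []
    for pg in prox_gs:
        res_g = A @ x - pg(A @ x)                    # (I - prox_{lam g_j})Ax
        hs.append(0.5 * (res_g @ res_g))             # h_j(x), setting (III)
        grad_hs.append(A.T @ res_g)                  # grad h_j(x), setting (III)
        thetas.append(max(np.linalg.norm(grad_hs[-1]),
                          np.linalg.norm(grad_l)))   # theta_j(x), setting (IV)
    return l, grad_l, hs, grad_hs, thetas
```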

Consider the parameter sequences satisfying the following conditions.

Assumption 1

Let $\{\alpha_n\}$, $\{\varepsilon_n\}$, $\{\rho_n\}$, $\{\xi_n^{(j)}\}$ $(j \in \Psi)$ be real sequences satisfying the following conditions:

(C1) $0 < \alpha_n < 1$, $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(C2) $\varepsilon_n > 0$ and $\varepsilon_n = o(\alpha_n)$;

(C3) $0 < \xi \le \xi_n^{(j)} \le 1$ and $\sum_{j \in \Psi} \xi_n^{(j)} = 1$ for each $n \ge 1$;

(C4) $0 < \rho_n < 2$ and $\liminf_{n\to\infty} \rho_n(2 - \rho_n) > 0$.

We have plenty of choices for $\alpha_n$ and $\varepsilon_n$ satisfying Conditions (C1) and (C2) of Assumption 1. For example, take $\alpha_n = \frac{1}{2n}$ and $\varepsilon_n = \frac{1}{n^2}$. Then $0 < \alpha_n < 1$, $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$ and $\lim_{n\to\infty} \frac{\varepsilon_n}{\alpha_n} = 0$ (i.e., $\varepsilon_n = o(\alpha_n)$).

Using $l_i$, $\nabla l_i$, $l$, $\nabla l$, $h_j$, $\nabla h_j$, $\theta_j$ given in (I)–(IV) and the parameter sequences given in Assumption 1, we are now in a position to state our inertial extrapolation-type algorithm and prove its strong convergence to a solution of the SSMP (7), assuming that the solution set $\Gamma$ is nonempty.

Algorithm 1

Initialization: Let $V : H_1 \to H_1$ be a contraction mapping with constant $\gamma$. Choose $x_0, x_1 \in H_1$. Take arbitrary real numbers $\beta$ and $\hat{\Theta}$ such that $0 \le \beta < 1$ and $\hat{\Theta} > 0$. Let $\{\alpha_n\}$, $\{\varepsilon_n\}$, $\{\rho_n\}$, $\{\xi_n^{(j)}\}$ $(j \in \Psi)$ be real sequences satisfying Assumption 1.

Step 1. Given the iterates $x_{n-1}$ and $x_n$ $(n \ge 1)$, choose $\beta_n$ such that $0 \le \beta_n \le \bar{\beta}_n$, where

$\bar{\beta}_n = \begin{cases} \min\left\{\beta, \frac{\varepsilon_n}{\|x_{n-1} - x_n\|}\right\}, & \text{if } x_{n-1} \neq x_n, \\ \beta, & \text{otherwise.} \end{cases}$

Step 2. Evaluate $y_n = x_n + \beta_n(x_n - x_{n-1})$.

Step 3. For each $j \in \Psi$, find $l(y_n)$, $h_j(y_n)$, $\theta_j(y_n)$ and $\Psi_n = \{j \in \Psi : \theta_j(y_n) \neq 0\}$.

Step 4. For each $j \in \Psi$, evaluate $\mu_n^{(j)} = \rho_n \frac{h_j(y_n) + l(y_n)}{\Theta_j^2(y_n)}$, where

$\Theta_j(y_n) = \begin{cases} \hat{\Theta}, & \text{if } j \notin \Psi_n, \\ \theta_j(y_n), & \text{if } j \in \Psi_n. \end{cases}$

Step 5. Evaluate

$z_n = y_n - \frac{1}{2}\sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} \left(\nabla h_j(y_n) + \nabla l(y_n)\right).$

Step 6. Evaluate $x_{n+1} = \alpha_n V(y_n) + (1 - \alpha_n) z_n$.

Step 7. Set $n \leftarrow n + 1$ and go to Step 1.
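The following is a hedged, self-contained Python sketch (ours) of Algorithm 1 for $H_1 = \mathbb{R}^p$, $H_2 = \mathbb{R}^q$ with the contraction $V(x) = \gamma x$; the concrete parameter choices $\alpha_n = \frac{1}{2n}$, $\varepsilon_n = \frac{1}{n^2}$, $\rho_n = 0.1$ and the weights $\xi^{(j)}$ satisfy Assumption 1 but are otherwise our own illustrative assumptions.

```python
import numpy as np

def algorithm1(x0, x1, A, prox_fs, prox_gs, gamma=0.5, beta=0.9,
               theta_hat=1.0, n_iter=100, tol=1e-4):
    """Inertial extrapolation algorithm for the SSMP (7) (sketch)."""
    M = len(prox_gs)
    xi = [(j + 1) / (M * (M + 1) / 2) for j in range(M)]  # xi^(j) > 0, sum = 1
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        alpha, eps, rho = 0.5 / n, 1.0 / n**2, 0.1        # satisfy (C1), (C2), (C4)
        # Steps 1-2: inertial extrapolation with beta_n <= bar(beta)_n
        diff = np.linalg.norm(x - x_prev)
        beta_n = min(beta, eps / diff) if diff > 0 else beta
        y = x + beta_n * (x - x_prev)
        # Settings (I)-(II) at y
        res_f = [y - pf(y) for pf in prox_fs]
        l_vals = [0.5 * (r @ r) for r in res_f]
        ix = int(np.argmax(l_vals))
        l, grad_l = l_vals[ix], res_f[ix]
        # Steps 3-5: operator-norm-free step sizes and the z-update
        z = y.copy()
        for j, pg in enumerate(prox_gs):
            res_g = A @ y - pg(A @ y)
            h = 0.5 * (res_g @ res_g)                     # h_j(y_n)
            grad_h = A.T @ res_g                          # grad h_j(y_n)
            theta = max(np.linalg.norm(grad_h), np.linalg.norm(grad_l))
            Theta = theta if theta > 0 else theta_hat     # Theta_j(y_n)
            mu = rho * (h + l) / Theta**2                 # mu_n^(j)
            z -= 0.5 * xi[j] * mu * (grad_h + grad_l)
        # Step 6: viscosity step with V(y) = gamma * y
        x_prev, x = x, alpha * (gamma * y) + (1 - alpha) * z
        if np.linalg.norm(x - x_prev) <= tol:             # simple stopping test
            break
    return x
```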

Remark

From Assumption 1 and Step 1 of Algorithm 1, we have that $\frac{\beta_n}{\alpha_n}\|x_n - x_{n-1}\| \to 0$ as $n \to \infty$. Since $\{\alpha_n\}$ is bounded, we also have $\beta_n\|x_n - x_{n-1}\| \to 0$ as $n \to \infty$.

Note that Step 1 of Algorithm 1 is easily implemented in numerical computation, since the value of $\|x_n - x_{n-1}\|$ is known before choosing $\beta_n$.

Remark

If $\Psi_n = \emptyset$, then

$\max\{\|\nabla h_j(y_n)\|, \|\nabla l(y_n)\|\} = 0 \ \Rightarrow\ \nabla h_j(y_n) = 0 = \nabla l(y_n), \ \text{for all } j \in \Psi, \ \Rightarrow\ \nabla h_j(y_n) = 0 = \nabla l_i(y_n), \ \text{for all } i \in \Phi, j \in \Psi, \ \Rightarrow\ A^*(I - \operatorname{prox}_{\lambda g_j})Ay_n = 0 = (I - \operatorname{prox}_{\lambda f_i})y_n, \ \text{for all } i \in \Phi, j \in \Psi.$

Remark

The solution set $\Gamma$ of problem (7) is a closed convex set, because the set of minimizers of any proper, lower semicontinuous convex function is closed and convex and $A$ is a bounded linear operator. Therefore, the metric projection $P_\Gamma$ is well defined, as we also assume that $\Gamma$ is nonempty.

Lemma 3.1

For the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ generated by Algorithm 1, we have

$\|z_n - \hat{x}\|^2 \le \|y_n - \hat{x}\|^2 + \rho_n(\rho_n - 2)\sum_{j \in \Psi} \xi_n^{(j)} \frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)}$

for all $\hat{x} \in \Gamma$. Moreover, $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ are bounded sequences.

Proof

Let $\hat{x} \in \Gamma$. From the definition of $y_n$, we get

(9) $\|y_n - \hat{x}\| = \|x_n + \beta_n(x_n - x_{n-1}) - \hat{x}\| \le \|x_n - \hat{x}\| + \beta_n\|x_n - x_{n-1}\|.$

Since $\operatorname{prox}_{\lambda f_i}$ and $\operatorname{prox}_{\lambda g_j}$ are firmly nonexpansive, $I - \operatorname{prox}_{\lambda f_i}$ and $I - \operatorname{prox}_{\lambda g_j}$ are also firmly nonexpansive, and since $\hat{x}$ verifies (7) (the minimizers of a proper, lower semicontinuous convex function are exactly the fixed points of its proximal mapping), we have for all $x \in H_1$

(10) $\langle \nabla l(x), x - \hat{x} \rangle = \langle \nabla l_{i_x}(x), x - \hat{x} \rangle = \langle (I - \operatorname{prox}_{\lambda f_{i_x}})x, x - \hat{x} \rangle \ge \|(I - \operatorname{prox}_{\lambda f_{i_x}})x\|^2 = 2 l_{i_x}(x) = 2 l(x)$

and

(11) $\langle \nabla h_j(x), x - \hat{x} \rangle = \langle A^*(I - \operatorname{prox}_{\lambda g_j})Ax, x - \hat{x} \rangle = \langle (I - \operatorname{prox}_{\lambda g_j})Ax, Ax - A\hat{x} \rangle \ge \|(I - \operatorname{prox}_{\lambda g_j})Ax\|^2 = 2 h_j(x), \quad \text{for all } j \in \Psi.$

Using the definition of $z_n$ and Lemma 2.2(i), we have

(12) $\|z_n - \hat{x}\|^2 = \left\|y_n - \frac{1}{2}\sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} (\nabla h_j(y_n) + \nabla l(y_n)) - \hat{x}\right\|^2 = \|y_n - \hat{x}\|^2 + \left\|\frac{1}{2}\sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} (\nabla h_j(y_n) + \nabla l(y_n))\right\|^2 - \left\langle \sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} (\nabla h_j(y_n) + \nabla l(y_n)), \ y_n - \hat{x} \right\rangle.$

Noting that $\|\nabla h_j(y_n)\| \le \Theta_j(y_n)$ and $\|\nabla l(y_n)\| \le \Theta_j(y_n)$ and using the convexity of $\|\cdot\|^2$, we have

(13) $\left\|\frac{1}{2}\sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} (\nabla h_j(y_n) + \nabla l(y_n))\right\|^2 \le \frac{1}{2}\left\|\sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} \nabla l(y_n)\right\|^2 + \frac{1}{2}\left\|\sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} \nabla h_j(y_n)\right\|^2 \le \frac{1}{2}\sum_{j \in \Psi} \xi_n^{(j)} (\mu_n^{(j)})^2 \|\nabla l(y_n)\|^2 + \frac{1}{2}\sum_{j \in \Psi} \xi_n^{(j)} (\mu_n^{(j)})^2 \|\nabla h_j(y_n)\|^2 = \frac{1}{2}\sum_{j \in \Psi} \xi_n^{(j)} \left(\rho_n \frac{h_j(y_n) + l(y_n)}{\Theta_j^2(y_n)}\right)^2 \left(\|\nabla l(y_n)\|^2 + \|\nabla h_j(y_n)\|^2\right) \le \sum_{j \in \Psi} \xi_n^{(j)} \left(\rho_n \frac{h_j(y_n) + l(y_n)}{\Theta_j^2(y_n)}\right)^2 \Theta_j^2(y_n) = \rho_n^2 \sum_{j \in \Psi} \xi_n^{(j)} \frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)}.$

From (10) and (11), we have

(14) $\left\langle \sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} (\nabla h_j(y_n) + \nabla l(y_n)), \ y_n - \hat{x} \right\rangle = \sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} \left( \langle \nabla l(y_n), y_n - \hat{x} \rangle + \langle \nabla h_j(y_n), y_n - \hat{x} \rangle \right) \ge \sum_{j \in \Psi} \xi_n^{(j)} \mu_n^{(j)} \left(2 l(y_n) + 2 h_j(y_n)\right) = \sum_{j \in \Psi} \xi_n^{(j)} \rho_n \frac{h_j(y_n) + l(y_n)}{\Theta_j^2(y_n)} \left(2 l(y_n) + 2 h_j(y_n)\right) = 2\rho_n \sum_{j \in \Psi} \xi_n^{(j)} \frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)}.$

In view of (12), (13) and (14), we have

(15) $\|z_n - \hat{x}\|^2 \le \|y_n - \hat{x}\|^2 + \rho_n^2 \sum_{j \in \Psi} \xi_n^{(j)} \frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} - 2\rho_n \sum_{j \in \Psi} \xi_n^{(j)} \frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} = \|y_n - \hat{x}\|^2 + \rho_n(\rho_n - 2) \sum_{j \in \Psi} \xi_n^{(j)} \frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)}.$

Next, we show that the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ are bounded. From (15) and (C4) of Assumption 1, we have

(16) $\|z_n - \hat{x}\| \le \|y_n - \hat{x}\|.$

Using (9), (16) and the definition of $x_{n+1}$, we get

(17) $\|x_{n+1} - \hat{x}\| = \|\alpha_n V(y_n) + (1 - \alpha_n)z_n - \hat{x}\| = \|(1 - \alpha_n)(z_n - \hat{x}) + \alpha_n(V(y_n) - V(\hat{x})) + \alpha_n(V(\hat{x}) - \hat{x})\| \le (1 - \alpha_n)\|z_n - \hat{x}\| + \alpha_n\|V(y_n) - V(\hat{x})\| + \alpha_n\|V(\hat{x}) - \hat{x}\| \le (1 - \alpha_n)\|z_n - \hat{x}\| + \alpha_n\gamma\|y_n - \hat{x}\| + \alpha_n\|V(\hat{x}) - \hat{x}\| \le (1 - \alpha_n(1 - \gamma))\|y_n - \hat{x}\| + \alpha_n\|V(\hat{x}) - \hat{x}\| \le (1 - \alpha_n(1 - \gamma))\|x_n - \hat{x}\| + (1 - \alpha_n(1 - \gamma))\beta_n\|x_n - x_{n-1}\| + \alpha_n\|V(\hat{x}) - \hat{x}\| = (1 - \alpha_n(1 - \gamma))\|x_n - \hat{x}\| + \alpha_n(1 - \gamma)\left[\frac{1 - \alpha_n(1 - \gamma)}{1 - \gamma}\frac{\beta_n}{\alpha_n}\|x_n - x_{n-1}\| + \frac{\|V(\hat{x}) - \hat{x}\|}{1 - \gamma}\right].$

Observe that by (C1) of Assumption 1 and Remark 3, we see that

$\lim_{n\to\infty} \frac{1 - \alpha_n(1 - \gamma)}{1 - \gamma}\frac{\beta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0.$

Let

$M = 2\max\left\{\frac{\|V(\hat{x}) - \hat{x}\|}{1 - \gamma}, \ \sup_{n \ge 1} \frac{1 - \alpha_n(1 - \gamma)}{1 - \gamma}\frac{\beta_n}{\alpha_n}\|x_n - x_{n-1}\|\right\}.$

Then (17) becomes

$\|x_{n+1} - \hat{x}\| \le (1 - \alpha_n(1 - \gamma))\|x_n - \hat{x}\| + \alpha_n(1 - \gamma)M.$

Thus, by Lemma 2.3 the sequence {xn} is bounded. As a consequence, {yn}, {V(yn)} and {zn} are also bounded.□

We now have the following strong convergence theorem for the approximation of a solution of Problem (7).

Theorem 3.2

The sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to $\bar{x} \in \Gamma$, where $\bar{x} = P_\Gamma V(\bar{x})$.

Proof of Theorem 3.2

Claim 1: There exists a unique $\bar{x} \in H_1$ such that $\bar{x} = P_\Gamma V(\bar{x})$.

As a result of

$\|P_\Gamma V(x) - P_\Gamma V(y)\| \le \|V(x) - V(y)\| \le \gamma\|x - y\|, \quad \text{for all } x, y \in H_1,$

the mapping $P_\Gamma V$ is a contraction mapping of $H_1$ into itself. Hence, by the Banach contraction principle there exists a unique element $\bar{x} \in H_1$ such that $\bar{x} = P_\Gamma V(\bar{x})$. Clearly, $\bar{x} \in \Gamma$ and we have

$\bar{x} = P_\Gamma V(\bar{x}) \iff \langle \bar{x} - V(\bar{x}), y - \bar{x} \rangle \ge 0, \quad \text{for all } y \in \Gamma.$

Claim 2: The sequence $\{x_n\}$ converges strongly to $\bar{x} \in \Gamma$, where $\bar{x} = P_\Gamma V(\bar{x})$.

Let $\bar{x} \in \Gamma$, where $\bar{x} = P_\Gamma V(\bar{x})$. Now

(18) $\|y_n - \bar{x}\|^2 = \|x_n + \beta_n(x_n - x_{n-1}) - \bar{x}\|^2 = \|x_n - \bar{x}\|^2 + \beta_n^2\|x_n - x_{n-1}\|^2 + 2\beta_n\langle x_n - \bar{x}, x_n - x_{n-1}\rangle.$

From Lemma 2.2(iii), we have

(19) $\langle x_n - \bar{x}, x_n - x_{n-1}\rangle = \frac{1}{2}\|x_n - \bar{x}\|^2 - \frac{1}{2}\|x_{n-1} - \bar{x}\|^2 + \frac{1}{2}\|x_n - x_{n-1}\|^2.$

From (18) and (19), and since $0 \le \beta_n < 1$, we get

(20) $\|y_n - \bar{x}\|^2 = \|x_n - \bar{x}\|^2 + \beta_n^2\|x_n - x_{n-1}\|^2 + \beta_n\left(\|x_n - \bar{x}\|^2 - \|x_{n-1} - \bar{x}\|^2 + \|x_n - x_{n-1}\|^2\right) \le \|x_n - \bar{x}\|^2 + 2\beta_n\|x_n - x_{n-1}\|^2 + \beta_n\left(\|x_n - \bar{x}\|^2 - \|x_{n-1} - \bar{x}\|^2\right).$

Using the definition of $x_{n+1}$ and Lemma 2.2(ii), we have

(21) $\|x_{n+1} - \bar{x}\|^2 = \|\alpha_n V(y_n) + (1 - \alpha_n)z_n - \bar{x}\|^2 = \|\alpha_n(V(y_n) - \bar{x}) + (1 - \alpha_n)(z_n - \bar{x})\|^2 \le (1 - \alpha_n)\|z_n - \bar{x}\|^2 + 2\alpha_n\langle V(y_n) - \bar{x}, x_{n+1} - \bar{x}\rangle.$

Lemma 3.1 together with (20) and (21) gives

(22) $\|x_{n+1} - \bar{x}\|^2 \le (1 - \alpha_n)\|z_n - \bar{x}\|^2 + 2\alpha_n\langle V(y_n) - \bar{x}, x_{n+1} - \bar{x}\rangle \le (1 - \alpha_n)\|y_n - \bar{x}\|^2 + 2\alpha_n\langle V(y_n) - \bar{x}, x_{n+1} - \bar{x}\rangle + (1 - \alpha_n)\rho_n(\rho_n - 2)\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} \le (1 - \alpha_n)\|x_n - \bar{x}\|^2 + 2(1 - \alpha_n)\beta_n\|x_n - x_{n-1}\|^2 + (1 - \alpha_n)\beta_n\left(\|x_n - \bar{x}\|^2 - \|x_{n-1} - \bar{x}\|^2\right) + 2\alpha_n\langle V(y_n) - \bar{x}, x_{n+1} - \bar{x}\rangle + (1 - \alpha_n)\rho_n(\rho_n - 2)\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)}.$

Since the sequences $\{x_n\}$ and $\{V(y_n)\}$ are bounded, there exists $M_1$ such that $2\langle V(y_n) - \bar{x}, x_{n+1} - \bar{x}\rangle \le M_1$ for all $n \ge 1$. Thus, from (22), we obtain

(23) $\|x_{n+1} - \bar{x}\|^2 \le (1 - \alpha_n)\|x_n - \bar{x}\|^2 + 2(1 - \alpha_n)\beta_n\|x_n - x_{n-1}\|^2 + (1 - \alpha_n)\beta_n\left(\|x_n - \bar{x}\|^2 - \|x_{n-1} - \bar{x}\|^2\right) + \alpha_n M_1 + (1 - \alpha_n)\rho_n(\rho_n - 2)\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)}.$

Let us distinguish the following two cases related to the behavior of the sequence $\{\Gamma_n\}$, where $\Gamma_n = \|x_n - \bar{x}\|^2$.

Case 1. Suppose the sequence $\{\Gamma_n\}$ decreases at infinity. Thus, there exists $n_0$ such that $\Gamma_{n+1} \le \Gamma_n$ for $n \ge n_0$. Then $\{\Gamma_n\}$ converges and $\Gamma_n - \Gamma_{n+1} \to 0$ as $n \to \infty$.

From (23) we have

(24) $(1 - \alpha_n)\rho_n(2 - \rho_n)\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} \le (\Gamma_n - \Gamma_{n+1}) + \alpha_n M_1 + (1 - \alpha_n)\beta_n(\Gamma_n - \Gamma_{n-1}) + 2(1 - \alpha_n)\beta_n\|x_n - x_{n-1}\|^2.$

Since $\Gamma_n - \Gamma_{n+1} \to 0$ (and likewise $\Gamma_{n-1} - \Gamma_n \to 0$), and using (C1) of Assumption 1 and Remark 3 (noting $\alpha_n \to 0$, $0 < \alpha_n < 1$, $\beta_n\|x_n - x_{n-1}\| \to 0$ and $\{x_n\}$ is bounded), we have from (24)

(25) $(1 - \alpha_n)\rho_n(2 - \rho_n)\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} \to 0, \quad n \to \infty.$

Conditions (C1) and (C4) of Assumption 1 (i.e., $0 < \alpha_n < 1$, $\alpha_n \to 0$ and $\liminf_{n\to\infty}\rho_n(2 - \rho_n) > 0$) together with (25) yield

(26) $\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} \to 0, \quad n \to \infty.$

In view of (26) and the restriction imposed on $\xi_n^{(j)}$ in (C3) of Assumption 1, we have

(27) $\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} \to 0, \quad n \to \infty,$

for all $j \in \Psi$.

Now, using the definition of $z_n$ and the convexity of $\|\cdot\|^2$, we have

(28) $\|y_n - z_n\|^2 = \left\|\frac{1}{2}\sum_{j \in \Psi}\xi_n^{(j)}\mu_n^{(j)}(\nabla h_j(y_n) + \nabla l(y_n))\right\|^2 \le \frac{1}{2}\left\|\sum_{j \in \Psi}\xi_n^{(j)}\mu_n^{(j)}\nabla h_j(y_n)\right\|^2 + \frac{1}{2}\left\|\sum_{j \in \Psi}\xi_n^{(j)}\mu_n^{(j)}\nabla l(y_n)\right\|^2 \le \frac{1}{2}\sum_{j \in \Psi}\xi_n^{(j)}(\mu_n^{(j)})^2\|\nabla h_j(y_n)\|^2 + \frac{1}{2}\sum_{j \in \Psi}\xi_n^{(j)}(\mu_n^{(j)})^2\|\nabla l(y_n)\|^2 = \frac{1}{2}\sum_{j \in \Psi}\xi_n^{(j)}(\mu_n^{(j)})^2\left(\|\nabla h_j(y_n)\|^2 + \|\nabla l(y_n)\|^2\right) \le \sum_{j \in \Psi}\xi_n^{(j)}(\mu_n^{(j)})^2\Theta_j^2(y_n) = \sum_{j \in \Psi}\xi_n^{(j)}\left(\rho_n\frac{h_j(y_n) + l(y_n)}{\Theta_j^2(y_n)}\right)^2\Theta_j^2(y_n) = \rho_n^2\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)} \le 4\sum_{j \in \Psi}\xi_n^{(j)}\frac{(h_j(y_n) + l(y_n))^2}{\Theta_j^2(y_n)}.$

Thus, (28) together with (26) gives

(29) $\|y_n - z_n\| \to 0, \quad n \to \infty.$

Moreover, using the definition of $y_n$ and Remark 3, we have

(30) $\|x_n - y_n\| = \|x_n - x_n - \beta_n(x_n - x_{n-1})\| = \beta_n\|x_n - x_{n-1}\| \to 0, \quad n \to \infty.$

By (29) and (30), we get

(31) $\|x_n - z_n\| \le \|x_n - y_n\| + \|y_n - z_n\| \to 0, \quad n \to \infty.$

Using the definition of $x_{n+1}$, (C1) of Assumption 1 and noting that $\{V(y_n)\}$ and $\{z_n\}$ are bounded, we have

(32) $\|x_{n+1} - z_n\| = \alpha_n\|V(y_n) - z_n\| \to 0, \quad n \to \infty.$

The results from (31) and (32) give

(33) $\|x_{n+1} - x_n\| \le \|x_{n+1} - z_n\| + \|z_n - x_n\| \to 0, \quad n \to \infty.$

For each $i \in \Phi$ and each $j \in \Psi$, $\nabla h_j(\cdot)$ and $\nabla l_i(\cdot)$ are Lipschitz continuous with constants $\|A\|^2$ and $1$, respectively. Since the sequence $\{y_n\}$ is bounded and

$\|\nabla h_j(y_n)\| = \|\nabla h_j(y_n) - \nabla h_j(\bar{x})\| \le \|A\|^2\|y_n - \bar{x}\|, \quad \text{for all } j \in \Psi,$
$\|\nabla l_i(y_n)\| = \|\nabla l_i(y_n) - \nabla l_i(\bar{x})\| \le \|y_n - \bar{x}\|, \quad \text{for all } i \in \Phi,$

the sequences $\{\nabla l_i(y_n)\}_{n=1}^{\infty}$ and $\{\nabla h_j(y_n)\}_{n=1}^{\infty}$ are bounded for each $i \in \Phi$ and $j \in \Psi$. Hence, the boundedness of $\{\nabla l_i(y_n)\}_{n=1}^{\infty}$ for all $i \in \Phi$ gives that $\{\nabla l(y_n)\}_{n=1}^{\infty}$ is bounded. Thus, $\{\theta_j^2(y_n)\}_{n=1}^{\infty}$ is bounded for each $j \in \Psi$, and therefore the sequence $\{\Theta_j^2(y_n)\}_{n=1}^{\infty}$ is bounded for each $j \in \Psi$. Consequently, using (27), we have

$\lim_{n\to\infty}(h_j(y_n) + l(y_n)) = 0 \ \Rightarrow\ \lim_{n\to\infty}h_j(y_n) = \lim_{n\to\infty}l(y_n) = 0, \quad \text{for all } j \in \Psi.$

By the definition of $l(y_n)$, we have $l_i(y_n) \le l(y_n)$ for all $i \in \Phi$. Therefore,

$\lim_{n\to\infty}h_j(y_n) = \lim_{n\to\infty}l_i(y_n) = 0, \quad \text{for all } i \in \Phi \text{ and } j \in \Psi.$

Let $p$ be a weak cluster point of $\{x_n\}$; then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup p$ as $k \to \infty$. Then, in view of $x_{n_k} \rightharpoonup p$ and (30), we also see that $y_{n_k} \rightharpoonup p$. The weak lower semicontinuity of $h_j(\cdot)$ implies that

$0 \le h_j(p) \le \liminf_{k\to\infty} h_j(y_{n_k}) = \lim_{n\to\infty} h_j(y_n) = 0, \quad \text{for all } j \in \Psi.$

That is, $h_j(p) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda g_j})Ap\|^2 = 0$ for all $j \in \Psi$, i.e., $Ap$ is a fixed point of the proximal mapping of each $g_j$, or equivalently, $0 \in \partial g_j(Ap)$ for all $j \in \Psi$. In other words, $Ap$ is a minimizer of each $g_j$ for all $j \in \Psi$.

Likewise, the weak lower semicontinuity of $l_i(\cdot)$ implies that

$0 \le l_i(p) \le \liminf_{k\to\infty} l_i(y_{n_k}) = \lim_{n\to\infty} l_i(y_n) = 0, \quad \text{for all } i \in \Phi.$

That is, $l_i(p) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda f_i})p\|^2 = 0$ for all $i \in \Phi$, i.e., $p$ is a fixed point of the proximal mapping of each $f_i$, or equivalently, $0 \in \partial f_i(p)$ for all $i \in \Phi$. In other words, $p$ is a minimizer of each $f_i$ for all $i \in \Phi$. Thus, $p \in \Gamma$.

Next, we show that $\limsup_{n\to\infty}\langle (I - V)\bar{x}, \bar{x} - x_n\rangle \le 0$. Indeed, since $\bar{x} = P_\Gamma V(\bar{x})$ and $p \in \Gamma$, we obtain

(34) $\limsup_{n\to\infty}\langle (I - V)\bar{x}, \bar{x} - x_n\rangle = \lim_{k\to\infty}\langle (I - V)\bar{x}, \bar{x} - x_{n_k}\rangle = \langle (I - V)\bar{x}, \bar{x} - p\rangle \le 0.$

Since $x_{n+1} - x_n \to 0$ from (33), by (34) we have

$\limsup_{n\to\infty}\langle (I - V)\bar{x}, \bar{x} - x_{n+1}\rangle \le 0.$

Using Lemma 3.1 (in particular the fact that $\|z_n - \bar{x}\| \le \|y_n - \bar{x}\|$, i.e., (16)), we have

(35) $\|x_{n+1} - \bar{x}\|^2 = \langle \alpha_n V(y_n) + (1 - \alpha_n)z_n - \bar{x}, x_{n+1} - \bar{x}\rangle = \alpha_n\langle V(y_n) - V(\bar{x}), x_{n+1} - \bar{x}\rangle + (1 - \alpha_n)\langle z_n - \bar{x}, x_{n+1} - \bar{x}\rangle + \alpha_n\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle \le \gamma\alpha_n\|y_n - \bar{x}\|\|x_{n+1} - \bar{x}\| + (1 - \alpha_n)\|z_n - \bar{x}\|\|x_{n+1} - \bar{x}\| + \alpha_n\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle \le (1 - \alpha_n(1 - \gamma))\|y_n - \bar{x}\|\|x_{n+1} - \bar{x}\| + \alpha_n\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle \le (1 - \alpha_n(1 - \gamma))\left(\frac{\|y_n - \bar{x}\|^2}{2} + \frac{\|x_{n+1} - \bar{x}\|^2}{2}\right) + \alpha_n\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle.$

Therefore, from (35), we have

(36) $\|x_{n+1} - \bar{x}\|^2 \le \frac{1 - \alpha_n(1 - \gamma)}{1 + \alpha_n(1 - \gamma)}\|y_n - \bar{x}\|^2 + \frac{2\alpha_n}{1 + \alpha_n(1 - \gamma)}\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle = \left(1 - \frac{2\alpha_n(1 - \gamma)}{1 + \alpha_n(1 - \gamma)}\right)\|y_n - \bar{x}\|^2 + \frac{2\alpha_n}{1 + \alpha_n(1 - \gamma)}\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle.$

Combining (36) and

$\|y_n - \bar{x}\| = \|x_n + \beta_n(x_n - x_{n-1}) - \bar{x}\| \le \|x_n - \bar{x}\| + \beta_n\|x_n - x_{n-1}\|,$

it follows that

(37) $\|x_{n+1} - \bar{x}\|^2 \le \left(1 - \frac{2\alpha_n(1 - \gamma)}{1 + \alpha_n(1 - \gamma)}\right)\left(\|x_n - \bar{x}\| + \beta_n\|x_n - x_{n-1}\|\right)^2 + \frac{2\alpha_n}{1 + \alpha_n(1 - \gamma)}\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle = \left(1 - \frac{2\alpha_n(1 - \gamma)}{1 + \alpha_n(1 - \gamma)}\right)\left(\|x_n - \bar{x}\|^2 + \beta_n^2\|x_n - x_{n-1}\|^2 + 2\beta_n\|x_n - \bar{x}\|\|x_n - x_{n-1}\|\right) + \frac{2\alpha_n}{1 + \alpha_n(1 - \gamma)}\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle \le \left(1 - \frac{2\alpha_n(1 - \gamma)}{1 + \alpha_n(1 - \gamma)}\right)\|x_n - \bar{x}\|^2 + \beta_n^2\|x_n - x_{n-1}\|^2 + 2\beta_n\|x_n - \bar{x}\|\|x_n - x_{n-1}\| + \frac{2\alpha_n}{1 + \alpha_n(1 - \gamma)}\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle.$

Since $\{x_n\}$ is bounded, there exists $M_2 > 0$ such that $\|x_n - \bar{x}\| \le M_2$ for all $n \ge 1$. Thus, in view of (37), we have

(38) $\Gamma_{n+1} \le \left(1 - \frac{2\alpha_n(1 - \gamma)}{1 + \alpha_n(1 - \gamma)}\right)\Gamma_n + \beta_n\|x_n - x_{n-1}\|\left(\beta_n\|x_n - x_{n-1}\| + 2M_2\right) + \frac{2\alpha_n}{1 + \alpha_n(1 - \gamma)}\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle = (1 - \delta_n)\Gamma_n + \delta_n\vartheta_n,$

where $\delta_n = \frac{2\alpha_n(1 - \gamma)}{1 + \alpha_n(1 - \gamma)}$ and

$\vartheta_n = \frac{1 + \alpha_n(1 - \gamma)}{2(1 - \gamma)}\frac{\beta_n}{\alpha_n}\|x_n - x_{n-1}\|\left(\beta_n\|x_n - x_{n-1}\| + 2M_2\right) + \frac{1}{1 - \gamma}\langle V(\bar{x}) - \bar{x}, x_{n+1} - \bar{x}\rangle.$

From (34), (C1) of Assumption 1 and Remark 3, we have $\sum_{n=1}^{\infty}\delta_n = \infty$ and $\limsup_{n\to\infty}\vartheta_n \le 0$. Thus, using Lemma 2.3 and (38), we get $\Gamma_n \to 0$ as $n \to \infty$. Hence, $x_n \to \bar{x}$ as $n \to \infty$.

Case 2. Assume that $\{\Gamma_n\}$ does not decrease at infinity. Let $\varphi : \mathbb{N} \to \mathbb{N}$ be the mapping defined for all $n \ge n_0$ (for some $n_0$ large enough) by

$\varphi(n) = \max\{k \in \mathbb{N} : k \le n, \ \Gamma_k \le \Gamma_{k+1}\}.$

By Lemma 2.4, $\{\varphi(n)\}_{n \ge n_0}$ is a nondecreasing sequence, $\varphi(n) \to \infty$ as $n \to \infty$ and

(39) $\Gamma_{\varphi(n)} \le \Gamma_{\varphi(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\varphi(n)+1}, \quad \text{for all } n \ge n_0.$

In view of $\|x_{\varphi(n)} - \bar{x}\|^2 - \|x_{\varphi(n)+1} - \bar{x}\|^2 = \Gamma_{\varphi(n)} - \Gamma_{\varphi(n)+1} \le 0$ for all $n \ge n_0$ and (23), we have for all $n \ge n_0$

(40) $(1 - \alpha_{\varphi(n)})\rho_{\varphi(n)}(2 - \rho_{\varphi(n)})\sum_{j \in \Psi}\xi_{\varphi(n)}^{(j)}\frac{(h_j(y_{\varphi(n)}) + l(y_{\varphi(n)}))^2}{\Theta_j^2(y_{\varphi(n)})} \le (\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)+1}) + \alpha_{\varphi(n)}M_1 + (1 - \alpha_{\varphi(n)})\beta_{\varphi(n)}(\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)-1}) + 2(1 - \alpha_{\varphi(n)})\beta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|^2 \le \alpha_{\varphi(n)}M_1 + (1 - \alpha_{\varphi(n)})\beta_{\varphi(n)}(\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)-1}) + 2(1 - \alpha_{\varphi(n)})\beta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|^2 \le \alpha_{\varphi(n)}M_1 + (1 - \alpha_{\varphi(n)})\beta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|\left(\|x_{\varphi(n)} - \bar{x}\| + \|x_{\varphi(n)-1} - \bar{x}\|\right) + 2(1 - \alpha_{\varphi(n)})\beta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|^2.$

Thus, from (40) together with (C1) and (C2) of Assumption 1 and Remark 3, we have for each $j \in \Psi$

(41) $\frac{(h_j(y_{\varphi(n)}) + l(y_{\varphi(n)}))^2}{\Theta_j^2(y_{\varphi(n)})} \to 0, \quad n \to \infty.$

Using a procedure similar to that in Case 1, we have

$\lim_{n\to\infty}\|x_{\varphi(n)} - y_{\varphi(n)}\| = \lim_{n\to\infty}\|x_{\varphi(n)+1} - x_{\varphi(n)}\| = 0.$

Since $\{x_{\varphi(n)}\}$ is bounded, there exists a subsequence of $\{x_{\varphi(n)}\}$, still denoted by $\{x_{\varphi(n)}\}$, which converges weakly to $p$. By an argument similar to that in Case 1, we conclude immediately that $p \in \Gamma$. In addition, by a similar argument to that in Case 1, we have $\limsup_{n\to\infty}\langle (I - V)\bar{x}, \bar{x} - x_{\varphi(n)}\rangle \le 0$. Since $\lim_{n\to\infty}\|x_{\varphi(n)+1} - x_{\varphi(n)}\| = 0$, we get $\limsup_{n\to\infty}\langle (I - V)\bar{x}, \bar{x} - x_{\varphi(n)+1}\rangle \le 0$. From (38), we have

(42) $\Gamma_{\varphi(n)+1} \le (1 - \delta_{\varphi(n)})\Gamma_{\varphi(n)} + \delta_{\varphi(n)}\vartheta_{\varphi(n)},$

where $\delta_{\varphi(n)} = \frac{2\alpha_{\varphi(n)}(1 - \gamma)}{1 + \alpha_{\varphi(n)}(1 - \gamma)}$ and

$\vartheta_{\varphi(n)} = \frac{1 + \alpha_{\varphi(n)}(1 - \gamma)}{2(1 - \gamma)}\frac{\beta_{\varphi(n)}}{\alpha_{\varphi(n)}}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|\left(\beta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\| + 2M_2\right) + \frac{1}{1 - \gamma}\langle V(\bar{x}) - \bar{x}, x_{\varphi(n)+1} - \bar{x}\rangle.$

Using $\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)+1} \le 0$ for all $n \ge n_0$, the last inequality gives

$0 \le -\delta_{\varphi(n)}\Gamma_{\varphi(n)} + \delta_{\varphi(n)}\vartheta_{\varphi(n)}.$

Since $\delta_{\varphi(n)} > 0$, we obtain $\|x_{\varphi(n)} - \bar{x}\|^2 = \Gamma_{\varphi(n)} \le \vartheta_{\varphi(n)}$. Moreover, since $\limsup_{n\to\infty}\vartheta_{\varphi(n)} \le 0$, we have $\lim_{n\to\infty}\|x_{\varphi(n)} - \bar{x}\| = 0$. Thus, $\lim_{n\to\infty}\|x_{\varphi(n)} - \bar{x}\| = 0$ together with $\lim_{n\to\infty}\|x_{\varphi(n)+1} - x_{\varphi(n)}\| = 0$ gives $\lim_{n\to\infty}\Gamma_{\varphi(n)+1} = 0$. Therefore, from (39), we obtain $\lim_{n\to\infty}\Gamma_n = 0$, that is, $x_n \to \bar{x}$ as $n \to \infty$.

This completes the proof.□

Remark

  1. Our iterative scheme has relatively low computational complexity compared to the iterative schemes in [9] and [10].

  2. The implementation of our algorithm does not need any prior information about the operator norm.

  3. We can also use $\theta_j(x) = \sqrt{\|\nabla h_j(x)\|^2 + \|\nabla l(x)\|^2}$ instead of $\theta_j(x) = \max\{\|\nabla h_j(x)\|, \|\nabla l(x)\|\}$; the proof of the strong convergence theorem is almost the same. It is clear that $\max\{\|\nabla h_j(x)\|, \|\nabla l(x)\|\} \le \sqrt{\|\nabla h_j(x)\|^2 + \|\nabla l(x)\|^2}$ for $j \in \Psi$.

Remark

One of the main advantages of our algorithm is that it can be used to solve problems that can be converted to fixed point problems of firmly nonexpansive mappings. The following are some examples.

  1. The split system of inclusion problems: Let $T_i : H_1 \to 2^{H_1}$ and $U_j : H_2 \to 2^{H_2}$ be maximal monotone mappings for all $i \in \Phi$ and $j \in \Psi$. The split system of inclusion problems is to find $\bar{x} \in H_1$ such that

    (43) $0 \in T_i(\bar{x}) \ \text{for all } i \in \Phi, \quad 0 \in U_j(A\bar{x}) \ \text{for all } j \in \Psi.$

    Replacing the proximal mappings of the convex functions $f_i$ and $g_j$ in Algorithm 1 by the resolvent operators $J_\lambda^{T_i}$ and $J_\lambda^{U_j}$ of the maximal monotone operators, and following the method of proof of Theorem 3.2, we obtain an inertial extrapolation-type algorithm with a strong convergence result for the approximation of a solution of a consistent split system of inclusion problems (43); see the resolvent operator defined in (8) and the sketch after this list.

  2. The MSSFP: By taking $f_i = \delta_{C_i}$ and $g_j = \delta_{Q_j}$ (the indicator functions) for $i \in \Phi$, $j \in \Psi$, and replacing $\operatorname{prox}_{\lambda f_i}$ by the projection mapping $P_{C_i}$ and $\operatorname{prox}_{\lambda g_j}$ by the projection mapping $P_{Q_j}$ in Algorithm 1, we obtain an inertial extrapolation-type algorithm with strong convergence for the approximation of a solution of the MSSFP (1).

  3. The split system of equilibrium problems: Let $f_i : H_1 \times H_1 \to \mathbb{R}$ and $g_j : H_2 \times H_2 \to \mathbb{R}$ be bifunctions, where $i \in \Phi$, $j \in \Psi$. Assume each bifunction $f_i$ and $g_j$ satisfies Condition CO on $H_1$ and $H_2$, respectively. The split system of equilibrium problems involves finding $\bar{x} \in H_1$ such that

(44) $f_i(\bar{x}, x) \ge 0 \ \text{for all } x \in H_1, \ i \in \Phi, \quad g_j(A\bar{x}, u) \ge 0 \ \text{for all } u \in H_2, \ j \in \Psi.$

Our result solves (44) by replacing the proximal mappings by the resolvent operators $T_\lambda^{f_i}$ and $T_\lambda^{g_j}$ in Algorithm 1 and then following the method of proof of Theorem 3.2; see the resolvent operator defined in Lemma 2.6.
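To illustrate the first substitution, here is a small hedged Python sketch (ours) of the resolvent (8) for a maximal monotone operator given by a symmetric positive semidefinite matrix, $T(x) = Bx$; the operator choice is purely illustrative.

```python
import numpy as np

def resolvent(B, lam, x):
    """J_lam^T(x) = (I + lam*T)^{-1}(x) for the linear monotone operator T(x) = B @ x."""
    p = B.shape[0]
    return np.linalg.solve(np.eye(p) + lam * B, x)

B = np.array([[2.0, 0.0], [0.0, 1.0]])
x = np.array([3.0, 3.0])
print(resolvent(B, 1.0, x))             # [1.0, 1.5]
print(resolvent(B, 1.0, np.zeros(2)))   # the zero of T (the origin) is a fixed point
```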

It is worth mentioning that our approach also works for the approximation of a solution of the SMP (4). Let $\Omega$ denote the solution set of (4), i.e., $\Omega = \{\bar{x} \in \operatorname{argmin} f : A\bar{x} \in \operatorname{argmin} g\}$, and assume that $\Omega$ is nonempty. Set $l(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda f})x\|^2$, $\nabla l(x) = (I - \operatorname{prox}_{\lambda f})x$, $h(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda g})Ax\|^2$, $\nabla h(x) = A^*(I - \operatorname{prox}_{\lambda g})Ax$ and $\theta(x) = \max\{\|\nabla h(x)\|, \|\nabla l(x)\|\}$. The following is an immediate consequence of our result.

Algorithm 2

Initialization: Let $V : H_1 \to H_1$ be a contraction with constant $\gamma$. Choose $x_0, x_1 \in H_1$ and take arbitrary real numbers $\beta$ and $\hat{\Theta}$ such that $0 \le \beta < 1$ and $\hat{\Theta} > 0$. Let $\{\alpha_n\}$, $\{\varepsilon_n\}$, $\{\rho_n\}$ be real sequences satisfying the following conditions:

  1. $0 < \alpha_n < 1$, $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;

  2. $\varepsilon_n > 0$ and $\varepsilon_n = o(\alpha_n)$;

  3. $0 < \rho_n < 2$ and $\liminf_{n\to\infty}\rho_n(2 - \rho_n) > 0$.

Step 1. Given the iterates $x_{n-1}$ and $x_n$ $(n \ge 1)$, choose $\beta_n$ such that $0 \le \beta_n \le \bar{\beta}_n$, where

$\bar{\beta}_n = \begin{cases}\min\left\{\beta, \frac{\varepsilon_n}{\|x_{n-1} - x_n\|}\right\}, & \text{if } x_{n-1} \neq x_n; \\ \beta, & \text{otherwise.}\end{cases}$

Step 2. Evaluate $y_n = x_n + \beta_n(x_n - x_{n-1})$.

Step 3. Evaluate $\mu_n = \rho_n\frac{h(y_n) + l(y_n)}{\Theta^2(y_n)}$, where $\Theta(y_n) = \begin{cases}\hat{\Theta}, & \text{if } \theta(y_n) = 0; \\ \theta(y_n), & \text{otherwise.}\end{cases}$

Step 4. Evaluate $z_n = y_n - \frac{1}{2}\mu_n\left(\nabla h(y_n) + \nabla l(y_n)\right)$.

Step 5. Evaluate $x_{n+1} = \alpha_n V(y_n) + (1 - \alpha_n)z_n$.

Step 6. Set $n \leftarrow n + 1$ and go to Step 1.

Corollary 1

The sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to $\bar{x} \in \Omega$, where $\bar{x} = P_\Omega V(\bar{x})$.

Proof

Setting $f_i = f$ for all $i \in \Phi$ and $g_j = g$ for all $j \in \Psi$ in Theorem 3.2, we obtain the desired result.□

Remark

The result in Corollary 1 improves on inertial extrapolation-type algorithms in the sense that, instead of the weak convergence result proposed by Shehu and Iyiola in [25], we obtain a strong convergence result.

4 Numerical results

In this section, we consider an example of the SSMP involving quadratic optimization problems. We study the behavior of our algorithm and compare it with the proximal-type algorithms of [9] and [10]. The algorithms have been coded in MATLAB and run on an HP laptop with an Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz 2.70GHz and 4.00GB RAM.

Example

Consider problem (7) for $H_1 = \mathbb{R}^p$, $H_2 = \mathbb{R}^q$, a linear transformation $A : \mathbb{R}^p \to \mathbb{R}^q$ and the functions

$f_i(x) = \frac{1}{2}x^T B_i x + x^T D_i \ (i \in \Phi = \{1, \ldots, N\}), \quad g_1(u) = \|u\|_q \quad \text{and} \quad g_2(u) = \sum_{k=1}^{q} h(u_k),$

where $A$ is a $q \times p$ non-zero matrix, $B_i$ is an invertible symmetric positive semidefinite $p \times p$ matrix and $D_i$ is the zero vector in $\mathbb{R}^p$ for all $i \in \Phi$, $u = (u_1, \ldots, u_q) \in \mathbb{R}^q$, $\|\cdot\|_q$ is the Euclidean norm in $\mathbb{R}^q$ and $h(u_k) = \max\{|u_k| - 1, 0\}$ for $k = 1, \ldots, q$.

In this example, it is clear that $\Gamma = \{0\}$. Now, for $\lambda = 1$, the proximal operators are given by

$\operatorname{prox}_{\lambda f_i}(x) = (I + B_i)^{-1}(x - D_i) = (I + B_i)^{-1}(x), \ i \in \Phi, \qquad \operatorname{prox}_{\lambda g_1}(u) = \begin{cases}\left(1 - \frac{1}{\|u\|_q}\right)u, & \|u\|_q \ge 1, \\ 0, & \text{otherwise,}\end{cases}$

and $\operatorname{prox}_{\lambda g_2}(u) = (\operatorname{prox}_{\lambda h}(u_1), \operatorname{prox}_{\lambda h}(u_2), \ldots, \operatorname{prox}_{\lambda h}(u_q))$, where

$\operatorname{prox}_{\lambda h}(u_k) = \begin{cases}u_k, & \text{if } |u_k| < 1, \\ \operatorname{sign}(u_k), & \text{if } 1 \le |u_k| \le 2, \\ \operatorname{sign}(u_k)(|u_k| - 1), & \text{if } |u_k| > 2.\end{cases}$
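For completeness, the three proximal operators of this example (with $\lambda = 1$) can be coded directly from the closed forms above; this Python sketch is ours and the variable names are illustrative.

```python
import numpy as np

def prox_f(x, B):
    """prox_{f_i}(x) = (I + B_i)^{-1} x, since D_i = 0."""
    return np.linalg.solve(np.eye(B.shape[0]) + B, x)

def prox_g1(u):
    """Block soft-thresholding: prox of the Euclidean norm g_1(u) = ||u||."""
    nu = np.linalg.norm(u)
    return (1.0 - 1.0 / nu) * u if nu >= 1.0 else np.zeros_like(u)

def prox_g2(u):
    """Componentwise prox of h(t) = max{|t| - 1, 0}."""
    out = u.astype(float).copy()
    mid = (np.abs(u) >= 1.0) & (np.abs(u) <= 2.0)
    out[mid] = np.sign(u[mid])                     # 1 <= |u_k| <= 2
    big = np.abs(u) > 2.0
    out[big] = np.sign(u[big]) * (np.abs(u[big]) - 1.0)  # |u_k| > 2
    return out
```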

In this numerical experiment, we use $N = 3$, $p = q$, $A$ the $p \times p$ identity matrix, and $B_1$, $B_2$ and $B_3$ randomly generated invertible symmetric positive semidefinite $p \times p$ matrices.

Experiment 1 (Studying the numerical behavior of Algorithm 1): Figures 1, 2 and 3 and Tables 1 and 2 describe the numerical results of our algorithm for this example, where $V : \mathbb{R}^p \to \mathbb{R}^p$ is given by $Vx = \gamma x$ with $\gamma = 0.5$, $\alpha_n = \frac{1}{2n^t}$, $\varepsilon_n = \frac{1}{n^r}$, $\xi_n^{(j)} = \frac{j}{3}$, $\beta = 0.9$ and $\beta_n = \bar{\beta}_n$, for $0 < t \le 1$, $r > t$ and $j \in \Psi = \{1, 2\}$.

Figure 1: For $p = q = 6$ and randomly generated starting points $x_0$ and $x_1$.

Figure 2: For $t = 0.1$, $r = 0.4$ and randomly generated starting points $x_0$ and $x_1$.

Figure 3: For $t = 0.2$, $r = 0.5$, $p = q = 500$ and $x_1 = \delta x_0$, where $\delta \in \mathbb{R}$ and $x_0$ is a randomly generated starting point.

Table 1: For randomly generated starting points $x_0$ and $x_1$

                        p=q=3              p=q=20             p=q=80
                  Iter(n)   CPU(s)   Iter(n)   CPU(s)   Iter(n)   CPU(s)
t=0.1,  r=0.2       17      0.0127     14      0.0169     16      0.0269
t=0.95, r=3         21      0.0136     17      0.0185     21      0.0304
t=1,    r=10        21      0.0129     16      0.0164     24      0.0299

Table 2: For $t = 0.55$, $r = 8$, $p = q = 4$

           x0=(1,2,3,4), x1=(4,7,5,10)       x0=(5,6,7,8), x1=(4,7,5,10)
Iter(n)    CPU(s)     ‖xn − xn−1‖            CPU(s)     ‖xn − xn−1‖
1                     12.2474                           23.7486
2                     5.6570                            9.0177
3                     3.5203                            4.5572
4                     1.9316                            2.3349
5                     1.1915                            1.8997
⋮                     ⋮                                 ⋮
24                    2.5606×10⁻⁴                       5.1148×10⁻⁴
25         0.0438     1.3608×10⁻⁴                       3.6416×10⁻⁴
26                                                      3.1116×10⁻⁴
27                                                      2.4828×10⁻⁴
28                                                      2.0318×10⁻⁴
29                                           0.0613     1.1041×10⁻⁴

Tables 1 and 2 report the execution time in seconds (CPU(s)) and the number of iterations (Iter(n)) of our algorithm when applied to this particular example. The stopping criterion in Tables 1 and 2 is $\frac{\|x_n - x_{n-1}\|}{\|x_2 - x_1\|} \le \mathrm{TOL} = 10^{-4}$.

Experiment 2 (Comparison): We now compare our result with the non-inertial (non-accelerated) proximal-type algorithms of [9] (ProxAL-A) and [10] (ProxAL-B). For this purpose, we use the following data:

Algorithm 1: $V : \mathbb{R}^p \to \mathbb{R}^p$ given by $Vx = \gamma x$ with $\gamma = 0.5$, $\alpha_n = \frac{1}{n}$, $\varepsilon_n = \frac{1}{n^2}$, $\xi_n^{(j)} = \frac{j}{3}$ $(j \in \Psi = \{1, 2\})$, $\beta = 0.8$, $\beta_n = \bar{\beta}_n$ and $\rho_n = \frac{1}{10}$.

ProxAL-A: $V = F = I$, $\mu = 1$, $\gamma = 0.5$, $\delta_n^{(i)} = \frac{i}{6}$ $(i \in \Phi = \{1, 2, 3\})$, $\xi_n^{(j)} = \frac{j}{3}$ $(j \in \Psi = \{1, 2\})$, $\alpha_n = \frac{1}{n+1}$ and $\rho_n = \frac{1}{10}$, see [9].

ProxAL-B: $\delta_n^{(i)} = \frac{i}{6}$ $(i \in \Phi = \{1, 2, 3\})$, $\xi_n^{(j)} = \frac{j}{3}$ $(j \in \Psi = \{1, 2\})$ and $\rho_n = \frac{1}{10}$, see [10].

Figures 4 and 5 along with Table 3 present the numerical results of our algorithm (Algorithm 1) in comparison with ProxAL-A and ProxAL-B. Figures 4 and 5 show the error $\|x_n\|$ versus the number of iterations, while Table 3 reports the execution time (CPU(s)) and the number of iterations (Iter(n)) of Algorithm 1, ProxAL-A and ProxAL-B for the stopping criterion $\frac{\|x_n - x_{n-1}\|}{\|x_2 - x_1\|} \le \mathrm{TOL} = 10^{-3}$.

Figure 4: For $p = q = 4$ and $x_0 = (1, 1, 1, 1)$, $x_1 = 10x_0$.

Figure 5: For $p = q = 80$ and $x_0 = (10, \ldots, 10)$, $x_1 = 10x_0$.

Table 3: For $x_0, x_1 \in \mathbb{R}^p$ with $x_0 = (100, 100, \ldots, 100)$ and $x_1 = 2x_0$

               Algorithm 1          ProxAL-A             ProxAL-B
Dimension    Iter(n)   CPU(s)    Iter(n)   CPU(s)    Iter(n)   CPU(s)
p=q=2           7      0.0354       7      0.0401       9      0.0379
p=q=10         12      0.0441      15      0.0616      21      0.1775
p=q=50         27      0.1908      29      0.2168      29      0.1921

From this preliminary numerical experiment, we observe that the performance of our algorithm depends crucially on the step sizes, starting points and problem dimension. Moreover, our proposed algorithm is efficient, easy to implement, and outperforms the algorithms proposed in [9] and [10].

5 Conclusions

In this article, we introduced a strong convergence theorem for an inertial extrapolation-type algorithm solving the SSMP (7). The problem considered in this article generalizes many of the problems considered in the literature on the approximation of solutions of unconstrained minimization problems, see for example [23,24,25,26,27,28]. Our result can also be applied to find a solution of the split system of inclusion problems, the MSSFP and the split system of equilibrium problems. Furthermore, our result improves the inertial extrapolation-type algorithm proposed in [25] and also improves and accelerates the algorithms in [9,10].

References

[1] Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Problems 21 (2005), no. 6, 2071–2084, https://doi.org/10.1088/0266-5611/21/6/017.

[2] Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algorithms 8 (1994), no. 2, 221–239, https://doi.org/10.1007/BF02142692.

[3] Y. Dang, Y. Gao, and L. Li, Inertial projection algorithms for convex feasibility problem, J. Syst. Eng. Electron. 23 (2012), no. 5, 734–740, https://doi.org/10.1109/JSEE.2012.00090.

[4] G. López, V. Martín-Márquez, F. Wang, and H. K. Xu, Solving the split feasibility problem without prior knowledge of matrix norms, Inverse Problems 28 (2012), no. 8, 85–104, https://doi.org/10.1088/0266-5611/28/8/085004.

[5] B. Qu and N. Xiu, A note on the CQ algorithm for the split feasibility problem, Inverse Problems 21 (2005), no. 5, 1655–1665, https://doi.org/10.1088/0266-5611/21/5/009.

[6] A. Latif, J. Vahidi, and M. Eslamian, Strong convergence for generalized multiple-set split feasibility problem, Filomat 30 (2016), no. 2, 459–467, https://doi.org/10.2298/FIL1602459L.

[7] E. Masad and S. Reich, A note on the multiple-set split convex feasibility problem in Hilbert space, J. Nonlinear Convex Anal. 8 (2007), no. 3, 367–371.

[8] H. K. Xu, A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem, Inverse Problems 22 (2006), no. 6, 2021–2034, https://doi.org/10.1088/0266-5611/22/6/007.

[9] A. G. Gebrie and R. Wangkeeree, Proximal method of solving split system of minimization problem, J. Appl. Math. Comput. 63 (2020), 107–132, https://doi.org/10.1007/s12190-019-01310-w.

[10] A. G. Gebrie and R. Wangkeeree, An iterative scheme for solving split system of minimization problems, J. Comput. Anal. Appl. 28 (2020), no. 6, 968–980.

[11] A. G. Gebrie and R. Wangkeeree, Parallel proximal method of solving split system of fixed point set constraint minimization problems, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 114 (2020), no. 1, https://doi.org/10.1007/s13398-019-00758-6.

[12] Y. C. Tang and L. W. Liu, Several iterative algorithms for solving the split common fixed point problem of directed operators with applications, Optimization 65 (2016), no. 1, 53–65, https://doi.org/10.1080/02331934.2014.984708.

[13] Y. Censor and A. Segal, The split common fixed point problem for directed operators, J. Convex Anal. 16 (2009), no. 2, 587–600.

[14] A. Abkar and E. Shahrosvand, The split common fixed point problem of two infinite families of demicontractive mappings and the split common null point problem, Filomat 31 (2017), no. 12, 3859–3874, https://doi.org/10.2298/FIL1712859A.

[15] M. M. Alves and B. F. Svaiter, A proximal-Newton method for unconstrained convex optimization in Hilbert spaces, Optimization 67 (2018), no. 1, 67–82, https://doi.org/10.1080/02331934.2017.1389942.

[16] P. E. Gill and W. Murray, Newton-type methods for unconstrained and linearly constrained optimization, Math. Program. 7 (1974), no. 1, 311–350, https://doi.org/10.1007/BF01585529.

[17] R. T. Rockafellar and R. J. Wets, Variational Analysis, Springer-Verlag, Berlin, Heidelberg, 2009.

[18] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, R.I.R.O. 4 (1970), no. R3, 154–158, https://doi.org/10.1051/m2an/197004R301541.

[19] B. Martinet, Détermination approchée d'un point fixe d'une application pseudo-contractante, C. R. Acad. Sci. Paris 274 (1972), no. 2, 163–165.

[20] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim. 14 (1976), no. 5, 877–898, https://doi.org/10.1137/0314056.

[21] A. Moudafi and B. S. Thakur, Solving proximal split feasibility problems without prior knowledge of operator norms, Optim. Lett. 8 (2014), no. 7, 2099–2110, https://doi.org/10.1007/s11590-013-0708-4.

[22] J. M. Hendrickx and A. Olshevsky, Matrix p-norms are NP-hard to approximate if p ≠ 1, 2, ∞, SIAM J. Matrix Anal. Appl. 31 (2010), no. 5, 2802–2812, https://doi.org/10.1137/09076773X.

[23] M. Abbas, M. AlShahrani, Q. H. Ansari, O. S. Iyiola, and Y. Shehu, Iterative methods for solving proximal split minimization problems, Numer. Algorithms 78 (2018), no. 1, 193–215, https://doi.org/10.1007/s11075-017-0372-3.

[24] Y. Shehu, G. Cai, and O. S. Iyiola, Iterative approximation of solutions for proximal split feasibility problems, Fixed Point Theory Appl. 2015 (2015), art. 123, https://doi.org/10.1186/s13663-015-0375-5.

[25] Y. Shehu and O. S. Iyiola, Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method, J. Fixed Point Theory Appl. 19 (2017), no. 4, 2483–2510, https://doi.org/10.1007/s11784-017-0435-z.

[26] Y. Shehu and O. S. Iyiola, Strong convergence result for proximal split feasibility problem in Hilbert spaces, Optimization 66 (2017), no. 12, 2275–2290, https://doi.org/10.1080/02331934.2017.1370648.

[27] Y. Shehu and O. S. Iyiola, Accelerated hybrid viscosity and steepest-descent method for proximal split feasibility problems, Optimization 67 (2018), no. 4, 475–492, https://doi.org/10.1080/02331934.2017.1405955.

[28] Y. Shehu and O. S. Iyiola, Nonlinear iteration method for proximal split feasibility problems, Math. Meth. Appl. Sci. 41 (2018), no. 2, 781–802, https://doi.org/10.1002/mma.4644.

[29] Y. Shehu and F. U. Ogbuisi, Convergence analysis for proximal split feasibility problems and fixed point problems, J. Appl. Math. Comput. 48 (2015), no. 1–2, 221–239, https://doi.org/10.1007/s12190-014-0800-7.

[30] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys. 4 (1964), no. 5, 1–17, https://doi.org/10.1016/0041-5553(64)90137-5.

[31] H. Attouch, J. Peypouquet, and P. Redont, A dynamical approach to an inertial forward-backward algorithm for convex minimization, SIAM J. Optim. 24 (2014), no. 1, 232–256, https://doi.org/10.1137/130910294.

[32] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. 2 (2009), no. 1, 183–202, https://doi.org/10.1137/080716542.

[33] C. Chen, R. H. Chan, S. Ma, and J. Yang, Inertial proximal ADMM for linearly constrained separable convex optimization, SIAM J. Imaging Sci. 8 (2015), no. 4, 2239–2267, https://doi.org/10.1137/15100463X.

[34] P. Ochs, T. Brox, and T. Pock, iPiasco: Inertial proximal algorithm for strongly convex optimization, J. Math. Imaging Vision 53 (2015), no. 2, 171–181, https://doi.org/10.1007/s10851-015-0565-0.

[35] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, New York, 2011, https://doi.org/10.1007/978-1-4419-9467-7.

[36] H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland Mathematics Studies, vol. 5, American Elsevier Publishing Company, New York, 1973.

[37] R. T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Trans. Amer. Math. Soc. 149 (1970), no. 1, 75–88, https://doi.org/10.2307/1995660.

[38] B. Lemaire, Which fixed point does the iteration method select?, in: P. Gritzmann, R. Horst, E. Sachs, R. Tichatschke (eds.), Recent Advances in Optimization, Lecture Notes in Economics and Mathematical Systems, vol. 452, Springer, Berlin, Heidelberg, pp. 154–167, https://doi.org/10.1007/978-3-642-59073-3_11.

[39] H. K. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc. 66 (2002), no. 1, 240–256, https://doi.org/10.1112/S0024610702003332.

[40] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal. 16 (2008), no. 7–8, 899–912, https://doi.org/10.1007/s11228-008-0102-z.

[41] E. Blum and W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student 63 (1994), 123–145.

[42] P. L. Combettes and S. A. Hirstoaga, Equilibrium programming in Hilbert spaces, J. Nonlinear Convex Anal. 6 (2005), no. 1, 117–136.

Received: 2020-07-18
Revised: 2020-10-10
Accepted: 2020-10-26
Published Online: 2020-12-31

© 2020 Anteneh Getachew Gebrie and Rabian Wangkeeree, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
