Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152-8552, Japan

On the Finite Convergence of Successive SDP Relaxation Methods*

Masakazu Kojima, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152-8552, Japan. e-mail: kojima@is.titech.ac.jp

Levent Tuncel†, Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. e-mail: ltuncel@math.uwaterloo.ca

August 1999, B-354

Abstract. Let $F$ be a subset of the $n$-dimensional Euclidean space $\mathbb{R}^n$ represented in terms of a compact convex subset $C_0$ and a set $\mathcal{P}_F$ of finitely or infinitely many quadratic functions on $\mathbb{R}^n$ such that $F = \{x \in C_0 : p(x) \le 0\ (\forall p(\cdot) \in \mathcal{P}_F)\}$. In this paper, we investigate some fundamental properties related to the finite convergence of the successive SDP (semidefinite programming) relaxation method proposed by the authors for approximating the convex hull of $F$.

Keywords: Nonconvex Quadratic Program, Finite Convergence, Complexity, Semidefinite Programming, Global Optimization, SDP Relaxation, Convex Relaxation, Lift-and-Project Procedure.

* This manuscript was also issued as Research Report 99-36, Department of Combinatorics and Optimization, University of Waterloo, Ontario, Canada, August 1999.
† Research of this author was supported in part by a grant from NSERC and a PREA from Ontario, Canada.

1 Introduction.

Let $F$ be a compact subset of the $n$-dimensional Euclidean space $\mathbb{R}^n$ which is described in terms of a nonempty compact convex subset $C_0$ of $\mathbb{R}^n$ and a set of quadratic inequalities:

$F = \{x \in C_0 : p(x) \le 0\ (\forall p(\cdot) \in \mathcal{P}_F)\}$,

where $\mathcal{P}_F$ denotes a set of finitely or infinitely many quadratic functions on $\mathbb{R}^n$. $\mathcal{P}_F$ can contain nonconvex quadratic functions, so the set $F$ is nonconvex in general. This paper studies theoretical convergence properties of the SCRM (successive convex relaxation method) proposed by Kojima-Tuncel [3] for approximating c.hull$(F)$, the convex hull of $F$. Assuming that a compact convex relaxation $C_k$ of $F$ (i.e., a compact convex set $C_k \supseteq F$) has been computed at the previous iteration (or given initially when $k = 0$), we outline one iteration of the SCRM. We first prepare a set $\mathcal{P}^k = \{p(\cdot)\}$ of quadratic functions which induce a set of valid inequalities for $C_k$:

$C_k \subseteq \{x \in \mathbb{R}^n : p(x) \le 0\ (\forall p(\cdot) \in \mathcal{P}^k)\}.$

Since Ck was constructed so as to include F , we can add these valid inequalities to the representation of F :

$F = F(C_0; \mathcal{P}_F \cup \mathcal{P}^k) = \{x \in C_0 : p(x) \le 0\ (\forall p(\cdot) \in \mathcal{P}_F \cup \mathcal{P}^k)\}.$

(We often write $F(C_0; \mathcal{P})$ instead of $F$ to designate a representation of $F$ in terms of the compact convex set $C_0$ and a set $\mathcal{P}$ of quadratic functions.) We then apply the SDP (semidefinite programming) relaxation or the SILP (semi-infinite linear programming) relaxation to the set $F$ with the above representation to generate a new compact convex relaxation $C_{k+1}$ of $F$. (The latter relaxation is also called the reformulation-linearization technique in the literature [6, 7].) It should be noted that although the inequalities $p(x) \le 0$ $(p(\cdot) \in \mathcal{P}^k)$ added are redundant for representing $F$ itself, they play an essential role in tightening the SDP (or SILP) relaxation $C_{k+1}$ of $F$.

At each iteration of the SCRM, we need to provide a set $\mathcal{P}^k$ of quadratic functions which induce valid inequalities for $C_k$. Kojima-Tuncel [3] presented several models for $\mathcal{P}^k$. Among others, the continuous rank-2 SDP and SILP models are significant in theory and practice. In these models, each $p(\cdot) \in \mathcal{P}^k$ is generated as the negation of the product of two linear functions $p_1(\cdot)$ and $p_2(\cdot)$ on $\mathbb{R}^n$ which induce valid inequalities for $C_k$; $p(\cdot) = -p_1(\cdot) p_2(\cdot)$ forms a rank-2 quadratic function, and the inequality $p(x) \le 0$ is valid for $C_k$ since both $p_1(x) \le 0$ and $p_2(x) \le 0$ are valid for $C_k$. (We can compute such a linear function by maximizing a linear function $d^T x$ over the compact convex set $C_k$.) For the continuous rank-2 SDP and SILP models, we take $\mathcal{P}^k$ equal to the set of all rank-2 quadratic functions inducing valid inequalities for $C_k$ in this way. Kojima-Tuncel [3] established that the generated sequence of compact convex relaxations $C_k$ of $F$ $(k = 0, 1, 2, \dots)$ converges to c.hull$(F)$ in the following sense.

(a) $C_k \supseteq C_{k+1} \supseteq F$ $(k = 0, 1, 2, \dots)$ (monotonicity).
(b) $\bigcap_{k=0}^{\infty} C_k = \mathrm{c.hull}(F)$ (asymptotic convergence).
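To make the rank-2 construction above concrete, here is a small numerical sketch (added for illustration, not part of the original paper). It takes a hypothetical relaxation $C_k = [0,1]^2$, forms two linear valid inequalities $p_1(x) \le 0$ and $p_2(x) \le 0$ for it, and checks by sampling that the rank-2 quadratic function $p = -p_1 p_2$ induces a valid inequality $p(x) \le 0$ on $C_k$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relaxation C_k: the unit box [0, 1]^2.
# Two linear valid inequalities p_i(x) <= 0 for C_k:
#   p1(x) = x1 - 1 <= 0,   p2(x) = -x2 <= 0.
def p1(x): return x[0] - 1.0
def p2(x): return -x[1]

# Rank-2 quadratic function p = -p1 * p2; p(x) <= 0 must hold on C_k,
# since the product of two nonpositive factors is nonnegative there.
def p(x): return -p1(x) * p2(x)

samples = rng.uniform(0.0, 1.0, size=(1000, 2))  # points of C_k
assert all(p(x) <= 1e-12 for x in samples)
print("rank-2 inequality p(x) <= 0 holds at all sampled points of C_k")
```

Although $p$ is nonconvex, its validity on $C_k$ follows directly from the sign pattern of the two linear factors, which is exactly the mechanism the continuous rank-2 models exploit.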


The continuous rank-2 SDP and SILP models themselves are mainly of theoretical interest: they are neither implementable nor practical because, at each iteration, we need not only to provide a continuum of rank-2 quadratic functions in $\mathcal{P}^k$ but also to solve a continuum of maximization problems with linear objective functions over the convex set $C_k$. In their succeeding paper [4], Kojima and Tuncel proposed two techniques, discretization and localization, for a continuum of those maximization problems with linear objective functions over the convex set $C_k$. Based on the paper [4], Takeda, Dai, Fukuda and Kojima [8] implemented a discretized-localized rank-2 SILP model of the SCRM and reported some numerical results.

The continuous rank-2 SDP and SILP models extend the Lovasz-Schrijver lift-and-project procedures [5], which use the SDP relaxation and the SILP relaxation for 0-1 integer convex optimization problems, where we take a compact convex subset of $[0,1]^n$ as $C_0$ and

$F = \{x \in C_0 : x_j^2 - x_j = 0\ (j = 1, 2, \dots, n)\}$,

to general quadratic optimization problems. When the continuous rank-2 SDP and SILP models are applied to the $F$ given above, they work essentially in the same way as the Lovasz-Schrijver lift-and-project procedures, so that they terminate in at most $n$ iterations, generating the exact convex hull of $F$ (Section 6 of [3] and Section 7 of [4]). It is known by an example in Section 8 of [3], however, that the convergence rate of the sequence of convex relaxations $C_k \subseteq C_0$ of $F$ $(k = 0, 1, 2, \dots)$ to the convex hull of $F$ is slower than linear in the worst case for general nonconvex quadratic optimization problems. More recently, Kojima-Takeda [2] discussed the computational complexity of the continuous rank-2 SDP and SILP models.
They estimated (by giving an upper bound) the number of iterations which the continuous rank-2 SDP and SILP models require to attain a convex relaxation of $F$ with a given accuracy $\epsilon$, in terms of $\epsilon$, the diameter of $C_0$, the diameter of $F$, and some other quantities characterizing the Lipschitz continuity, the nonlinearity and the nonconvexity of the set $\mathcal{P}_F$ of quadratic functions.

At the time of this writing, little is known about the finite convergence of the SCRM except for the finite convergence result of the continuous rank-2 SDP and SILP models mentioned above for 0-1 integer optimization problems. The purpose of the current paper is to derive some fundamental properties which characterize the finite convergence of the continuous rank-2 SDP model. Let $C \subseteq C_0$ be a compact convex relaxation of $F$. For every point $x$ of $C$, we use the symbol $E(x; C)$ for the minimal face of $C$ containing $x$, and $A(x; C)$ for the affine subspace spanned by $E(x; C)$. Let $\{C_k \subseteq C_0 : k = 0, 1, \dots\}$ be a sequence of convex relaxations of $F$ generated by the continuous rank-2 SDP model of the SCRM. Then

$C_k = \mathrm{c.hull}(F)$ if and only if $A(x; C_k) = A(x; \mathrm{c.hull}(F))$ for every $x \in \mathrm{c.hull}(F)$.

We will utilize this relation to present some fundamental properties on the finite convergence of the continuous rank-2 SDP model of the SCRM. Let $x^* \in \mathrm{c.hull}(F)$. Then we see from the monotonicity property (a) that

$A(x^*; C_0) \supseteq A(x^*; C_1) \supseteq \cdots \supseteq A(x^*; C_k) \supseteq \cdots \supseteq A(x^*; \mathrm{c.hull}(F))$,
$\dim A(x^*; C_0) \ge \dim A(x^*; C_1) \ge \cdots \ge \dim A(x^*; C_k) \ge \cdots \ge \dim A(x^*; \mathrm{c.hull}(F))$.  (1)

Since each $A(x^*; C_k)$ is an affine subspace of $\mathbb{R}^n$, $A(x^*; C_k)$ and $A(x^*; C_{k+1})$ coincide with each other if and only if their dimensions $\dim A(x^*; C_k)$ and $\dim A(x^*; C_{k+1})$ coincide with each other. Hence at most $\dim A(x^*; C_0) - \dim A(x^*; \mathrm{c.hull}(F)) + 1$ distinct affine subspaces appear along the sequence $\{A(x^*; C_k) : k = 0, 1, \dots\}$, and at most $\dim A(x^*; C_0) - \dim A(x^*; \mathrm{c.hull}(F))$ strict inequalities occur in (1). Consequently, if the sequence $\{C_k \subseteq C_0 : k = 0, 1, \dots\}$ converges to c.hull$(F)$ in a finite number of iterations and if $\dim A(x^*; C_0) > \dim A(x^*; \mathrm{c.hull}(F))$, then $\dim A(x^*; C_k) > \dim A(x^*; C_{k+1})$ for at least one $k \ge 0$. We present some fundamental properties (Lemmas 2.4 and 2.5) on when and how a strict inequality occurs in (1), and derive from those properties some necessary conditions for the finite convergence of the continuous rank-2 SDP model of the SCRM. In Section 2, we state the fundamental properties after describing the general successive SDP relaxation method and its continuous rank-2 model. Sections 3, 4 and 5 are devoted to the proofs of the fundamental properties.

2 Main Results.

We first describe the general successive SDP relaxation method in Subsection 2.1 and the continuous rank-2 model as its special case in Subsection 2.2. We then present some fundamental lemmas on one iteration of the continuous rank-2 model together with an illustrative example in Subsection 2.3. Based on these lemmas, we discuss the finite convergence of the continuous rank-2 model in Subsection 2.4.

2.1 The General Successive SDP Relaxation Method.

Let

$S^n$ : the set of $n \times n$ symmetric matrices,
$S^n_+$ : the set of $n \times n$ symmetric positive semidefinite matrices.

We use the notation $\mathrm{qf}(\cdot; \gamma, q, Q)$ for a quadratic function $p(\cdot)$ on $\mathbb{R}^n$ having a constant term $\gamma$, a linear term $2 q^T x$ and a quadratic term $x^T Q x$:

$\mathrm{qf}(x; \gamma, q, Q) = \gamma + 2 q^T x + x^T Q x$ for every $x \in \mathbb{R}^n$.

Then the set $\mathcal{Q}$ of quadratic functions on $\mathbb{R}^n$ can be written as

$\mathcal{Q} = \{\mathrm{qf}(\cdot; \gamma, q, Q) : \gamma \in \mathbb{R},\ q \in \mathbb{R}^n,\ Q \in S^n\}.$


Suppose that the compact subset $F$ of $\mathbb{R}^n$ under consideration is represented as

$F = F(C_0; \mathcal{P}) = \{x \in C_0 : \mathrm{qf}(x; \gamma, q, Q) \le 0\ (\forall \mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{P})\}$  (2)

for some $\mathcal{P} \subseteq \mathcal{Q}$. Then the SDP relaxation $\widehat{F}(C_0; \mathcal{P})$ of $F(C_0; \mathcal{P})$ is defined as

$\widehat{F}(C_0; \mathcal{P}) = \left\{ x \in C_0 : \begin{array}{l} \exists X \in S^n \text{ such that } \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \in S^{1+n}_+ \text{ and} \\ \gamma + 2 q^T x + Q \bullet X \le 0\ (\forall \mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{P}) \end{array} \right\}.$

Here $Q \bullet X$ denotes the matrix inner product $\sum_{i,j} Q_{ij} X_{ij}$.
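The defining property $F \subseteq \widehat{F}(C_0; \mathcal{P})$ can be seen by taking the rank-one lifting $X = x x^T$ for any $x \in F$. The following sketch (a hypothetical two-dimensional instance, not from the paper) verifies this numerically:

```python
import numpy as np

# A hypothetical instance: n = 2, one nonconvex quadratic constraint
# qf(x; gamma, q, Q) = gamma + 2 q^T x + x^T Q x <= 0 with Q indefinite.
gamma = -1.0
q = np.zeros(2)
Q = np.diag([1.0, -1.0])          # qf(x) = x1^2 - x2^2 - 1

x = np.array([1.0, 1.0])          # qf(x) = -1 <= 0, so x is feasible
X = np.outer(x, x)                # the rank-one lifting X = x x^T

# The lifted moment matrix [[1, x^T], [x, X]] is positive semidefinite ...
M = np.block([[np.ones((1, 1)), x[None, :]], [x[:, None], X]])
eigs = np.linalg.eigvalsh(M)
assert eigs.min() >= -1e-10

# ... and the linearized constraint gamma + 2 q^T x + <Q, X> <= 0 holds,
# because <Q, x x^T> = x^T Q x.  Hence every x in F is feasible for the
# SDP relaxation.
lin = gamma + 2 * q @ x + np.tensordot(Q, X)
assert abs(lin - (gamma + 2 * q @ x + x @ Q @ x)) < 1e-12
print("lifted constraints hold at x; linearized value =", lin)
```

The relaxation is generally strictly larger than $F$, since $X$ is allowed to differ from $x x^T$ as long as the moment matrix stays positive semidefinite.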

Algorithm 2.1. (The general successive SDP relaxation method [3])

Step 0: Let $k = 0$.
Step 1: If $C_k = \mathrm{c.hull}(F)$, then stop.
Step 2: Let $C_k \subseteq C_0$ be the compact convex relaxation of $F$ obtained at the previous iteration (or given initially when $k = 0$). Choose a set $\mathcal{P}^k$ of quadratic functions such that each $\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{P}^k$ induces a valid inequality for $C_k$:

$\mathrm{qf}(x; \gamma, q, Q) \le 0$ for every $x \in C_k$.

Add the inequalities

$\mathrm{qf}(x; \gamma, q, Q) \le 0 \quad (\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{P}^k)$

to the representation of $F$:

$F = F(C_0; \mathcal{P}_F \cup \mathcal{P}^k) = \{x \in C_0 : \mathrm{qf}(x; \gamma, q, Q) \le 0\ (\forall \mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{P}_F \cup \mathcal{P}^k)\}.$

Step 3: Apply the SDP relaxation to $F$ with the representation above to obtain a new compact convex relaxation $C_{k+1} \subseteq C_0$ of $F(C_0; \mathcal{P}_F \cup \mathcal{P}^k)$; that is, let $C_{k+1} = \widehat{F}(C_0; \mathcal{P}^k \cup \mathcal{P}_F)$.
Step 4: Let $k = k + 1$ and go to Step 1.

2.2 A Continuous Rank-2 Model of the Successive SDP Relaxation Method.

Let

$\widetilde{D} = \{d \in \mathbb{R}^n : d \ne 0\}$,
$\alpha(d; C) = \max\{d^T x : x \in C\}$ for every $d \in \widetilde{D}$ and $C \subseteq \mathbb{R}^n$,
$\mathrm{r2sf}(x; d, d'; C) = -(d^T x - \alpha(d; C))(d'^T x - \alpha(d'; C_0))$ for every $d \in \widetilde{D}$, $d' \in \widetilde{D}$ and $C \subseteq C_0$.


The two terms $(d^T x - \alpha(d; C))$ and $(d'^T x - \alpha(d'; C_0))$ which appear in the definition of $\mathrm{r2sf}(x; d, d'; C)$ are linear supporting functions for $C \subseteq C_0$ and $C_0$, respectively, and induce linear valid inequalities for $C$ and $C_0$, respectively, i.e.,

$d'^T x - \alpha(d'; C_0) \le 0$ for every $x \in C_0$,
$d^T x - \alpha(d; C) \le 0$ for every $x \in C$.

Hence $\mathrm{r2sf}(\cdot; d, d'; C)$, the negation of the product of these two linear supporting functions, forms a rank-2 quadratic supporting function and induces a rank-2 quadratic valid inequality for $C$, i.e.,

$\mathrm{r2sf}(x; d, d'; C) = -(d^T x - \alpha(d; C))(d'^T x - \alpha(d'; C_0)) \le 0$ for every $x \in C$.

Let

$\mathrm{R2SF}(D, D'; C) = \{\mathrm{r2sf}(\cdot; d, d'; C) : d \in D,\ d' \in D'\}$ for every $D \subseteq \widetilde{D}$, $D' \subseteq \widetilde{D}$ and $C \subseteq C_0$;
$v^1, v^2, \dots, v^n$ : a basis of $\mathbb{R}^n$;
$D_V = \{v^1, v^2, \dots, v^n, -v^1, -v^2, \dots, -v^n\}$.
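As a quick sanity check of these definitions (a sketch with hypothetical box-shaped sets, not from the paper), the support function $\alpha(d; C)$ of a coordinate box is computable in closed form, and every sampled $\mathrm{r2sf}(\cdot; d, d'; C)$ can then be verified to be nonpositive on $C$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sets: C0 = [-2, 2]^2 and C = [-1, 1]^2 with C contained in C0.
def alpha(d, lo, hi):
    """alpha(d; box) = max{d^T x : x in box} for a coordinate box [lo, hi]^2."""
    return np.sum(np.where(d > 0, d * hi, d * lo))

def r2sf(x, d, dprime):
    # -(d^T x - alpha(d; C)) * (d'^T x - alpha(d'; C0))
    return -(d @ x - alpha(d, -1.0, 1.0)) * (dprime @ x - alpha(dprime, -2.0, 2.0))

# Both linear factors are nonpositive on C, so r2sf(x; d, d'; C) <= 0 on C.
for _ in range(200):
    d = rng.standard_normal(2)
    dprime = rng.standard_normal(2)
    x = rng.uniform(-1.0, 1.0, size=2)     # a point of C
    assert r2sf(x, d, dprime) <= 1e-10
print("all sampled rank-2 inequalities are valid on C")
```

Note that the first factor uses the tighter set $C$ while the second uses $C_0$; this asymmetry is exactly as in the definition of $\mathrm{r2sf}$ above.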

Now we are ready to describe a continuous rank-2 model of Algorithm 2.1.

Algorithm 2.2. (A continuous rank-2 model of the successive SDP relaxation method [3, 4])

Step 0: Let $k = 0$.
Step 1: If $C_k = \mathrm{c.hull}(F)$, then stop.
Step 2: Let $\mathcal{P}^k = \mathrm{R2SF}(\widetilde{D}, D_V; C_k)$.
Step 3: Let $C_{k+1} = \widehat{F}(C_0; \mathcal{P}^k \cup \mathcal{P}_F)$.

Step 4: Let k = k + 1 and go to Step 1.

In Step 2 we may replace $\widetilde{D}$ in $\mathrm{R2SF}(\widetilde{D}, D_V; C_k)$ by $D = \{d \in \mathbb{R}^n : \|d\| = 1\}$, since the identity

$\widehat{F}(C_0; \mathrm{R2SF}(D, D_V; C_k) \cup \mathcal{P}_F) = \widehat{F}(C_0; \mathrm{R2SF}(\widetilde{D}, D_V; C_k) \cup \mathcal{P}_F)$

holds. Actually, $D$ was employed in the previous continuous rank-2 models of the successive SDP relaxation method in the papers [2, 3, 4]. This difference is not essential, but we will use $\widetilde{D}$ instead of $D$ for simplicity of notation in the succeeding discussions.


2.3 Some Fundamental Lemmas.

For every $\mathcal{P} \subseteq \mathcal{Q}$, let c.cone$(\mathcal{P})$ denote the closure of the convex cone generated by $\mathcal{P}$. For each affine subspace $A$ of $\mathbb{R}^n$, let $\mathcal{Q}_+(A)$ denote the set of quadratic functions on $\mathbb{R}^n$ which are convex on $A$. Suppose that the compact subset $F$ of $\mathbb{R}^n$ under consideration is represented as in (2) for some $\mathcal{P} \subseteq \mathcal{Q}$. A relaxation using convex-quadratic valid inequalities [1] (also called a semi-infinite convex quadratic relaxation in the papers [3, 4]) $\widetilde{F}(C_0; \mathcal{P})$ of $F(C_0; \mathcal{P})$ is defined as

$\widetilde{F}(C_0; \mathcal{P}) = \{x \in C_0 : \mathrm{qf}(x; \gamma, q, Q) \le 0\ (\forall \mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}) \cap \mathcal{Q}_+(\mathbb{R}^n))\}.$

Lemma 2.3. (Theorem 2.1 of [1], Theorem 4.2 of [3]) $\widehat{F}(C_0; \mathcal{P}_F) = \widetilde{F}(C_0; \mathcal{P}_F)$.

This lemma has played a key role in the convergence analysis of the successive SDP relaxation method in the papers [2, 3, 4]. We will also fully utilize the lemma in the succeeding discussions. In the remainder of this subsection, let $C \subseteq C_0$ denote a compact convex relaxation of $F$, which stands for some iterate $C_k$ of Algorithm 2.2. We say that a face $E$ of $C$ is exposed if $E$ is the intersection of $C$ with some supporting hyperplane of $C$. The lemma below provides a sufficient condition for one iteration of Algorithm 2.2 to cut off a point $\hat{x} \in C$. The proof of the lemma will be given in Section 4.

Lemma 2.4. Let $E$ be an exposed face of $C$, $A$ the affine subspace of $\mathbb{R}^n$ spanned by $E$, and let $\hat{x} \in E$. Suppose that there is a quadratic function $\mathrm{qf}(\cdot; \gamma^0, q^0, Q^0) \in \mathrm{c.cone}(\mathcal{P}_F)$ satisfying

$\mathrm{qf}(\hat{x}; \gamma^0, q^0, Q^0) > 0$ and $\mathrm{qf}(\cdot; \gamma^0, q^0, Q^0) \in \mathcal{Q}_+(A)$.

Then $\hat{x} \notin \widehat{F}(C; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C))$.

For every affine subspace $A$ of $\mathbb{R}^n$, let

$\mathcal{P}_F^+(A) = \{\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{P}_F : \mathrm{qf}(\tilde{x}; \gamma, q, Q) > 0$ for some $\tilde{x} \in A\}$.  (3)

We say that a quadratic function $\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{Q}$ is nonzero (on an affine subspace $A$ of $\mathbb{R}^n$, respectively) if it is not identically zero on $\mathbb{R}^n$ (on $A$, respectively). The lemma below provides a necessary condition for one iteration of Algorithm 2.2 to cut off every open neighborhood of $x^* \in F$. The proof of the lemma will be given in Section 5.

Lemma 2.5. Suppose that $x^* \in F$ lies in the relative boundary of

$\widehat{F}(C; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C)) \cap A(x^*; C)$

with respect to $A(x^*; C)$ (i.e., every open neighborhood of $x^*$ with respect to $A(x^*; C)$ is cut off by the SDP relaxation $\widehat{F}(C; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C))$). Then there exists a nonzero $\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F^+(A(x^*; C))) \cap \mathcal{Q}_+(A(x^*; C))$ such that $\mathrm{qf}(x^*; \gamma, q, Q) = 0$.


[Figure 1: $C_0(\kappa)$, $F(\kappa)$ and c.hull$(F(\kappa))$; the figure shows the points $x^1 = (0, 0)$, $x^2 = (3, 0)$, $x^3 = (3,\ 2 - \kappa(3 - \sqrt{5})/2)$, $x^4 = (0,\ 2 + \kappa\sqrt{5}/2)$, $x^5 = (1, 0)$ and $x^6 = (\sqrt{5}, 2)$ in the $(x_1, x_2)$-plane.]

To illustrate the assertions of the two lemmas above, we give a small example. See Figure 1. For every $\kappa \in [0, 4]$, let

$C_0(\kappa) = \{x = (x_1, x_2) : 0 \le x_1 \le 3,\ 0 \le x_2,\ (\kappa/2) x_1 + x_2 \le 2 + (\kappa/2)\sqrt{5}\}$,
$\mathcal{P}_F = \{p(\cdot)\}$, where $p(x_1, x_2) = x_1^2 - x_2^2 - 1$,
$F(\kappa) = \{x = (x_1, x_2) \in C_0(\kappa) : p(x) \le 0\ (\forall p(\cdot) \in \mathcal{P}_F)\}$.

Then $C_0(\kappa)$ turns out to be a polygon having the four vertices

$x^1 = (0, 0)$, $x^2 = (3, 0)$, $x^3 = (3,\ 2 - \kappa(3 - \sqrt{5})/2)$, $x^4 = (0,\ 2 + \kappa\sqrt{5}/2)$,

and the convex hull of $F(\kappa)$ turns out to be a polygon having the four vertices

$x^1$, $x^5 = (1, 0)$, $x^6 = (\sqrt{5}, 2)$, $x^4$.
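The stated geometry of this example can be verified numerically. The sketch below (not part of the paper) checks that $x^6 = (\sqrt{5}, 2)$ lies on the boundary of $F$, that $p > 0$ on the bottom edge when $x_1 > 1$, and that the restriction of $p$ to the top edge of $C_0(\kappa)$ (the segment joining $x^3$ and $x^4$) has second derivative $2(1 - \kappa^2/4)$, hence is convex for $\kappa < 2$ and strictly concave for $\kappa > 2$:

```python
import numpy as np

def p(x1, x2):
    return x1**2 - x2**2 - 1.0   # the single quadratic function in P_F

# x6 = (sqrt(5), 2) satisfies p(x6) = 0, and p > 0 on the bottom edge
# when x1 > 1 (e.g. at (2, 0)).
assert abs(p(np.sqrt(5.0), 2.0)) < 1e-12
assert p(2.0, 0.0) > 0.0

# Restrict p to the line through x3 and x4: the top edge of C0(kappa)
# satisfies (kappa/2) x1 + x2 = 2 + (kappa/2) sqrt(5).
def p_on_top_edge(x1, kappa):
    x2 = 2.0 + 0.5 * kappa * (np.sqrt(5.0) - x1)
    return p(x1, x2)

# The second derivative of the restriction is 2 * (1 - kappa**2 / 4):
# positive (convex) for kappa < 2, negative (concave) for kappa > 2.
for kappa, sign in [(1.0, +1.0), (3.0, -1.0)]:
    h = 1e-4
    second = (p_on_top_edge(np.sqrt(5.0) + h, kappa)
              - 2.0 * p_on_top_edge(np.sqrt(5.0), kappa)
              + p_on_top_edge(np.sqrt(5.0) - h, kappa)) / h**2
    assert sign * second > 0.0
print("example geometry confirmed; curvature on the top edge flips at kappa = 2")
```

This curvature flip at $\kappa = 2$ is what separates the two regimes discussed next.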

We observe that $E = \{(x_1, 0) \in \mathbb{R}^2 : 0 \le x_1 \le 3\}$ is a 1-dimensional exposed face of $C_0 = C_0(\kappa)$, that the quadratic function $p(x_1, x_2) = x_1^2 - x_2^2 - 1$ in $\mathcal{P}_F$ is convex on the line $A = \{(x_1, 0) \in \mathbb{R}^2 : x_1 \in \mathbb{R}\}$ spanned by $E$, and that $p(x_1, x_2) = x_1^2 - x_2^2 - 1 > 0$ if $(x_1, x_2) \in E$ and $x_1 > 1$.


Hence, by Lemma 2.4, we know that if $(x_1, x_2) \in E$ and $x_1 > 1$, then $(x_1, x_2) \notin C_1 = \widehat{F}(C_0; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C_0))$. Therefore, we obtain that

$1 = \dim A(x^5; C_0) > \dim A(x^5; C_1) = \dim A(x^5; \mathrm{c.hull}(F)) = 0.$

When $\kappa \in [0, 2]$, we can apply a similar argument to the point $x^6 \in \mathrm{c.hull}(F(\kappa)) \cap F$ to derive

$1 = \dim A(x^6; C_0) > \dim A(x^6; C_1) = \dim A(x^6; \mathrm{c.hull}(F)) = 0.$

Now suppose that $\kappa \in (2, 4]$. In this case, the quadratic function $(x_1^2 - x_2^2 - 1)$ in $\mathcal{P}_F$ is strictly concave on the line $A(x^6; C_0)$ spanned by the line segment $E(x^6; C_0)$ joining $x^3$ and $x^4$. Applying Lemma 2.5, we see that $x^6$ remains in the relative interior of

$C_1 \cap A(x^6; C_0) = \widehat{F}(C_0; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C_0)) \cap A(x^6; C_0)$

with respect to $A(x^6; C_0)$. By induction, we can further prove that for any $k \ge 1$, $x^6$ lies in the relative interior of $C_k \cap A(x^6; C_0)$ with respect to $A(x^6; C_0)$. Therefore

$1 = \dim A(x^6; C_0) = \dim A(x^6; C_k) > \dim A(x^6; \mathrm{c.hull}(F)) = 0$ for $k = 1, 2, \dots$.

The lemma below is also used in the next subsection.

Lemma 2.6. Let $x^* \in F$. Assume that $\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F^+(A(x^*; C))) \cap \mathcal{Q}_+(A(x^*; C))$, that $\mathrm{qf}(x^*; \gamma, q, Q) = 0$ and that $\mathrm{qf}(\cdot; \gamma, q, Q)$ is nonzero on $A(x^*; C)$. Let $U$ be an arbitrary open neighborhood of $x^*$ with respect to $A(x^*; C)$. Then there is an $\hat{x} \in U$ such that $\mathrm{qf}(\hat{x}; \gamma, q, Q) > 0$; hence $\hat{x} \notin F$.

Proof: For simplicity of notation, we may assume that $x^* = 0$. Then $\gamma = 0$. If $q^T x \ne 0$ for some $x \in A(x^*; C)$, then, for every sufficiently small $\lambda > 0$, we have either $\mathrm{qf}(\lambda x; \gamma, q, Q) > 0$ or $\mathrm{qf}(-\lambda x; \gamma, q, Q) > 0$; therefore the desired result follows. Now assume that $q^T x = 0$ for every $x \in A(x^*; C)$. Then $x^T Q x \ne 0$ for some $x \in A(x^*; C)$, since $\mathrm{qf}(\cdot; \gamma, q, Q)$ is nonzero on $A(x^*; C)$. Since $Q$ is positive semidefinite on $A(x^*; C)$, we see that $x^T Q x > 0$. Hence, for any sufficiently small $\lambda > 0$, we have $\mathrm{qf}(\lambda x; \gamma, q, Q) > 0$; hence the desired result follows.

2.4 On Finite Convergence of Algorithm 2.2.

Throughout this subsection, let $\{C_k \subseteq C_0 : k = 0, 1, \dots\}$ denote a sequence of convex relaxations of $F$ generated by Algorithm 2.2. As we have observed in the Introduction, the relations in (1) hold, and the sequence converges to c.hull$(F)$ in $k$ iterations if and only if

$\dim A(x; C_k) = \dim A(x; \mathrm{c.hull}(F))$ for every $x \in \mathrm{c.hull}(F)$.

Let $x \in \mathrm{c.hull}(F)$. Assume that $\dim A(x; C_0) > \dim A(x; \mathrm{c.hull}(F))$. If the sequence converges to c.hull$(F)$ in $k$ iterations, then $\dim A(x; C_k)$ needs to decrease strictly at some of the iterations from $0$ through $k$. We focus our attention on the case where $x \in \mathrm{c.hull}(F)$ lies in $F$, and discuss when and how the decrease in $\dim A(x; C_k)$ occurs.

Theorem 2.7. Assume that

(i) $x^* \in F$.
(ii) $n = \dim A(x^*; C_0) > \dim A(x^*; C_k)$ for some $k \ge 1$.

Then $n > \dim A(x^*; C_1)$.

Proof: By assumption (ii), there is an $\ell \ge 0$ such that

$n = \dim A(x^*; C_\ell) > \dim A(x^*; C_{\ell+1}).$

By Lemma 2.5, there exists a nonzero $\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F^+(\mathbb{R}^n)) \cap \mathcal{Q}_+(\mathbb{R}^n)$ such that $\mathrm{qf}(x^*; \gamma, q, Q) = 0$. By Lemma 2.6, for any open neighborhood $U$ of $x^*$, there is an $\hat{x} \in U$ such that $\mathrm{qf}(\hat{x}; \gamma, q, Q) > 0$. Hence, by Lemma 2.3, such an $\hat{x} \in U$ is not contained in $C_1 = \widehat{F}(C_0; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C_0))$. Thus we have shown that $n > \dim A(x^*; C_1)$.

As a corollary, we obtain a necessary condition for the finite convergence of Algorithm 2.2.

Corollary 2.8. Assume that the sequence $\{C_k \subseteq C_0 : k = 0, 1, \dots\}$ converges to c.hull$(F)$ in a finite number of iterations. Then every $x \in F$ which lies on the boundary of c.hull$(F)$ must lie on the boundary of $C_1$.

Note that the necessary condition can be easily checked in the case of 0-1 integer convex optimization problems, the setting of [5], since in that case we always have $C_0 \subseteq [0, 1]^n$. This clearly implies that any 0-1 vector lying in $C_0$ is an extreme point of $C_0$.

Theorem 2.9. Assume that:

(i) $x^* \in F$.
(ii) $n > \dim A(x^*; C_k) > \dim A(x^*; C_{k'})$ for some $k$, $k'$ such that $0 \le k < k'$.
(iii) If $\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F^+(A(x^*; C_k))) \cap \mathcal{Q}_+(A(x^*; C_k))$ is nonzero on $\mathbb{R}^n$ and $\mathrm{qf}(x^*; \gamma, q, Q) = 0$, then it is nonzero on $A(x^*; C_k)$.
(iv) $E(x^*; C_k)$ is an exposed face of $C_k$.

Then $\dim A(x^*; C_k) > \dim A(x^*; C_{k+1})$.

Proof: By assumption (ii), there exists an $\ell \ge k$ such that

$\dim A(x^*; C_k) = \dim A(x^*; C_\ell) > \dim A(x^*; C_{\ell+1}).$

By Lemma 2.5, there exists a nonzero

$\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F^+(A(x^*; C_\ell))) \cap \mathcal{Q}_+(A(x^*; C_\ell))$

such that $\mathrm{qf}(x^*; \gamma, q, Q) = 0$. By assumption (iii), the quadratic function $\mathrm{qf}(\cdot; \gamma, q, Q)$ is nonzero on $A(x^*; C_k) = A(x^*; C_\ell)$. Hence, by Lemma 2.6, for any open neighborhood $U$ of $x^*$ with respect to $A(x^*; C_k)$, there is an $\hat{x} \in U$ such that $\mathrm{qf}(\hat{x}; \gamma, q, Q) > 0$. Applying Lemma 2.4, we then see that such an $\hat{x} \in U$ is not contained in $C_{k+1} = \widehat{F}(C_k; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C_k))$. Thus the inequality $\dim A(x^*; C_k) > \dim A(x^*; C_{k+1})$ follows.

We can replace assumption (iii) by the stronger assumption below.

(iii)' If a $\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F)$ is nonzero on $\mathbb{R}^n$ and $\mathrm{qf}(x^*; \gamma, q, Q) = 0$, then it is nonzero on $A(x^*; \mathrm{c.hull}(F))$.

When $\mathcal{P}_F$ consists of a finite number of quadratic functions, the cone generated by $\mathcal{P}_F$ is closed; hence it coincides with its closure $\mathrm{c.cone}(\mathcal{P}_F)$. In this case, we can restate assumption (iii)' as

(iii)'' $\lambda_i = 0$ for every $i = 1, 2, \dots, m$ whenever $\lambda_i \ge 0$, $\mathrm{qf}(x^*; \gamma_i, q^i, Q^i) = 0$, $\mathrm{qf}(\cdot; \gamma_i, q^i, Q^i) \in \mathcal{P}_F$ for every $i = 1, 2, \dots, m$, and

$\sum_{i=1}^m \lambda_i\, \mathrm{qf}(x; \gamma_i, q^i, Q^i) = 0$ for every $x \in A(x^*; \mathrm{c.hull}(F))$.

We may regard this assumption as a Mangasarian-Fromovitz type constraint qualification. As a corollary, we obtain:

Corollary 2.10. Assume that the sequence $\{C_k \subseteq C_0 : k = 0, 1, \dots\}$ converges to c.hull$(F)$ in a finite number of iterations. Also assume (i), (ii) of Theorem 2.9, (iii)' above,

(v) $E(x^*; C_0) = \mathbb{R}^n$ or $E(x^*; C_0)$ is an exposed face of $C_0$, and
(vi) $E(x^*; C_k)$ is an exposed face of $C_k$ $(k = 1, 2, \dots)$.

Then there exists a $\bar{k} \in \{0, 1, \dots, n\}$ such that

$\dim A(x^*; C_0) > \dim A(x^*; C_1) > \cdots > \dim A(x^*; C_{\bar{k}}) = \dim A(x^*; \mathrm{c.hull}(F)).$  (4)

3 Cones of Rank-2 Quadratic Supporting Functions for a Compact Convex Relaxation $C \subseteq C_0$ of $F$.

Let $C \subseteq C_0$ be a compact convex relaxation of $F$. In this section, we investigate some fundamental properties of the closed convex cone $\mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$ generated by $\mathrm{R2SF}(\widetilde{D}, D_V; C)$. Let $\hat{x}$ be an exposed vertex of $C$, and let $H = \{x \in \mathbb{R}^n : c^T x - \alpha(c; C) = 0\}$ be a supporting hyperplane that intersects $C$ at the single point $\hat{x}$:

$\{\hat{x}\} = H \cap C.$  (5)

We assume that $c \ne 0$.

Lemma 3.1. Let $M$ be an arbitrary $n \times n$ symmetric matrix, and let $\epsilon > 0$ be arbitrary. Then there is a quadratic function $p(\cdot) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$ such that the Hessian matrix of $p(\cdot)$ coincides with $M$ and $0 \ge p(\hat{x}) \ge -\epsilon$.

This lemma is a variation of Lemma 5.3 of [3], which dealt with the case where $C$ is the unit ball in $\mathbb{R}^n$. See also Section 4.3 of [4] and Section 4.1 of [2] for similar results. The remainder of this section is devoted to a proof of the lemma. For every $x \in \mathbb{R}^n$, $\theta > 0$, nonzero $u \in \mathbb{R}^n$ and nonzero $v \in \mathbb{R}^n$, define

$f(x; \theta, u, v; C) = \dfrac{\mathrm{r2sf}(x; b(\theta, u), -v; C) + \mathrm{r2sf}(x; \bar{b}(\theta, u), v; C)}{2 \sin \theta}$
$\quad = -\dfrac{(b(\theta, u)^T x - \alpha(b(\theta, u); C))(-v^T x - \alpha(-v; C_0))}{2 \sin \theta} - \dfrac{(\bar{b}(\theta, u)^T x - \alpha(\bar{b}(\theta, u); C))(v^T x - \alpha(v; C_0))}{2 \sin \theta}.$  (6)

Here

$b(\theta, u) = c \cos \theta + u \sin \theta$ and $\bar{b}(\theta, u) = c \cos \theta - u \sin \theta.$

Since $c \ne 0$, we can take a positive number $\bar{\theta}$ such that $b(\theta, u) \ne 0$ and $\bar{b}(\theta, u) \ne 0$ for every $\theta \in (0, \bar{\theta}]$.

Lemma 3.2. Let $\theta \in (0, \bar{\theta}]$.

(i) $f(\cdot; \theta, u, v; C) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$.
(ii) The Hessian matrix of the quadratic function $f(\cdot; \theta, u, v; C) : \mathbb{R}^n \to \mathbb{R}$ coincides with the $n \times n$ matrix $\dfrac{u v^T + v u^T}{2}$.
(iii) $f(\hat{x}; \theta, u, v; C) \to 0$ as $(0, \bar{\theta}] \ni \theta \to 0$.

Proof: Assertion (i) follows directly from the definitions (6).

(ii) The Hessian matrix of the quadratic function $f(\cdot; \theta, u, v; C) : \mathbb{R}^n \to \mathbb{R}$ turns out to be the symmetric part of the matrix

$-\dfrac{-b(\theta, u) v^T + \bar{b}(\theta, u) v^T}{2 \sin \theta} = -\dfrac{-(c \cos \theta + u \sin \theta) v^T + (c \cos \theta - u \sin \theta) v^T}{2 \sin \theta} = u v^T.$

Thus we have shown (ii).

(iii) We will show that

$\dfrac{b(\theta, u)^T \hat{x} - \alpha(b(\theta, u); C)}{2 \sin \theta} \to 0$ as $(0, \bar{\theta}] \ni \theta \to 0$,
$\dfrac{\bar{b}(\theta, u)^T \hat{x} - \alpha(\bar{b}(\theta, u); C)}{2 \sin \theta} \to 0$ as $(0, \bar{\theta}] \ni \theta \to 0$.

Then $f(\hat{x}; \theta, u, v; C) \to 0$ as $(0, \bar{\theta}] \ni \theta \to 0$ follows. We will only prove the first relation above, since we can prove the second relation similarly. By definition, we know that

$0 \ge b(\theta, u)^T \hat{x} - \alpha(b(\theta, u); C)$

for every $\theta \in (0, \bar{\theta}]$. Hence it suffices to prove that for every small positive number $\delta$ there is a $\theta' \in (0, \bar{\theta}]$ for which

$\dfrac{b(\theta, u)^T \hat{x} - \alpha(b(\theta, u); C)}{2 \sin \theta} > -\delta$ if $\theta \in (0, \theta']$

holds. Let $\delta > 0$. From assumption (5), we can take a $\theta' \in (0, \bar{\theta}]$ such that

$|u^T (x(\theta) - \hat{x})| < 2\delta$ whenever $\theta \in (0, \theta']$ and $x(\theta) \in \mathrm{argmax}\{b(\theta, u)^T x : x \in C\}$.

Let $\theta \in (0, \theta']$ and $x(\theta) \in \mathrm{argmax}\{b(\theta, u)^T x : x \in C\}$. Then

$b(\theta, u)^T x(\theta) \ge b(\theta, u)^T \hat{x}$ and $b(0, u)^T \hat{x} \ge b(0, u)^T x(\theta)$,

which we can rewrite as

$(c \cos \theta + u \sin \theta)^T x(\theta) \ge (c \cos \theta + u \sin \theta)^T \hat{x}$ and $c^T \hat{x} \ge c^T x(\theta)$.

Hence

$0 \ge \dfrac{\cos \theta}{\sin \theta}\, c^T (x(\theta) - \hat{x}) \ge -u^T (x(\theta) - \hat{x}) > -2\delta.$

Therefore we can derive that

$\dfrac{b(\theta, u)^T \hat{x} - \alpha(b(\theta, u); C)}{2 \sin \theta} = \dfrac{b(\theta, u)^T \hat{x} - b(\theta, u)^T x(\theta)}{2 \sin \theta} = \dfrac{(c \cos \theta + u \sin \theta)^T \hat{x} - (c \cos \theta + u \sin \theta)^T x(\theta)}{2 \sin \theta}$
$\quad = -\dfrac{\cos \theta}{2 \sin \theta}\, c^T (x(\theta) - \hat{x}) - \dfrac{1}{2}\, u^T (x(\theta) - \hat{x}) \ge -\dfrac{1}{2}\, u^T (x(\theta) - \hat{x}) > -\delta.$
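Assertion (ii) of Lemma 3.2 can also be checked numerically. The sketch below (a hypothetical instance, not from the paper) takes $C = [-1,1]^2$, $C_0 = [-2,2]^2$ and $c = (0, 1)$, builds $f(\cdot; \theta, u, v; C)$ from its definition (6), extracts the coefficient matrix $Q$ of its quadratic part (exactly, since $f$ is a quadratic polynomial), and compares it with $(u v^T + v u^T)/2$:

```python
import numpy as np

theta = 0.3
c = np.array([0.0, 1.0])
u = np.array([0.7, -0.2])
v = np.array([0.4, 0.9])

def alpha(d, r):
    # support function of the box [-r, r]^2
    return r * np.abs(d).sum()

def r2sf(x, d, dprime):
    return -(d @ x - alpha(d, 1.0)) * (dprime @ x - alpha(dprime, 2.0))

def f(x):
    b = c * np.cos(theta) + u * np.sin(theta)
    bbar = c * np.cos(theta) - u * np.sin(theta)
    return (r2sf(x, b, -v) + r2sf(x, bbar, v)) / (2.0 * np.sin(theta))

# Extract the coefficient matrix Q of the quadratic part of f, where
# f(x) = const + 2 q^T x + x^T Q x; this finite-difference formula is
# exact for quadratic polynomials:
e = np.eye(2)
Q = np.array([[(f(e[i] + e[j]) - f(e[i]) - f(e[j]) + f(np.zeros(2))) / 2.0
               for j in range(2)] for i in range(2)])

# Lemma 3.2(ii): Q = (u v^T + v u^T) / 2.
assert np.allclose(Q, (np.outer(u, v) + np.outer(v, u)) / 2.0)
print("coefficient matrix of the quadratic part matches (u v^T + v u^T)/2")
```

Here "Hessian matrix" follows the paper's convention $\mathrm{qf}(x; \gamma, q, Q) = \gamma + 2 q^T x + x^T Q x$, i.e., the coefficient matrix $Q$ of the quadratic term.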

Now we are ready to prove Lemma 3.1. Let $V$ denote the $n \times n$ matrix $(v^1, v^2, \dots, v^n)$, and let $U = M V^{-T}$. Then

$M = U V^T = \dfrac{U V^T + (U V^T)^T}{2} = \dfrac{\left(\sum_{j=1}^n u^j (v^j)^T\right) + \left(\sum_{j=1}^n u^j (v^j)^T\right)^T}{2}.$

Here $u^j$ denotes the $j$th column of $U$. For each pair of $u = u^j$ and $v = v^j$ $(j = 1, 2, \dots, n)$, we construct the quadratic function $f(\cdot; \theta, u^j, v^j; C)$ with a parameter $\theta > 0$ by (6). By Lemma 3.2, there is a positive number $\tilde{\theta}$ such that

$0 \ge f(\hat{x}; \tilde{\theta}, u^j, v^j; C) \ge -\epsilon / n \quad (j = 1, 2, \dots, n).$

Define the quadratic function $p(\cdot)$ by

$p(x) = \sum_{j=1}^n f(x; \tilde{\theta}, u^j, v^j; C)$ for every $x \in \mathbb{R}^n$.

Then $p(\cdot) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$, the Hessian matrix of $p(\cdot)$ coincides with $M$, and the inequality $0 \ge p(\hat{x}) \ge -\epsilon$ holds. This completes the proof of Lemma 3.1.
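The linear-algebra step of this proof — $U = M V^{-T}$, so that $M$ is recovered as the symmetrized sum of the rank-one terms $u^j (v^j)^T$ — can be checked directly. A small numerical sketch (random data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: a symmetric M and a basis v^1, ..., v^n of R^n.
n = 4
M = rng.standard_normal((n, n)); M = (M + M.T) / 2.0
V = rng.standard_normal((n, n))            # columns v^j form a basis (a.s.)
U = M @ np.linalg.inv(V.T)                 # U = M V^{-T}, so U V^T = M

# M equals the sum over j of the symmetrized rank-one terms
# (u^j (v^j)^T + v^j (u^j)^T) / 2, i.e. the sum of the Hessian matrices
# contributed by the functions f(.; theta, u^j, v^j; C) in the proof.
S = sum((np.outer(U[:, j], V[:, j]) + np.outer(V[:, j], U[:, j])) / 2.0
        for j in range(n))
assert np.allclose(S, M)
print("sum of symmetrized rank-one terms reproduces M")
```

Since $M$ is symmetric, $U V^T = M$ already implies that the symmetrization changes nothing; the decomposition merely exhibits $M$ as a sum of $n$ rank-two Hessians.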

4 Proof of Lemma 2.4.

Since $E$ is an exposed face of $C$ and $A$ is the affine subspace of $\mathbb{R}^n$ spanned by $E$, there is a supporting hyperplane $H = \{x \in \mathbb{R}^n : c^T x - \alpha(c; C) = 0\}$ such that

$C \cap H = E$, $A \subseteq H$, $\mathrm{span}\{c\} = (H - x)^\perp \subseteq (A - x)^\perp$ for every $x \in E$.

For simplicity of notation, we assume that $\hat{x} = 0$,

$A = \{x = (x_1; x_2) \in \mathbb{R}^n : x_1 \in \mathbb{R}^\ell,\ x_2 = 0 \in \mathbb{R}^{n-\ell}\}$,
$A^\perp = \{x = (x_1; x_2) \in \mathbb{R}^n : x_1 = 0 \in \mathbb{R}^\ell,\ x_2 \in \mathbb{R}^{n-\ell}\}$,
$c = (c_1; c_2) \in \mathbb{R}^n$, $c_1 = 0 \in \mathbb{R}^\ell$, $c_2 \in \mathbb{R}^{n-\ell}$, $\|c_2\| = 1$,
$q^0 = (q^0_1; q^0_2) \in \mathbb{R}^n$, $q^0_1 \in \mathbb{R}^\ell$, $q^0_2 \in \mathbb{R}^{n-\ell}$,
$Q^0 = \begin{pmatrix} Q^0_{11} & Q^0_{12} \\ (Q^0_{12})^T & Q^0_{22} \end{pmatrix}.$

Here $Q^0_{11} \in \mathbb{R}^{\ell \times \ell}$. Let $\epsilon^* = \mathrm{qf}(0; \gamma^0, q^0, Q^0)$, which is positive by the assumption of Lemma 2.4. By Lemma 3.1, there exists a quadratic function $p_1(\cdot) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$ whose Hessian matrix coincides with the $n \times n$ identity matrix. If $\lambda > 0$ is sufficiently small then

$\mathrm{qf}(0; \gamma^0, q^0, Q^0) + \lambda p_1(0) > 2\epsilon^*/3$,
$\mathrm{qf}(\cdot; \gamma^0, q^0, Q^0) + \lambda p_1(\cdot)$ is strictly convex on $A$, i.e., $Q^0_{11} + \lambda I$ is positive definite.


Let $\mu$ be the minimum eigenvalue of the matrix $Q^0_{11} + \lambda I$:

$\mu = \min\{x_1^T (Q^0_{11} + \lambda I) x_1 : x_1 \in \mathbb{R}^\ell,\ \|x_1\| = 1\} > 0.$

Let $\sigma$ and $\tau$ be nonnegative numbers such that

$x_1^T Q^0_{12} x_2 \ge -\sigma \|x_1\| \|x_2\|$ for every $x = (x_1; x_2) \in \mathbb{R}^n$,
$x_2^T Q^0_{22} x_2 \ge -\tau \|x_2\|^2$ for every $x_2 \in \mathbb{R}^{n-\ell}$.

Choose a nonnegative number $\nu$ such that

$\lambda + \nu - \tau - \sigma^2/\mu \ge 0.$

For every $x = (x_1; x_2) \in \mathbb{R}^n$, we then see that

$x^T \left( Q^0 + \lambda I + \nu \begin{pmatrix} O & O \\ O & I \end{pmatrix} \right) x = x_1^T (Q^0_{11} + \lambda I) x_1 + 2 x_1^T Q^0_{12} x_2 + x_2^T \left( Q^0_{22} + (\lambda + \nu) I \right) x_2$
$\ge \mu \|x_1\|^2 - 2\sigma \|x_1\| \|x_2\| + (\lambda + \nu - \tau) \|x_2\|^2 = \mu \left( \|x_1\| - (\sigma/\mu) \|x_2\| \right)^2 + \left( \lambda + \nu - \tau - \sigma^2/\mu \right) \|x_2\|^2 \ge 0.$

Hence the $n \times n$ symmetric matrix

$Q = Q^0 + \lambda I + \nu \begin{pmatrix} O & O \\ O & I \end{pmatrix}$

is positive semidefinite.

is positive semide nite. Now we observe that

f0g = E \ A? = (C \ H ) \ A? = C \ A? \ H \ A? ; dim H \ A? = dim H ? dim A = n ? 1 ? dim A = dim A? ? 1:

Thus 0 is an exposed vertex of the compact subset (C \ A? ) in the linear subspace A?;

0 2 (C \ A?); c 2 A? and cT x < 0 for every nonzero x 2 C \ A?:

Let P be the orthogonal projection matrix from Rn onto A?. Since v1; v2; : : : ; vn form a basis of Rn, P v1; P v2; : : : ; P vn contain a basis of A?, say P v1 ; P v2 ; : : : ; P vn?`. Choose u1 ; u2; : : : ; un?` in A? such that

uT P vi = uT vi = (8i = 1; 2; : : : ; n ? `) and uT P vj = uT vj = 0 (i 6= j ): i i i i


Now, for each pair of $u = u^j$ and $v = v^j$ $(j = 1, 2, \dots, n-\ell)$, construct the quadratic function $f_j(\cdot; \theta, u^j, v^j; C \cap A^\perp)$ with a parameter $\theta > 0$ as in Section 3:

$f_j(x; \theta, u^j, v^j; C \cap A^\perp) = \dfrac{\mathrm{r2sf}(x; b(\theta, u^j), -v^j; C \cap A^\perp) + \mathrm{r2sf}(x; \bar{b}(\theta, u^j), v^j; C \cap A^\perp)}{2 \sin \theta}$
$= -\dfrac{(b(\theta, u^j)^T x - \alpha(b(\theta, u^j); C \cap A^\perp))(-(v^j)^T x - \alpha(-v^j; C_0))}{2 \sin \theta} - \dfrac{(\bar{b}(\theta, u^j)^T x - \alpha(\bar{b}(\theta, u^j); C \cap A^\perp))((v^j)^T x - \alpha(v^j; C_0))}{2 \sin \theta}.$

Here

$b(\theta, u^j) = c \cos \theta + u^j \sin \theta \in A^\perp$ and $\bar{b}(\theta, u^j) = c \cos \theta - u^j \sin \theta \in A^\perp.$

Define

$f(x; \theta; C \cap A^\perp) = \sum_{j=1}^{n-\ell} f_j(x; \theta, u^j, v^j; C \cap A^\perp)$ for every $x \in \mathbb{R}^n$.

By Lemma 3.2, $f(\cdot; \theta; C \cap A^\perp) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C \cap A^\perp))$, the Hessian matrix of $f(\cdot; \theta; C \cap A^\perp)$ coincides with $\begin{pmatrix} O & O \\ O & I \end{pmatrix}$, and there exists a positive number $\hat{\theta}$ such that

$0 \ge \nu f(0; \hat{\theta}; C \cap A^\perp) \ge -\epsilon^*/3.$

On the other hand, since $b(\theta, u^j) \in A^\perp$ and $\bar{b}(\theta, u^j) \in A^\perp$, we see that

$\alpha(b(\theta, u^j); C \cap A^\perp) = \alpha(b(\theta, u^j); C)$, $\alpha(\bar{b}(\theta, u^j); C \cap A^\perp) = \alpha(\bar{b}(\theta, u^j); C)$.

Hence

$f_j(\cdot; \theta; C \cap A^\perp) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$ $(\forall j = 1, 2, \dots, n-\ell)$,
$f(\cdot; \theta; C \cap A^\perp) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$.

Therefore, we obtain that

$\mathrm{qf}(\cdot; \gamma^0, q^0, Q^0) + \lambda p_1(\cdot) + \nu f(\cdot; \hat{\theta}; C \cap A^\perp) \in \mathrm{c.cone}(\mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C)) \cap \mathcal{Q}_+(\mathbb{R}^n)$,
$\mathrm{qf}(0; \gamma^0, q^0, Q^0) + \lambda p_1(0) + \nu f(0; \hat{\theta}; C \cap A^\perp) \ge \epsilon^*/3 > 0.$

By Lemma 2.3, we conclude that

$0 \notin \widehat{F}(C; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C)).$

This completes the proof of Lemma 2.4.


5 Proof of Lemma 2.5.

For simplicity of notation, let

$A = A(x^*; C)$ and $E = E(x^*; C) = C \cap A.$

Let

$\mathcal{P}_F(A) = \{\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathcal{P}_F : \mathrm{qf}(x; \gamma, q, Q) \le 0$ for every $x \in A\}.$

Then $\mathcal{P}_F = \mathcal{P}_F^+(A) \cup \mathcal{P}_F(A)$ and $\mathcal{P}_F^+(A) \cap \mathcal{P}_F(A) = \emptyset$. ($\mathcal{P}_F^+(A)$ was defined in (3).) For simplicity of discussion, we assume, without loss of generality, that $x^* = 0$. Then the affine subspace $A$ (or $\mathbb{R}^n$) turns out to be a linear subspace of $\mathbb{R}^n$. By assumption, there exists a sequence $\{x^j\}$ in the relative interior of $E$ with respect to $A$ such that

$x^j \notin \widehat{F}(C; \mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C))$ for every $j = 1, 2, \dots$, and $x^j \to x^* = 0$ as $j \to \infty$.

Hence, by Lemma 2.3, there exists

$\mathrm{qf}(\cdot; \hat{\gamma}_j, \hat{q}^j, \hat{Q}^j) \in \mathrm{c.cone}(\mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C)) \cap \mathcal{Q}_+(\mathbb{R}^n)$

such that

$\mathrm{qf}(x^j; \hat{\gamma}_j, \hat{q}^j, \hat{Q}^j) > 0 \quad (j = 1, 2, \dots).$

On the other hand, by Lemma 3.1, there exists a quadratic function $p(\cdot) \in \mathrm{c.cone}(\mathrm{R2SF}(\widetilde{D}, D_V; C))$ whose Hessian matrix coincides with the $n \times n$ identity matrix. Hence if we take a sufficiently small $\delta_j > 0$ for each $j = 1, 2, \dots$, then

$\mathrm{qf}(\cdot; \hat{\gamma}_j, \hat{q}^j, \hat{Q}^j) + \delta_j p(\cdot) \in \mathrm{c.cone}(\mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C)) \cap \mathcal{Q}_{++}(\mathbb{R}^n)$,
$\mathrm{qf}(x^j; \hat{\gamma}_j, \hat{q}^j, \hat{Q}^j) + \delta_j p(x^j) > 0$

$(j = 1, 2, \dots)$. Here $\mathcal{Q}_{++}(\mathbb{R}^n)$ denotes the set of strictly convex quadratic functions on $\mathbb{R}^n$. Therefore, we can take convex quadratic functions $\mathrm{qf}(\cdot; \tilde{\gamma}_j, \tilde{q}^j, \tilde{Q}^j)$ $(j = 1, 2, \dots)$ in the convex cone generated by $\mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C)$ which satisfy the same property above, i.e.,

$\mathrm{qf}(x^j; \tilde{\gamma}_j, \tilde{q}^j, \tilde{Q}^j) > 0 \quad (j = 1, 2, \dots).$

Now, since each $\mathrm{qf}(\cdot; \tilde{\gamma}_j, \tilde{q}^j, \tilde{Q}^j)$ is in the convex cone generated by $\mathcal{P}_F \cup \mathrm{R2SF}(\widetilde{D}, D_V; C)$, we can find

$\mathrm{qf}(\cdot; \gamma_j, q^j, Q^j) \in \mathrm{c.cone}(\mathcal{P}_F^+(A))$,
$\mathrm{qf}(\cdot; \beta_j, p^j, P^j) \in \mathrm{c.cone}(\mathcal{P}_F(A) \cup \mathrm{R2SF}(\widetilde{D}, D_V; C))$

such that

$\mathrm{qf}(\cdot; \tilde{\gamma}_j, \tilde{q}^j, \tilde{Q}^j) = \mathrm{qf}(\cdot; \gamma_j, q^j, Q^j) + \mathrm{qf}(\cdot; \beta_j, p^j, P^j)$

$(j = 1, 2, \dots)$. Then

$\mathrm{qf}(x^j; \gamma_j, q^j, Q^j) + \mathrm{qf}(x^j; \beta_j, p^j, P^j) > 0$, $\mathrm{qf}(x^j; \gamma_j, q^j, Q^j) > 0$, $\mathrm{qf}(x^j; \beta_j, p^j, P^j) \le 0$,
$\mathrm{qf}(\cdot; \gamma_j, q^j, Q^j) + \mathrm{qf}(\cdot; \beta_j, p^j, P^j) \in \mathcal{Q}_+(\mathbb{R}^n)$

$(j = 1, 2, \dots)$. We can adjust the magnitude of each $\mathrm{qf}(\cdot; \tilde{\gamma}_j, \tilde{q}^j, \tilde{Q}^j)$ so that

$\left\| \begin{pmatrix} \gamma_j & (q^j)^T \\ q^j & Q^j \end{pmatrix} \right\|_F = 1$

without destroying any of the relations above $(j = 1, 2, \dots)$. Here $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. Therefore, by choosing an appropriate subsequence if necessary, we may assume that, as $j \to \infty$,

$\mathrm{qf}(\cdot; \gamma_j, q^j, Q^j) \to \exists\, \mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F^+(A))$,
$0 < \mathrm{qf}(x^j; \gamma_j, q^j, Q^j) \to \mathrm{qf}(x^*; \gamma, q, Q) = 0.$

We also see that

$0 \ge \mathrm{qf}(x^j; \beta_j, p^j, P^j) > -\mathrm{qf}(x^j; \gamma_j, q^j, Q^j) \to 0$ as $j \to \infty$.  (7)

Let us fix $j$ arbitrarily. Since the quadratic function $\mathrm{qf}(\cdot; \gamma_j, q^j, Q^j) + \mathrm{qf}(\cdot; \beta_j, p^j, P^j)$ is convex on $\mathbb{R}^n$, it is convex on the linear subspace $A$ of $\mathbb{R}^n$. Hence $Q^j + P^j$ is positive semidefinite on $A$, or equivalently

$v^T Q^j v \ge -v^T P^j v$ for every $v \in A$.  (8)

We will show that $Q$ is positive semidefinite on $A$. Let $v \in A$ and $\|v\| = 1$. Now we denote each quadratic function $\mathrm{qf}(\cdot; \beta_j, p^j, P^j)$ as

$\mathrm{qf}(x; \beta_j, p^j, P^j) = -\eta_j + 2 (t^j)^T (x - x^j) + (x - x^j)^T P^j (x - x^j)$, $\forall x \in \mathbb{R}^n$.

It follows from (7) that $0 \le \eta_j \to 0$ as $j \to \infty$. Since $x^* = 0$ lies in the relative interior of $E = A \cap C$ with respect to $A$, there is an $\epsilon > 0$ such that $(x^j + \epsilon v)$ and $(x^j - \epsilon v) \in E$. Hence

$0 \ge \mathrm{qf}(x^j + \epsilon v; \beta_j, p^j, P^j) = -\eta_j + 2 \epsilon (t^j)^T v + \epsilon^2 v^T P^j v$,
$0 \ge \mathrm{qf}(x^j - \epsilon v; \beta_j, p^j, P^j) = -\eta_j - 2 \epsilon (t^j)^T v + \epsilon^2 v^T P^j v$,

which imply that $0 \ge -\eta_j + \epsilon^2 v^T P^j v$. By the inequality above and (8), $v^T Q^j v \ge -\eta_j / \epsilon^2$. Taking the limit as $j \to \infty$, we obtain that $v^T Q v \ge 0$. Consequently, we have shown that

$\mathrm{qf}(\cdot; \gamma, q, Q) \in \mathrm{c.cone}(\mathcal{P}_F^+(A)) \cap \mathcal{Q}_+(A)$, $\mathrm{qf}(x^*; \gamma, q, Q) = 0$, $\left\| \begin{pmatrix} \gamma & q^T \\ q & Q \end{pmatrix} \right\|_F = 1$;

in particular, $\mathrm{qf}(\cdot; \gamma, q, Q)$ is nonzero. This completes the proof of Lemma 2.5.

6 Concluding Remarks.

If the relation (4) in Corollary 2.10 occurs for every boundary point x ∈ c.hull(F) and some k ∈ {0, 1, 2, ..., n}, then the sequence {C_k ⊆ C_0 : k = 0, 1, ...} converges to c.hull(F) in at most n iterations, as in the case of the Lovász-Schrijver lift-and-project procedures [5] applied to 0-1 integer convex optimization problems. Thus Corollary 2.10 raises the interesting question of whether finite convergence implies n-step convergence. However, the results established in this paper are too restrictive to answer this question. We have only dealt with the points of c.hull(F) which lie in F; we need to investigate further how dim A(x; C_k) decreases for every x ∈ c.hull(F) which does not lie in F. Also, we used assumptions (iii)', (v) and (vi) in Corollary 2.10 to derive (4). Assumption (iii)' may be acceptable, because it would only restrict the class of quadratic optimization problems to which the n-step convergence applies, but assumptions (v) and (vi), which are imposed on the generated sequence {C_k ⊆ C_0 : k = 0, 1, ...}, need to be removed.

References

[1] T. Fujie and M. Kojima, "Semidefinite relaxation of nonconvex programs," Journal of Global Optimization 10 (1997) 367-380.

[2] M. Kojima and A. Takeda, "Complexity analysis of successive convex relaxation methods for nonconvex sets," Technical Report B-350, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo, Japan, April 1999. Revised July 1999.

[3] M. Kojima and L. Tunçel, "Cones of matrices and successive convex relaxations of nonconvex sets," Technical Report B-338, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo, Japan, March 1998. Revised June 1999. Also issued as CORR 98-6, Dept. of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. To appear in SIAM Journal on Optimization.

[4] M. Kojima and L. Tunçel, "Discretization and localization in successive convex relaxation methods for nonconvex quadratic optimization problems," Technical Report B-341, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo, Japan, July 1998. Revised July 1999. Also issued as CORR 98-34, Dept. of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.

[5] L. Lovász and A. Schrijver, "Cones of matrices and set-functions and 0-1 optimization," SIAM Journal on Optimization 1 (1991) 166-190.

[6] H. D. Sherali and A. Alameddine, "A new reformulation-linearization technique for bilinear programming problems," Journal of Global Optimization 2 (1992) 379-410.

[7] H. D. Sherali and C. H. Tuncbilek, "A reformulation-convexification approach for solving nonconvex quadratic programming problems," Journal of Global Optimization 7 (1995) 1-31.

[8] A. Takeda, Y. Dai, M. Fukuda and M. Kojima, "Towards implementations of successive convex relaxation methods for nonconvex quadratic optimization problems," Research Report B-347, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo 152-8552, Japan, March 1999. Revised August 1999. To appear in Approximation and Complexity in Numerical Optimization: Continuous and Discrete Problems (P. M. Pardalos, Editor), Kluwer Academic Press.
