sdpt3r: Semidefinite Quadratic Linear Programming in R

Abstract:

We present the package sdpt3r, an R implementation of the Matlab package SDPT3. The purpose of the software is to solve semidefinite quadratic linear programming (SQLP) problems, which encompass problems such as D-optimal experimental design, the nearest correlation matrix problem, and distance weighted discrimination, as well as problems in graph theory such as finding the maximum cut or the Lovasz number of a graph.
Current optimization packages in R include Rdsdp, Rcsdp, scs, cccp, and Rmosek. Of these, scs and Rmosek solve a similar suite of problems. In addition to these solvers, the R packages CVXR and ROI provide sophisticated modelling interfaces to these solvers. As a point of difference from the current solvers in R, sdpt3r allows for log-barrier terms in the objective function, which allows problems such as the D-optimal design of experiments to be solved with minimal modifications. The sdpt3r package also provides helper functions, which formulate the required input for several well-known problems, an additional perk not present in the other R packages.


Author: Adam Rahman (University of Waterloo)

Published: Dec. 8, 2018

Received: May 1, 2018

Citation: Rahman, 2018

Volume: 10, Issue: 2, Pages: 371-394


1 Introduction

Convex optimization is a well-traversed field with far-reaching applications. While perhaps unfamiliar to those in the statistical sciences, many problems important to statisticians can be formulated as convex optimization problems, perhaps the most well known of which is the least squares problem. More specifically, many problems in statistics can be formulated as a subset of these convex optimization problems, known as conic linear optimization problems.

One such example is the nearest correlation matrix problem (Higham, 2002), which was first considered when attempting to find correlations between stocks, where incomplete data on daily stock returns are not unusual. Pairwise correlations are computed only when data are available for both stocks under consideration, resulting in a matrix that contains pairwise correlations but is not necessarily positive semidefinite - an approximate correlation matrix. The goal is then to find the correlation matrix that is nearest to the approximate correlation matrix in some sense.

Other examples of problems that can be formulated in terms of a conic linear optimization problem include D-optimal experimental design, classification using distance weighted discrimination, minimum volume ellipsoids, and problems in educational testing.

Problems in related fields can also be solved, including finding the maximum cut (or maximum k-cut) of a graph, finding the upper bound of the Shannon capacity of a graph, also known as the Lovasz number, as well as problems in control theory, Toeplitz matrix approximation, and Chebyshev approximation.

For the purpose of solving these conic linear optimization problems, we introduce the R package sdpt3r, an implementation of the Matlab package SDPT3 of Tütüncü, Toh, and Todd (2003). Of the R packages available to perform conic optimization, sdpt3r is among the most general. Rdsdp & Rcsdp are capable of solving semidefinite conic optimization problems, while cccp solves linear and quadratic conic optimization problems. The sdpt3r package allows for all of linear, quadratic, and semidefinite conic optimization to be solved simultaneously (i.e., a problem with any combination of semidefinite, quadratic, or linear cones can be solved). Two comparable packages, scs and Rmosek, solve a similar suite of problems. Additionally, the R packages CVXR and ROI provide sophisticated modelling interfaces to these solvers.

As a point of difference, scs and Rmosek allow for the exponential and power cones to be included in the constraints, while sdpt3r handles log-barrier terms in the objective function directly. The inclusion of log-barrier terms allows for the D-optimal design of experiments and minimum volume ellipsoid problems to be solved with minimal modifications. In addition, sdpt3r provides helper functions which directly solve a number of well known problems (such as the “max cut” or nearest correlation matrix problems) with minimal input burden on the user. This additional functionality is not found in either scs or Rmosek (although scs can be used with the CVXR package).

This paper is structured as follows. In Section 2 we discuss in greater detail the mathematical formulation of the linear conic optimization problem, and introduce three examples to explore the increasing generality of the problem to be solved. Section 3 discusses the R implementation of sdpt3r, and the main function by which conic linear optimization problems are solved, sqlp, including the required input, and the output generated. The same examples used in Section 2 will be used to demonstrate how a standard conic linear optimization problem can be converted to a form solvable by sqlp. Section 4 presents the classic form of several other well known problems that can be solved using sdpt3r, as well as the helper functions available to convert them to the appropriate form. Finally Section 5 provides some closing remarks.

2 Conic linear optimization

At its simplest, a conic linear optimization problem has the following standard form:

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \langle C,\, X\rangle \\
\text{subject to} & \langle A_k,\, X\rangle = b_k, \quad k = 1,\ldots,m \\
& X \in \mathcal{K}
\end{array} \tag{1}$$

where $\mathcal{K}$ is a cone. Generally, $\mathcal{K}$ is either a semidefinite cone $\mathcal{S}^n = \{X \in \mathbb{R}^{n\times n} : X = X^T,\ X \succeq 0\}$, a quadratic (second-order) cone $\mathcal{Q}^n = \{x = [x_0;\, \tilde{x}] \in \mathbb{R}^n : x_0 \geq \|\tilde{x}\|\}$, or a linear cone $\mathcal{L}^n = \mathbb{R}^n_+$.

Here, $\tilde{x} = [x_1,\ldots,x_{n-1}]$, and $\langle \cdot,\, \cdot\rangle$ represents the standard inner product in the appropriate space. In the semidefinite cone the inner product is $\langle X,\, Y\rangle = \text{vec}(X)^T\text{vec}(Y)$, where the operator vec is the by-column vector version of the matrix $X$; that is, for the $n\times n$ matrix $X = [x_{ij}]$, $\text{vec}(X)$ is the $n^2\times 1$ vector $[x_{11}, x_{21}, \ldots, x_{n1}, x_{12}, \ldots, x_{(n-1)n}, x_{nn}]^T$. Note that vec does not require a square matrix in general.
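
As a quick illustration (a sketch for intuition only), vec corresponds exactly to R's column-major storage of matrices:

 X <- matrix(1:4, nrow = 2)  # the matrix [1 3; 2 4]
 as.vector(X)                # 1 2 3 4: the columns stacked, i.e., vec(X)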

One of the simplest problems that can be formulated in terms of a conic linear optimization problem is finding the maximum cut of a graph. Let $G = [V, E]$ be a graph with vertices $V$ and edges $E$. A cut of the graph $G$ is a partition of the vertices of $G$ into two disjoint subsets $G_1 = [V_1, E_1]$, $G_2 = [V_2, E_2]$, with $V_1 \cap V_2 = \emptyset$. The size of the cut is defined to be the number of edges connecting the two subsets. The maximum cut is defined to be the cut of a graph $G$ whose size is at least as large as any other cut. For a weighted graph, we can also define the maximum cut to be the cut with weight at least as large as that of any other cut.
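
For intuition, the size of a given cut can be computed directly from the adjacency matrix. The following sketch (our own helper, not part of sdpt3r) counts the edges crossing between the two subsets, encoded by a vector s of +1/-1 labels:

 # Edges connecting the subset labelled +1 to the subset labelled -1
 cut_size <- function(B, s) sum(B[s == 1, s == -1])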

Finding the maximum cut is referred to as the Max-Cut Problem. It was one of the first problems shown to be NP-complete, and appears among Karp's 21 NP-complete problems (Karp, 1972). The Max-Cut problem is also known to be APX-hard (Papadimitriou and Yannakakis, 1991), meaning that in addition to there being no known polynomial-time solution, there is no polynomial-time approximation scheme unless P = NP.

Using the semidefinite programming relaxation of Goemans and Williamson (1995), the Max-Cut problem can be approximated to within a constant factor. For a weighted adjacency matrix $B$, the objective function can be stated as

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \langle C,\, X\rangle \\
\text{subject to} & \text{diag}(X) = \mathbf{1} \\
& X \in \mathcal{S}^n
\end{array}$$

where $\mathcal{S}^n$ is the cone of symmetric positive semidefinite matrices of size $n$, and $C = -(\text{diag}(B\mathbf{1}) - B)/4$. Here, we define $\text{diag}(a)$ for an $n\times 1$ vector $a$ to be the diagonal matrix $A = [A_{ij}]$ of size $n\times n$ with $A_{ii} = a_i$, $i = 1,\ldots,n$. For a matrix $X$, $\text{diag}(X)$ extracts the diagonal elements from $X$ and places them in a column vector.
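
R's diag() function plays both of these roles, as a quick check shows:

 diag(c(1, 2, 3))         # a vector gives the 3 x 3 diagonal matrix
 diag(matrix(1:9, 3, 3))  # a matrix gives its diagonal: 1 5 9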

To see that the Max-Cut problem is a conic linear optimization problem it needs to be written in the same form as Equation (1). The objective function is already in a form identical to that of Equation (1), with minimization occurring over $X$ of its inner product with a constant matrix $C = -(\text{diag}(B\mathbf{1}) - B)/4$. There are $n$ equality constraints of the form $x_{kk} = 1$, $k = 1,\ldots,n$, where $x_{kk}$ is the $k$th diagonal element of $X$, and $b_k = 1$ in Equation (1). To represent this in the form $\langle A_k,\, X\rangle = x_{kk}$, take $A_k$ to be

$$A_k = [a_{ij}] = \begin{cases} 1, & i = j = k \\ 0, & \text{otherwise} \end{cases}$$

Now $\langle A_k,\, X\rangle = \text{vec}(A_k)^T\text{vec}(X) = x_{kk}$ as required, and the Max-Cut problem is specified as a conic linear optimization problem.

Allowing for optimization to occur over only one variable at a time is quite restrictive, as only a small number of problems can be formulated in this form. Allowing optimization to occur over multiple variables simultaneously would allow for a broader range of problems to be solved.

A separable set of variables

The conic linear optimization problem actually covers a much wider class of problems than those expressible as in Equation (1). Variables can be separated into those constrained to a semidefinite cone, $\mathcal{S}$, a quadratic cone, $\mathcal{Q}$, or a linear cone, $\mathcal{L}$. The objective function is the sum of the corresponding inner products for each set of variables, and the linear constraint is a sum of linear functions of each set. This more general version of the conic linear optimization problem is

$$\begin{array}{ll}
\underset{X^s,\, X^q,\, X^l}{\text{minimize}} & \displaystyle\sum_{j=1}^{n_s}\langle C_j^s,\, X_j^s\rangle + \sum_{i=1}^{n_q}\langle C_i^q,\, X_i^q\rangle + \langle C^l,\, X^l\rangle \\
\text{subject to} & \displaystyle\sum_{j=1}^{n_s}(A_j^s)^T\text{svec}(X_j^s) + \sum_{i=1}^{n_q}(A_i^q)^T X_i^q + (A^l)^T X^l = b \\
& X_j^s \in \mathcal{S}^{s_j}\ \forall j, \quad X_i^q \in \mathcal{Q}^{q_i}\ \forall i, \quad X^l \in \mathcal{L}^{n_l}
\end{array} \tag{2}$$

Here, svec takes the upper triangular elements of a matrix (including the diagonal) in a column-wise fashion and vectorizes them. In general, for an $n\times n$ matrix $X = [x_{ij}]$, $\text{svec}(X)$ has the form $[x_{11}, x_{12}, x_{22}, x_{13},\ldots,x_{(n-1)n}, x_{nn}]^T$. Recall that matrices in $\mathcal{S}$ are symmetric, so it is sufficient to constrain only the upper triangular elements of the matrix $X^s$. For this formulation, $A_j^s$, $A_i^q$, and $A^l$ are constraint matrices of the appropriate size.
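
A plain version of this operation is a one-liner in R (an illustrative sketch; the svec function exported by sdpt3r, used in the examples below, additionally takes the block description blk as its first argument):

 # Upper-triangular elements (including the diagonal), taken column-wise
 svec_plain <- function(X) X[upper.tri(X, diag = TRUE)]
 svec_plain(matrix(1:9, 3, 3))  # 1 4 5 7 8 9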

Some important problems in statistics can be formulated to fit this form of the optimization problem.

The nearest correlation matrix

First addressed by Higham (2002) in dealing with correlations between stock prices, difficulty arises when data are not available for all stocks on each day, which is unfortunately a common occurrence. To help address this situation, correlations are calculated for pairs of stocks only when data are available for both stocks on any given day. The resulting correlation matrix is only approximate in that it is not necessarily positive semidefinite.

This problem was cast by Higham (2002) as

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \|R - X\|_F \\
\text{subject to} & \text{diag}(X) = \mathbf{1} \\
& X \in \mathcal{S}^n
\end{array}$$

where $R$ is the approximate correlation matrix and $\|\cdot\|_F$ denotes the Frobenius norm. Unfortunately, the Frobenius norm in the objective function prevents the problem from being formulated as a conic linear optimization problem.

Since the matrix X is constrained to have unit diagonal and be symmetric, and the matrix R is an approximate correlation matrix, meaning it will also have unit diagonal and be symmetric, we can re-write the objective function as

$$\|R - X\|_F = \sqrt{2}\,\|\text{svec}(R) - \text{svec}(X)\| = \sqrt{2}\,\|e\|$$

Now, introduce a variable $e_0$ such that $e_0 \geq \|e\|$, where $e = \text{svec}(R) - \text{svec}(X)$, and define $\bar{e} = [e_0;\, e]$. The vector $\bar{e}$ is now restricted to the quadratic cone $\mathcal{Q}^{n(n+1)/2+1}$. This leads to the formulation

$$\begin{array}{ll}
\underset{\bar{e},\, X}{\text{minimize}} & e_0 \\
\text{subject to} & \text{svec}(R) - \text{svec}(X) = [0,\, I_{n(n+1)/2}]\,\bar{e} \\
& \text{diag}(X) = \mathbf{1} \\
& X \in \mathcal{S}^n, \quad \bar{e} \in \mathcal{Q}^{n(n+1)/2+1}
\end{array}$$

Here, $[X,\, Y]$ denotes column binding of the two matrices $X_{n\times p}$ and $Y_{n\times m}$ to form a matrix of size $n\times(p+m)$. By minimizing $e_0$, we indirectly minimize $e = \text{svec}(R) - \text{svec}(X)$, since, recall, we have $e_0 \geq \|e\|$, which is the goal of the original objective function.

To see this as a conic linear optimization problem, notice that $e_0$ can be written as $\langle C^q,\, X^q\rangle$ by letting $C^q = [1;\, \mathbf{0}_{n(n+1)/2}]$ and $X^q = \bar{e}$. Since the matrix $X$ (i.e., $X^s$) does not appear in the objective function, the matrix $C^s$ is an $n\times n$ matrix of zeros.

Re-writing the first constraint as

$$\text{svec}(X) + [0,\, I_{n(n+1)/2}]\,\bar{e} = \text{svec}(R)$$

we can easily define the constraint matrices and right hand side of the first constraint as

$$A_1^s = I_{n(n+1)/2}, \qquad A_1^q = [0,\, I_{n(n+1)/2}], \qquad b_1 = \text{svec}(R)$$

The second constraint is identical to the constraint from the Max-Cut problem, where each diagonal element of $X$ is constrained to be equal to 1. Define $b_2 = \mathbf{1}$, and for the $k$th diagonal element of $X$, define the matrix $A_k$ as

$$A_k = [a_{ij}] = \begin{cases} 1, & i = j = k \\ 0, & \text{otherwise} \end{cases}$$

yielding $\langle A_k,\, X\rangle = x_{kk}$. To write this as $(A_2^s)^T\text{svec}(X^s)$, define

$$A_2^s = [\text{svec}(A_1),\ldots,\text{svec}(A_n)]$$

Since $\bar{e}$ does not appear in the second constraint, $A_2^q = \mathbf{0}_{n(n+1)/2+1}$.

The final step is to combine the individual constraint matrices from each constraint to form one constraint matrix for each variable, which is done by defining $A^s = [A_1^s,\, A_2^s]$ and $A^q = [A_1^q,\, A_2^q]$. We also concatenate both right hand side vectors to form a single vector by defining $b = [b_1;\, b_2]$. Here, the notation $[X;\, Y]$ is used to denote two matrices $X_{p\times m}$ and $Y_{q\times m}$ bound vertically to form a matrix of size $(p+q)\times m$. With this, the nearest correlation matrix problem is written as a conic linear optimization problem.
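
In R, the $[X,\, Y]$ and $[X;\, Y]$ operations correspond directly to cbind() and rbind():

 X <- matrix(1, 2, 2)
 Y <- matrix(2, 2, 2)
 cbind(X, Y)  # [X, Y]: a 2 x 4 matrix
 rbind(X, Y)  # [X; Y]: a 4 x 2 matrix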

Semidefinite quadratic linear programming

While Equation (2) allows for additional variables to be present, it can be made more general still to allow even more problems to be solved. We will refer to this general form as a semidefinite quadratic linear programming (SQLP) problem.

The first generality afforded by an SQLP is the addition of an unconstrained variable $X^u$ which, as the name suggests, is not bound to a cone but is instead "constrained" to the reals of the appropriate dimension. The second generalization is to allow what are known as log-barrier terms in the objective function. In general, a barrier function in an optimization problem is a term that approaches infinity as the point approaches the boundary of the feasible region. As we will see, these log-barrier terms appear as log terms in the objective function.

Recall that for any linear optimization problem there exist two formulations - the primal formulation and the dual formulation. For the purposes of a semidefinite quadratic linear programming problem, the primal problem will always be defined as a minimization, and the associated dual problem will therefore be a maximization.

The primal problem

The primal formulation of the SQLP problem is

$$\begin{array}{ll}
\underset{X_j^s,\, X_i^q,\, X^l,\, X^u}{\text{minimize}} & \displaystyle\sum_{j=1}^{n_s}\left[\langle C_j^s,\, X_j^s\rangle - v_j^s \log\det X_j^s\right] + \sum_{i=1}^{n_q}\left[\langle C_i^q,\, X_i^q\rangle - v_i^q \log\gamma(X_i^q)\right] \\
& \displaystyle\quad +\ \langle C^l,\, X^l\rangle - \sum_{k=1}^{n_l} v_k^l \log X_k^l\ +\ \langle C^u,\, X^u\rangle \\
\text{subject to} & \displaystyle\sum_{j=1}^{n_s}A_j^s(X_j^s) + \sum_{i=1}^{n_q}A_i^q X_i^q + A^l X^l + A^u X^u = b \\
& X_j^s \in \mathcal{S}^{s_j}\ \forall j, \quad X_i^q \in \mathcal{Q}^{q_i}\ \forall i, \quad X^l \in \mathcal{L}^{n_l}, \quad X^u \in \mathbb{R}^{n_u}
\end{array} \tag{3}$$

For each $j$, $C_j^s$ and $X_j^s$ are symmetric matrices of dimension $s_j$, restricted to the cone of positive semidefinite matrices of the same dimension. Similarly, for all $i$, $C_i^q$ and $X_i^q$ are real vectors of dimension $q_i$, restricted to the quadratic cone of dimension $q_i$. For a vector $u = [u_0;\, \tilde{u}]$ in a second-order cone, define $\gamma(u) = \sqrt{u_0^2 - \tilde{u}^T\tilde{u}}$. Finally, $C^l$ and $X^l$ are vectors of dimension $n_l$, restricted to the linear cone of the same dimension, and $C^u$ and $X^u$ are unrestricted real vectors of dimension $n_u$.
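
For concreteness, $\gamma(u)$ is a one-liner in R (an illustrative sketch; gamma_soc is our own name, not a function from sdpt3r):

 # gamma(u) for a vector u = [u0; u_tilde] in a second-order cone
 gamma_soc <- function(u) sqrt(u[1]^2 - sum(u[-1]^2))
 gamma_soc(c(5, 3, 4))  # sqrt(25 - 25) = 0: u lies on the cone boundary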

As before, the matrices $A_i^q$, $A^l$, and $A^u$ are constraint matrices of dimensions $q_i$, $n_l$, and $n_u$ respectively, each corresponding to its respective quadratic, linear, or unrestricted block. $A_j^s$ is a linear map from $\mathcal{S}^{s_j}$ to $\mathbb{R}^m$ given by

$$A_j^s(X_j^s) = \left[\langle A_{j,1}^s,\, X_j^s\rangle;\ \ldots;\ \langle A_{j,m}^s,\, X_j^s\rangle\right]$$

where $A_{j,1}^s,\ldots,A_{j,m}^s \in \mathcal{S}^{s_j}$ are constraint matrices associated with the $j$th semidefinite variable $X_j^s$.

The dual problem

The dual problem associated with the semidefinite quadratic linear programming formulation is

$$\begin{array}{ll}
\underset{Z_j^s,\, Z_i^q,\, Z^l,\, y}{\text{maximize}} & \displaystyle b^T y + \sum_{j=1}^{n_s}\left[v_j^s \log\det Z_j^s + s_j v_j^s (1 - \log v_j^s)\right] \\
& \displaystyle\quad +\ \sum_{i=1}^{n_q}\left[v_i^q \log\gamma(Z_i^q) + v_i^q (1 - \log v_i^q)\right] + \sum_{k=1}^{n_l}\left[v_k^l \log Z_k^l + v_k^l (1 - \log v_k^l)\right] \\
\text{subject to} & (A_j^s)^T y + Z_j^s = C_j^s, \quad Z_j^s \in \mathcal{S}^{s_j},\ j = 1,\ldots,n_s \\
& (A_i^q)^T y + Z_i^q = C_i^q, \quad Z_i^q \in \mathcal{Q}^{q_i},\ i = 1,\ldots,n_q \\
& (A^l)^T y + Z^l = C^l, \quad Z^l \in \mathcal{L}^{n_l} \\
& (A^u)^T y = C^u, \quad y \in \mathbb{R}^m
\end{array} \tag{4}$$

where $(A_j^s)^T$ is defined to be the adjoint operator of $A_j^s$, with $(A_j^s)^T y = \sum_{k=1}^{m} y_k A_{j,k}^s$. Equations (3) and (4) represent the most general form of the linear conic optimization problem that can be solved using sdpt3r.

Optimal design of experiments

Consider the problem of estimating a vector x from measurements y given by the relationship

$$y = Ax + \epsilon, \qquad \epsilon \sim N(0,\, 1)$$

The variance-covariance matrix of such an estimator is proportional to $(A^TA)^{-1}$. A reasonable goal during the design phase of an experiment would therefore be to minimize $(A^TA)^{-1}$ in some way.

There are many different ways in which $(A^TA)^{-1}$ might be made minimal. For example, minimization of the trace of $(A^TA)^{-1}$ (A-Optimality), minimization of the maximum eigenvalue of $(A^TA)^{-1}$ (E-Optimality), minimization of the determinant of $(A^TA)^{-1}$ (D-Optimality), and maximization of the trace of $A^TA$ (T-Optimality) all have their merits.

Perhaps the most commonly used of these optimality criteria is D-Optimality, which is equivalent to maximizing the determinant of $A^TA$. Typically, the rows of $A = [a_1,\ldots,a_q]^T$ are chosen from $M$ possible test vectors $u_i \in \mathbb{R}^p$, $i = 1,\ldots,M$, which are known in advance. That is,

$$a_i \in \{u_1,\ldots,u_M\}, \quad i = 1,\ldots,q$$

Given that the matrix $A$ is made up of these test vectors $u_i$, we can write the matrix $A^TA$ as

$$A^TA = q\sum_{i=1}^{M}\lambda_i u_i u_i^T \tag{5}$$

where λi is the fraction of rows in A that are equal to the vector ui. Then, write the D-optimal experimental design problem as a minimum determinant problem

$$\begin{array}{ll}
\underset{\lambda}{\text{minimize}} & \log\det\left(\displaystyle\sum_{i=1}^{M}\lambda_i u_i u_i^T\right)^{-1} \\
\text{subject to} & \lambda_i \geq 0, \quad i = 1,\ldots,M \\
& \displaystyle\sum_{i=1}^{M}\lambda_i = 1
\end{array}$$

Due to the inequality constraints, this primal formulation cannot be interpreted as an SQLP of the form of Equation (3). By defining $Z = U\,\text{diag}(\lambda)\,U^T$, where $U = [u_1,\ldots,u_p]$ is the matrix of test vectors, the dual problem is

$$\begin{array}{ll}
\underset{Z,\, z^l,\, \lambda}{\text{maximize}} & \log\det(Z) \\
\text{subject to} & -\displaystyle\sum_{i=1}^{p}\lambda_i (u_i u_i^T) + Z = 0, \quad Z \in \mathcal{S}^n \\
& -\lambda + z^l = 0, \quad z^l \in \mathbb{R}^p_+ \\
& \mathbf{1}^T\lambda = 1, \quad \lambda \in \mathbb{R}^p
\end{array}$$

Keeping in mind that this is a dual configuration, and thus follows Equation (4), we proceed with writing the D-Optimal design problem as an SQLP by first considering the objective function. The objective function depends only on the determinant of the matrix variable $Z$, which is the log-barrier term. This indicates that the parameter $v^s$ in Equation (4) is equal to 1 in this formulation, while $v^q$ and $v^l$ are both zero. Since $\lambda$ does not appear in the objective function, the vector $b$ is equal to $\mathbf{0}$.

The constraint matrices $A$ are easy to define in the case of the dual formulation, as they multiply the vector $y$ in Equation (4), and so here multiply $\lambda$. In the first constraint, each $\lambda_i$ is multiplied by the matrix $-u_iu_i^T$, so define $A_i$ to be

$$A_i = -u_iu_i^T, \quad i = 1,\ldots,p$$

Then, the constraint matrix is $A^s = [\text{svec}(A_1),\ldots,\text{svec}(A_p)]$. In the second constraint containing the linear variable $z^l$, the constraint matrix is $A^l = -I_p$, and in the third constraint containing only the unconstrained variable $\lambda$, the constraint matrix is $A^u = \mathbf{1}^T$. Since there is no quadratic variable, $A^q = 0$.

Finally, define the right hand sides of the constraints as
$$C^s = \mathbf{0}_{n\times n}, \qquad C^l = \mathbf{0}_{p\times 1}, \qquad C^u = 1$$

which fully specifies the D-Optimal design problem as an SQLP.

In the next section, we will demonstrate using R how these definitions can be translated for use in the main function of sdpt3r so an SQLP problem can be solved.

3 Solving a conic linear optimization problem with sdpt3r

Each of the problems presented in Section 2 can be solved using the sdpt3r package, an R implementation of the Matlab program SDPT3. The algorithm is an infeasible primal-dual predictor-corrector path-following method, utilizing either an HKM or NT search direction. The interested reader is directed to Tütüncü, Toh, and Todd (2003) for further details surrounding the implementation.

The main function available in sdpt3r is sqlp, which takes a number of inputs (or an sqlp_input object) specifying the problem to be solved, and executes the optimization, returning both the primal and dual solution to the problem. This function will be thoroughly discussed in Section 3.1, and examples will be provided. In addition to sqlp, a prospective user will also have access to a number of helper functions for well known problems that can be solved using sdpt3r. For example, the function maxcut takes as input an adjacency matrix B, and produces an S3 object containing all the input variables necessary to solve the problem using sqlp. These functions will be discussed in Sections 3.3, 3.4, 3.4.2, and 4.

For sdpt3r, each optimization variable will be referred to as a block in the space in which it is restricted. For instance, if we have an optimization variable XSn, we will refer to this as a semidefinite block of size n. It is important to note that it is possible to have multiple blocks from the same space, that is, it is possible to have both XSn as well as YSm in the same problem.

Input variables

The main function call in sdpt3r is sqlp, which takes the following input variables

blk A named-vector object describing the block structure of the optimization variables.
At A list object containing the constraint matrices $A^s$, $A^q$, $A^l$, and $A^u$ for the primal-dual problem.
b A vector containing the right hand side of the equality constraints, $b$, in the primal problem, or equivalently the constant vector in the dual.
C A list object containing the constant $C$ matrices in the primal objective function, or equivalently the corresponding right hand sides of the equality constraints in the dual problem.
X0, y0, Z0 Matrix objects containing an initial iterate for the variables $X$, $y$, and $Z$ of the SQLP problem. If not provided, an initial iterate is computed internally.
control A list object providing additional parameters for use in sqlp. If not provided, default values are used.

The input variable blk describes the block structure of the problem. Letting L be the total number of semidefinite, quadratic, linear, and unrestricted blocks in the SQLP problem, define blk to be a named-vector object of length L, with names describing the type of block, and values denoting the size of the optimization variable, summarized in Table 1; a small example follows the table.

Table 1: Structure of blk.
Block type Name Value
Semidefinite s sj
Quadratic q qi
Linear l nl
Unrestricted u nu
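
For instance, a problem with one semidefinite block of size 10 and one linear block of size 5 would be declared as follows (an illustrative sketch):

 blk <- c("s" = 10, "l" = 5)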

The input variable At corresponds to the constraint matrices in Equation (3), and C the constant matrices in the objective function. The size of these input variables depends on the block they are representing, summarized in Table 2 for each block type.

Table 2: Size of At and C for each block type.
     Semidefinite   Quadratic   Linear    Unrestricted
At   s̄j × m         qi × m      nl × m    nu × m
C    sj × sj        qi × 1      nl × 1    nu × 1

Note that in Table 2, $\bar{s}_j = s_j(s_j+1)/2$. The size of At in the semidefinite block reflects the upper-triangular input format discussed previously. In a semidefinite block, the optimization variable $X$ is necessarily symmetric and positive semidefinite; it is therefore more efficient to consider only the upper-triangular portion of the corresponding constraint matrix.

It is important to note that both input variables At and C are lists containing constraint and constant matrices for each optimization variable. In general, the user need not supply initial iterates X0, y0, and Z0 for a solution to be found using sqlp. The infeasible starting point generated internally by sqlp tends to be sufficient to find a solution. If the user wishes to provide a starting point however, the size parameters in Table 3 must be met for each block.

Table 3: Required size for initial iterates X0, y0, and Z0.
     Semidefinite   Quadratic   Linear    Unrestricted
X0   sj × sj        qi × 1      nl × 1    nu × 1
y0   sj × 1         qi × 1      nl × 1    nu × 1
Z0   sj × sj        qi × 1      nl × 1    nu × 1

The user may choose to depart from the default values of several parameters which could affect the optimization by specifying alternative values in the control list. A complete list of all parameters that can be altered can be found in Appendix 6.

An important example is the specification of the parbarrier parameter in control, which declares the presence of log-barrier terms in the objective function. The default case in control assumes that the parameters $v_j^s$, $v_i^q$, $v_k^l$ in Equation (3) are all 0. If this is not the case, then the user must specify an $L\times 1$ matrix object in control$parbarrier to store the values of these parameters (including zeros). If the $j$th block is a semidefinite block containing $p$ variables, parbarrier[[j]] $= [v_{j1}^s,\ldots,v_{jp}^s]$. If the $j$th block is a quadratic block containing $p$ variables, parbarrier[[j]] $= [v_{j1}^q,\ldots,v_{jp}^q]$. If the $j$th block is the linear block, parbarrier[[j]] $= [v_1^l,\ldots,v_{n_l}^l]$. Finally, if the $j$th block is the unrestricted block, then parbarrier[[j]] $= [0,\ldots,0]$, where 0 is repeated $n_u$ times.

When executed, sqlp simultaneously solves both the primal and dual problems, meaning solutions for both problems are returned. The relevance of each output therefore depends on the problem being solved. The following object of class sqlp_output is returned upon completion

pobj the value of the primal objective function
dobj the value of the dual objective function
X A list object containing the optimal matrix X for the primal problem
y A vector object containing the optimal vector y for the dual problem
Z A list object containing the optimal matrix Z for the dual problem

The examples in subsequent subsections will demonstrate the output provided by sqlp.

Toy Examples

Before moving on to more complex problems, consider first some very simple examples to illustrate the functionality of the sdpt3r package. First, consider the following simple linear programming problem:

$$\begin{array}{ll}
\text{minimize} & x_1 + x_2 \\
\text{subject to} & x_1 + 4x_2 = 12 \\
& 3x_1 - x_2 = 10
\end{array}$$

This problem can be solved using sdpt3r in a very straightforward fashion. First, this is a linear programming problem with two variables, $x_1$ and $x_2$, which implies that blk = c("l" = 2). Next, the objective function can be written as $1x_1 + 1x_2$, so C = matrix(c(1,1),nrow=1). The constraints can be summarized in matrix form as:

$$A = \begin{bmatrix} 1 & 4 \\ 3 & -1 \end{bmatrix}$$

so A = matrix(c(1,3,4,-1), nrow=2) and At = t(A). Finally, the right hand side can be written in vector form as $[12,\, 10]$, so b = c(12,10). Pulling these all together, the problem is solved using sqlp:

blk = c("l" = 2)
C = matrix(c(1,1),nrow=1)
A = matrix(c(1,3,4,-1), nrow=2)
At = t(A)
b = c(12,10)

out = sqlp(blk,list(At),list(C),b)
out

$X
$X[[1]]
2 x 1 Matrix of class "dgeMatrix"
     [,1]
[1,]    4
[2,]    2


$y
          [,1]
[1,] 0.3076923
[2,] 0.2307692

$Z
$Z[[1]]
2 x 1 Matrix of class "dgeMatrix"
             [,1]
[1,] 6.494441e-10
[2,] 1.234448e-09


$pobj
[1] 6

$dobj
[1] 6

which returns the solution $x_1 = 4$ and $x_2 = 2$, and the optimal primal objective value of 6. Second, consider the following simple quadratic (second-order) cone programming problem:

$$\begin{array}{ll}
\text{minimize} & \tfrac{1}{2}x_1 - x_2 \\
\text{subject to} & 2x_1 - x_2 = 5 \\
& x_1 + x_2 = 4 \\
& x \in \mathcal{Q}^2
\end{array}$$

This problem can be solved using sdpt3r by formulating the input variables in a similar fashion as the linear programming problem:

blk = c("q" = 2)
C = matrix(c(0.5,-1),nrow=1)
A = matrix(c(2,1,-1,1), nrow=2)
At = t(A)
b = c(5,4)

out = sqlp(blk,list(At),list(C),b)
out

$X
$X[[1]]
2 x 1 Matrix of class "dgeMatrix"
     [,1]
[1,]    3
[2,]    1


$y
     [,1]
[1,]  0.5
[2,] -0.5

$Z
$Z[[1]]
2 x 1 Matrix of class "dgeMatrix"
              [,1]
[1,]  2.186180e-09
[2,] -3.522956e-10


$pobj
[1] 0.5

$dobj
[1] 0.5

which returns the solution $x_1 = 3$ and $x_2 = 1$, with an optimal primal objective value of 0.5. Finally, consider the following simple semidefinite programming problem:

$$\begin{array}{ll}
\text{minimize} & \left\langle \begin{bmatrix} 1 & 2 & 3 \\ 2 & 9 & 0 \\ 3 & 0 & 7 \end{bmatrix},\ X \right\rangle \\
\text{subject to} & \left\langle \begin{bmatrix} 1 & 0 & 1 \\ 0 & 3 & 7 \\ 1 & 7 & 5 \end{bmatrix},\ X \right\rangle = 11 \\
& \left\langle \begin{bmatrix} 0 & 2 & 8 \\ 2 & 6 & 0 \\ 8 & 0 & 4 \end{bmatrix},\ X \right\rangle = 9 \\
& X \in \mathcal{S}^3
\end{array}$$

This problem is written almost exactly in the form used by sdpt3r, and so can be easily solved by taking:

blk = c("s" = 3)
C = list(matrix(c(1,2,3,2,9,0,3,0,7), nrow=3))
A1 = matrix(c(1,0,1,0,3,7,1,7,5), nrow=3)
A2 = matrix(c(0,2,8,2,6,0,8,0,4), nrow=3)
At = svec(blk,list(A1,A2))
b = c(11,9)

out = sqlp(blk,At,C,b)
out

$X
$X[[1]]
3 x 3 Matrix of class "dgeMatrix"
           [,1]      [,2]      [,3]
[1,] 0.08928297 0.1606827 0.2453417
[2,] 0.16068265 0.2891815 0.4415426
[3,] 0.24534167 0.4415426 0.6741785


$y
          [,1]
[1,] 0.5172462
[2,] 0.4262486

$Z
$Z[[1]]
3 x 3 Matrix of class "dsyMatrix"
           [,1]      [,2]       [,3]
[1,]  0.4827538  1.147503 -0.9272352
[2,]  1.1475028  4.890770 -3.6207235
[3,] -0.9272352 -3.620723  2.7087744


$pobj
[1] 9.525946

$dobj
[1] 9.525946

which provides the optimal matrix solution $X$ and the optimal value of the objective function, 9.53. Note that the function svec is used since the problem is a semidefinite programming problem, and thus each A matrix is necessarily symmetric.
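
As a quick sanity check (our own, not part of the package output), the reported objective value can be recovered directly from the inner product $\langle C,\, X\rangle$, which for matrices is the sum of the element-wise products:

 sum(C[[1]] * as.matrix(out$X[[1]]))  # approximately 9.525946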

The Max-Cut problem

Recall that the maximum cut of a graph G with adjacency matrix B can be found as the solution to

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \langle C,\, X\rangle \\
\text{subject to} & \text{diag}(X) = \mathbf{1} \\
& X \in \mathcal{S}^n
\end{array}$$

where $C = -(\text{diag}(B\mathbf{1}) - B)/4$. In Section 2, we wrote this in the form of an SQLP

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \langle C,\, X\rangle \\
\text{subject to} & \langle A_k,\, X\rangle = 1, \quad k = 1,\ldots,n \\
& X \in \mathcal{S}^n
\end{array}$$

where we defined Ak as

$$A_k = [a_{ij}] = \begin{cases} 1, & i = j = k \\ 0, & \text{otherwise} \end{cases}$$

To convert this to a form usable by sqlp, we begin by noting that we have one optimization variable, $X$, and therefore $L = 1$. For an adjacency matrix $B$ of dimension $n$ for which we would like to determine the maximum cut, $X$ is constrained to the space of semidefinite matrices of size $n$. Therefore, for a $10\times 10$ matrix $B$ (as in Figure 1), blk is specified as

B <- rbind(c(0, 0, 0, 1, 0, 0, 1, 1, 0, 0),
           c(0, 0, 0, 1, 0, 0, 1, 0, 1, 1),
           c(0, 0, 0, 0, 0, 0, 0, 1, 0, 0),
           c(1, 1, 0, 0, 0, 0, 0, 1, 0, 1),
           c(0, 0, 0, 0, 0, 0, 1, 1, 1, 1),
           c(0, 0, 0, 0, 0, 0, 0, 0, 1, 0),
           c(1, 1, 0, 0, 1, 0, 0, 1, 1, 1),
           c(1, 0, 1, 1, 1, 0, 1, 0, 0, 0),
           c(0, 1, 0, 0, 1, 1, 1, 0, 0, 1),
           c(0, 1, 0, 1, 1, 0, 1, 0, 1, 0))

n <- max(dim(B))

blk <- c("s" = n)

With the objective function in the form $\langle C,\, X\rangle$, we define the input C as

 one <- matrix(1, nrow = n, ncol = 1)
 C <- -(diag(c(B %*% one)) - B) / 4

where, again, B is the adjacency matrix for a graph on which we would like to find the maximum cut, such as the one in Figure 1.

[Graph figure: the graph whose adjacency matrix B is given in the code above.]

Figure 1: A graph object and associated adjacency matrix for which we would like to find the maximum cut.

The matrix At is constructed using the upper triangular portion of the Ak matrices. To do this in R, the function svec is made available in sdpt3r.

  A <- list()
  for(k in 1:n){
    A[[k]] <- Matrix(0,n,n)
    A[[k]][k,k] <- 1
  }

 At <- svec(blk[1],A,1)

Since each of the diagonal elements of X is constrained to be equal to 1, b is an n×1 matrix of ones

 b <- matrix(1, nrow = n, ncol = 1)

With all the input variables now defined, we can call sqlp to solve the Max-Cut problem

 sqlp(blk, At, list(C), b)

A numerical example and the maxcut function

The built-in function maxcut takes as input a (weighted) adjacency matrix B and returns the optimal solution directly. If we wish to find the maximum cut of the graph in Figure 1, given the adjacency matrix B, we can compute the solution using maxcut as

out <- maxcut(B)
out

$pobj

[1] -14.67622

$X

      [,1]   [,2]   [,3]   [,4]   [,5]   [,6]   [,7]   [,8]   [,9]  [,10]
V1   1.000  0.987 -0.136 -0.858  0.480  0.857 -0.879  0.136 -0.857  0.597
V2   0.987  1.000  0.026 -0.763  0.616  0.929 -0.791 -0.026 -0.929  0.459
V3  -0.136  0.026  1.000  0.626  0.804  0.394  0.592 -1.000 -0.394 -0.876
V4  -0.858 -0.763  0.626  1.000  0.039 -0.469  0.999 -0.626  0.470 -0.925
V5   0.480  0.616  0.804  0.039  1.000  0.864 -0.004 -0.804 -0.864 -0.417
V6   0.857  0.929  0.394 -0.469  0.864  1.000 -0.508 -0.394 -1.000  0.098
V7  -0.879 -0.791  0.592  0.999 -0.004 -0.508  1.000 -0.592  0.508 -0.907
V8   0.136 -0.026 -1.000 -0.626 -0.804 -0.394 -0.592  1.000  0.394  0.876
V9  -0.857 -0.929 -0.394  0.470 -0.864 -1.000  0.508  0.394  1.000 -0.098
V10  0.597  0.459 -0.876 -0.925 -0.417  0.098 -0.907  0.876 -0.098  1.000

Note that the value of the primal objective function is negative, as we have defined $C = -(\text{diag}(B\mathbf{1}) - B)/4$ since we require the primal formulation to be a minimization problem. The original formulation given in Goemans and Williamson (1995) frames the Max-Cut problem as a maximization problem with $C = (\text{diag}(B\mathbf{1}) - B)/4$. Therefore, the approximate value of the maximum cut for the graph in Figure 1 is 14.68 (recall we are solving a relaxation).

As an interesting aside, we can verify that the matrix X is actually a correlation matrix by considering its eigenvalues: it is clearly symmetric, with unit diagonal and all elements in [-1, 1], and the eigenvalues below are all non-negative.

eigen(out$X[[1]])

$values

 [1] 5.59e+00 4.41e+00 2.07e-07 1.08e-07 4.92e-08 3.62e-08 3.22e-08
 [8] 1.90e-08 1.66e-08 9.38e-09

The fact that X is indeed a correlation matrix comes as no surprise: the set of feasible solutions for the Max-Cut problem is known to be the set of correlation matrices. So while we may not be interested in X as an output for solving the Max-Cut problem, it is nonetheless interesting to see that it is in fact in the set of feasible solutions.

Nearest correlation matrix

Recall that the nearest correlation matrix is found as the solution to

$$\begin{array}{ll}
\underset{\bar{e},\, X}{\text{minimize}} & e_0 \\
\text{subject to} & \text{svec}(R) - \text{svec}(X) = [0,\, I_{n(n+1)/2}]\,\bar{e} \\
& \text{diag}(X) = \mathbf{1} \\
& X \in \mathcal{S}^n, \quad \bar{e} \in \mathcal{Q}^{n(n+1)/2+1}
\end{array}$$

In Section 2.1 we wrote this as the following SQLP

$$\begin{array}{ll}
\underset{\bar{e},\, X}{\text{minimize}} & \langle C,\, \bar{e}\rangle \\
\text{subject to} & (A^s)^T\text{svec}(X) + (A^q)^T\bar{e} = b \\
& X \in \mathcal{S}^n, \quad \bar{e} \in \mathcal{Q}^{n(n+1)/2+1}
\end{array}$$

for $C = [1;\, \mathbf{0}_{n(n+1)/2}]$, and

$$A^s = [A_1^s,\, A_2^s], \qquad A^q = [A_1^q,\, A_2^q], \qquad b = [b_1;\, b_2]$$


where

$$A_1^s = I_{n(n+1)/2}, \quad A_1^q = [0,\, I_{n(n+1)/2}], \quad A_2^s = [\text{svec}(A_1),\ldots,\text{svec}(A_n)], \quad A_2^q = \mathbf{0}_{n(n+1)/2+1}$$

$$b_1 = \text{svec}(R), \qquad b_2 = \mathbf{1}_n$$


and $A_1,\ldots,A_n$ are given by

$$A_k = [a_{ij}] = \begin{cases} 1, & i = j = k \\ 0, & \text{otherwise} \end{cases}$$

To solve this problem using sqlp, we first define blk. There are two optimization variables in the formulation of the nearest correlation matrix: $X$ is an $n\times n$ matrix constrained to the semidefinite cone, and $\bar{e}$ is a vector of length $n(n+1)/2+1$ constrained to the quadratic cone, so

 data(Hnearcorr)

 X = Hnearcorr
 n = max(dim(X))
 n2 = n * (n + 1) / 2

 blk <- c("s" = n, "q" = n2+1)

Note that X does not appear in the objective function, so the C entry corresponding to the block variable X is an n×n matrix of zeros, which defines C as

 C1 <- matrix(0, nrow = n, ncol = n)
 C2 <- rbind(1, matrix(0, nrow = n2, ncol = 1))
 C <- list(C1,C2)

Next comes the constraint matrix for X

 Aks <- matrix(list(), nrow = 1, ncol = n)
 for(k in 1:n){
   Aks[[k]] <- matrix(0, nrow = n, ncol = n)
   diag(Aks[[k]])[k] <- 1
 }

 A1s <- svec(blk[1], Aks)[[1]]
 A2s <- diag(1, nrow = n2, ncol = n2)

 At1 <- cbind(A1s,A2s)

then the constraint matrix for e.

 A1q <- matrix(0, nrow = n, ncol = n2 + 1)

 A2q1 <- matrix(0, nrow = n2, ncol = 1)
 A2q2 <- diag(1, nrow = n2, ncol = n2)
 A2q <- cbind(A2q1, A2q2)

 At2 <- rbind(A1q, A2q)

and the right hand side vector b

 b <- rbind(matrix(1, n, 1),svec(blk[1], X))

The nearest correlation matrix problem is now solved by

 sqlp(blk, list(At1,At2), C, b)

A numerical example and the nearcorr function

To demonstrate the nearest correlation matrix problem, we will modify an existing correlation matrix by exploring the effect of changing the sign of just one of the pairwise correlations. In the context of stock correlations, we make use of tools available in the R package quantmod to access stock data from five tech firms (Microsoft, Apple, Amazon, Alphabet/Google, and IBM) beginning in 2007.

 library("quantmod")

 getSymbols(c("MSFT", "AAPL", "AMZN", "GOOGL", "IBM"))
 stock.close <- as.xts(merge(MSFT, AAPL, AMZN,
    GOOGL, IBM))[, c(4, 10, 16, 22, 28)]

The correlation matrix for these five stocks is

 stock.corr <- cor(stock.close)
 stock.corr

            MSFT.Close AAPL.Close AMZN.Close GOOGL.Close IBM.Close
MSFT.Close   1.0000000 -0.2990463  0.9301085   0.5480033 0.2825698
AAPL.Close  -0.2990463  1.0000000 -0.1514348   0.3908624 0.6887127
AMZN.Close   0.9301085 -0.1514348  1.0000000   0.6228299 0.3870390
GOOGL.Close  0.5480033  0.3908624  0.6228299   1.0000000 0.5885146
IBM.Close    0.2825698  0.6887127  0.3870390   0.5885146 1.0000000

Next, consider the effect of having a positive correlation between Microsoft and Apple

 stock.corr[1, 2] <- -1 * stock.corr[1, 2]
 stock.corr[2, 1] <- stock.corr[1, 2]
 stock.corr

            MSFT.Close AAPL.Close AMZN.Close GOOGL.Close IBM.Close
MSFT.Close   1.0000000  0.2990463  0.9301085   0.5480033 0.2825698
AAPL.Close   0.2990463  1.0000000 -0.1514348   0.3908624 0.6887127
AMZN.Close   0.9301085 -0.1514348  1.0000000   0.6228299 0.3870390
GOOGL.Close  0.5480033  0.3908624  0.6228299   1.0000000 0.5885146
IBM.Close    0.2825698  0.6887127  0.3870390   0.5885146 1.0000000

Unfortunately, this correlation matrix is not positive semidefinite

 eigen(stock.corr)$values

[1]  2.8850790  1.4306393  0.4902211  0.3294150 -0.1353544

Given the approximate correlation matrix stock.corr, the built-in function nearcorr solves the nearest correlation matrix problem using sqlp

 out <- nearcorr(stock.corr)

Since this is a minimization problem, and thus a primal formulation of the SQLP, the output X from sqlp will provide the optimal solution to the problem - that is, X will be the nearest correlation matrix to stock.corr.

out$X

          [,1]        [,2]        [,3]      [,4]      [,5]
[1,] 1.0000000  0.25388359  0.86150833 0.5600734 0.3126420
[2,] 0.2538836  1.00000000 -0.09611382 0.3808981 0.6643566
[3,] 0.8615083 -0.09611382  1.00000000 0.6115212 0.3480430
[4,] 0.5600734  0.38089811  0.61152116 1.0000000 0.5935021
[5,] 0.3126420  0.66435657  0.34804303 0.5935021 1.0000000

The matrix above is symmetric with unit diagonal and all entries in [1,1]. By checking the eigenvalues,

 eigen(out$X)

 $values

 [1] 2.846016e+00 1.384062e+00 4.570408e-01 3.128807e-01 9.680507e-11

we can see that X is indeed a correlation matrix.

D-optimal experimental design

Recall from Section 2.2 that the D-Optimal experimental design problem was stated as the following dual SQLP

$$\begin{array}{ll}
\underset{Z,\, z^l,\, \lambda}{\text{maximize}} & \log\det(Z) \\
\text{subject to} & -\displaystyle\sum_{i=1}^{p}\lambda_i (u_i u_i^T) + Z = 0, \quad Z \in \mathcal{S}^n \\
& -\lambda + z^l = 0, \quad z^l \in \mathbb{R}^p_+ \\
& \mathbf{1}^T\lambda = 1, \quad \lambda \in \mathbb{R}^p
\end{array}$$

which we wrote as

$$\begin{array}{ll}
\underset{Z,\, z^l,\, \lambda}{\text{maximize}} & \log\det(Z) \\
\text{subject to} & (A^s)^T\lambda + Z = C^s, \quad Z \in \mathcal{S}^n \\
& (A^l)^T\lambda + z^l = C^l, \quad z^l \in \mathbb{R}^p_+ \\
& (A^u)^T\lambda = C^u, \quad \lambda \in \mathbb{R}^p
\end{array}$$

where $b = \mathbf{0}$, and

$$A^s = [\text{svec}(A_1),\ldots,\text{svec}(A_p)], \qquad A^l = -I_p, \qquad A^u = \mathbf{1}^T$$

$$C^s = \mathbf{0}_{n\times n}, \qquad C^l = \mathbf{0}_{p\times 1}, \qquad C^u = 1$$


Here, $A_1,\ldots,A_p$ are given by

$$A_i = -u_iu_i^T, \quad i = 1,\ldots,p$$

To convert this to a form usable by sdpt3r, we first declare the three blocks in blk. The first block is semidefinite, containing the matrix $Z$; the second is a linear block containing $z^l$; and the third is an unrestricted block containing $\lambda$:

 data(DoptDesign)
 V = DoptDesign
 n = nrow(V)
 p = ncol(V)

 blk = c("s" = n, "l" = p, "u" = 1)

Next, by noting the variable λ does not appear in the objective function, we specify b as a vector of zeros

 b <- matrix(0, nrow = p, ncol = 1)

Next, looking at the right-hand side of the constraints, we define the matrices C

 C1 <- matrix(0, nrow = n, ncol = n)
 C2 <- matrix(0, nrow = p, ncol = 1)
 C3 <- 1

 C = list(C1,C2,C3)

Finally, we construct At for each variable

 A <- matrix(list(), nrow = p, ncol = 1)

 for(k in 1:p){
   A[[k]] <- -V[,k] %*% t(V[,k])
 }

 At1 <- svec(blk[1], A)[[1]]
 At2 <- diag(-1, nrow = p, ncol = p)
 At3 <- matrix(1, nrow = 1, ncol = p)

 At = list(At1,At2,At3)

The final hurdle to address in this problem is the existence of the log-barrier. Recall that control assumes the parameters $v^s$, $v^q$, and $v^l$ in Equation (4) are all zero. Here that is not the case: we have a log det term containing $Z$ in the objective function, meaning $v^s$ is equal to one. To pass this to sqlp, we define the control$parbarrier variable as

 control <- list(parbarrier = matrix(list(),3,1))
 control$parbarrier[[1]] <- 1
 control$parbarrier[[2]] <- 0
 control$parbarrier[[3]] <- 0

The D-Optimal experimental design problem can now be solved using sqlp

 sqlp(blk, At, C, b, control)

A numerical example and the doptimal function

To demonstrate the output generated from a D-optimal experimental design problem, we consider a simple $3\times 25$ matrix containing the known test vectors $u_1,\ldots,u_{25}$ (the data is available in the sdpt3r package). To solve the problem using sqlp, we use the function doptimal, which takes as input an $n\times p$ matrix U containing the known test vectors, and returns the optimal solution. The output we are interested in is y, corresponding to $\lambda$ in our formulation: the proportion of each $u_i$ necessary to achieve maximum information in the experiment.

 data("DoptDesign")

 out <- doptimal(DoptDesign)

out$y
       [,1]
 [1,] 0.000
 [2,] 0.000
 [3,] 0.000
 [4,] 0.000
 [5,] 0.000
 [6,] 0.000
 [7,] 0.154
 [8,] 0.000
 [9,] 0.000
[10,] 0.000
[11,] 0.000
[12,] 0.000
[13,] 0.319
[14,] 0.000
[15,] 0.000
[16,] 0.240
[17,] 0.000
[18,] 0.000
[19,] 0.000
[20,] 0.000
[21,] 0.000
[22,] 0.000
[23,] 0.287
[24,] 0.000
[25,] 0.000

The information matrix $A^TA$ is (up to the factor $q$) a linear combination of the outer products $u_iu_i^T$ of the test vectors, weighted by the optimal vector y above.
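
For example, this weighted combination can be assembled directly from the columns of DoptDesign and the weights above; the following is a sketch which assumes, as in the text, that the columns of DoptDesign are the test vectors $u_i$:

 U <- DoptDesign
 lambda <- out$y
 ATA <- matrix(0, nrow(U), nrow(U))
 for (i in 1:ncol(U)) {
   ATA <- ATA + lambda[i] * U[, i] %*% t(U[, i])  # lambda_i * u_i u_i^T
 }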

4 Additional problems

The sdpt3r package considerably broadens the set of optimization problems that can be solved in R. In addition to those problems presented in detail in Section 3, there are a large number of well known problems that can also be formulated as an SQLP.

Each problem presented will be described briefly, with appropriate references for the interested reader, and presented mathematically in its classical form, not as an SQLP as in Equation (3) or (4). Accompanying each problem will be an R helper function which solves the corresponding problem using sqlp. Each helper function in sdpt3r (including those for the Max-Cut, D-optimal experimental design, and nearest correlation matrix problems) is an R implementation of the corresponding helper function available to the user in the Matlab SDPT3 package.

Minimum volume ellipsoids

The problem of finding the ellipsoid of minimum volume containing a set of points v1,...,vn is stated as the following optimization problem

$$\begin{array}{ll}
\underset{B,\, d}{\text{maximize}} & \log\det(B) \\
\text{subject to} & \|Bx + d\| \leq 1, \quad \forall x \in \{v_1,\ldots,v_n\}
\end{array}$$

The function minelips takes as input an n×p matrix V containing the points around which we would like to find the minimum volume ellipsoid, and returns the optimal solution using sqlp.

 data(Vminelips)
 out <- minelips(Vminelips)

Distance weighted discrimination

Given two sets of points in a matrix $X$ with associated class labels $y_i \in \{-1, 1\}$ collected in $Y = \text{diag}(y)$, distance weighted discrimination seeks to classify the points into two distinct subsets by finding a hyperplane between the two sets of points. Mathematically, the distance weighted discrimination problem seeks a hyperplane defined by a normal vector, $\omega$, and position, $\beta$, such that each element of the residual vector $\bar{r} = YX^T\omega + \beta y$ is positive and large. Since the class labels are either 1 or -1, having positive residuals is equivalent to having each point on the proper side of the hyperplane.

Of course, it may be impossible to have a perfect separation of points using a linear hyperplane, so an error term ξ is introduced. Thus, the perturbed residuals are defined to be

$$r = YX^T\omega + \beta y + \xi$$

Distance weighted discrimination solves the following optimization problem to find the optimal hyperplane.

$$\begin{array}{ll}
\underset{r,\, \omega,\, \beta,\, \xi}{\text{minimize}} & \displaystyle\sum_{i=1}^{n}(1/r_i) + C\mathbf{1}^T\xi \\
\text{subject to} & r = YX^T\omega + \beta y + \xi \\
& \omega^T\omega \leq 1 \\
& r \geq 0, \quad \xi \geq 0
\end{array}$$

where C>0 is a penalty parameter to be chosen.

The function dwd takes as input two $n\times p$ matrices X1 and X2 containing the points to be separated, as well as a penalty term $C > 0$ penalizing the movement of a point on the wrong side of the hyperplane to the proper side, and returns the optimal solution using sqlp.

 data(Andwd)
 data(Apdwd)
 penalty <- 0.5

 out <- dwd(Apdwd,Andwd,penalty)

Max-kCut

Similar to the Max-Cut problem, the Max-kCut problem seeks, for a graph $G = (V, E)$ and an integer $k$, a partition of the vertices into $k$ subsets such that the total weight of the edges connecting distinct subsets is as large as possible. For a given (weighted) adjacency matrix $B$ and integer $k$, the Max-kCut problem is formulated as the following primal problem

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \langle C,\, X\rangle \\
\text{subject to} & \text{diag}(X) = \mathbf{1} \\
& X_{ij} \geq -1/(k-1), \quad \forall\, i \neq j \\
& X \in \mathcal{S}^n
\end{array}$$

Here, $C = -(1 - 1/k)/2\,(\text{diag}(B\mathbf{1}) - B)$. The Max-kCut problem is slightly more complex than the Max-Cut problem due to the inequality constraint. In order to turn this into a standard SQLP, we must replace the inequality constraints with equality constraints, which we do by introducing a slack variable $x^l$, allowing the problem to be restated as

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \langle C,\, X\rangle \\
\text{subject to} & \text{diag}(X) = \mathbf{1} \\
& X_{ij} - x^l = -1/(k-1), \quad \forall\, i \neq j \\
& X \in \mathcal{S}^n, \quad x^l \in \mathcal{L}^{n(n+1)/2}
\end{array}$$

The function maxkcut takes as input an adjacency matrix B and an integer k, and returns the optimal solution using sqlp.

 data(Bmaxkcut)
 k = 2

 out <- maxkcut(Bmaxkcut,k)

Graph partitioning problem

The graph partitioning problem can be formulated as the following primal optimization problem

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \text{tr}(CX) \\
\text{subject to} & \text{tr}(\mathbf{1}\mathbf{1}^TX) = \alpha \\
& \text{diag}(X) = \mathbf{1}
\end{array}$$

Here, $C = \text{diag}(B\mathbf{1}) - B$ for an adjacency matrix $B$, and $\alpha$ is any real number.

The function gpp takes as input a weighted adjacency matrix B and a real number alpha, and returns the optimal solution using sqlp.

 data(Bgpp)
 alpha <- nrow(Bgpp)

 out <- gpp(Bgpp, alpha)

The Lovasz number

The Lovasz number of a graph $G$, denoted $\vartheta(G)$, is an upper bound on the Shannon capacity of the graph. For an adjacency matrix $B = [B_{ij}]$, the problem of finding the Lovasz number is given by the following primal SQLP

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \text{tr}(CX) \\
\text{subject to} & \text{tr}(X) = 1 \\
& X_{ij} = 0 \quad \text{if } B_{ij} = 1 \\
& X \in \mathcal{S}^n
\end{array}$$

The function lovasz takes as input an adjacency matrix B, and returns the optimal solution using sqlp.

 data(Glovasz)

 out <- lovasz(Glovasz)

Toeplitz approximation

Given a symmetric matrix F, the Toeplitz approximation problem seeks to find the nearest symmetric positive definite Toeplitz matrix. In general, a Toeplitz matrix is one with constant descending diagonals; for example,

$$T = \begin{bmatrix}
a & b & c & d & e \\
f & a & b & c & d \\
g & f & a & b & c \\
h & g & f & a & b \\
i & h & g & f & a
\end{bmatrix}$$

is a general Toeplitz matrix. The problem is formulated as the following optimization problem

$$\begin{array}{ll}
\underset{y}{\text{maximize}} & -y_{n+1} \\
\text{subject to} & \begin{bmatrix} I & 0 \\ 0 & -\beta \end{bmatrix} + \displaystyle\sum_{k=1}^{n} y_k \begin{bmatrix} 0 & \gamma_k e_k \\ \gamma_k e_k^T & -2q_k \end{bmatrix} + y_{n+1}B \succeq 0 \\
& -[y_1,\ldots,y_n]^T + y_{n+1}\mathbf{1} \geq 0
\end{array}$$

where $B$ is an $(n+1)\times(n+1)$ matrix of zeros except $B_{(n+1)(n+1)} = 1$, $q_1 = \text{tr}(F)$, $q_k$ is the sum of the elements on the $k$th diagonals of the upper and lower triangular parts of $F$, $\gamma_1 = \sqrt{n}$, $\gamma_k = \sqrt{2(n-k+1)}$, $k = 2,\ldots,n$, and $\beta = \|F\|_F^2$.
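
Read literally, these scalars could be computed as follows; this is an illustrative sketch of our reading of the definitions above (assuming $q_k$ sums the $k$th super- and sub-diagonals of the symmetric matrix F), not code from the package:

 Fmat <- matrix(c(2, 1, 0, 1, 2, 1, 0, 1, 2), 3, 3)  # the symmetric matrix F
 n <- nrow(Fmat)
 # q_1 = tr(F); for k > 1, q_k sums the kth super- and sub-diagonals
 q <- sapply(1:n, function(k) {
   if (k == 1) sum(diag(Fmat)) else 2 * sum(Fmat[col(Fmat) - row(Fmat) == k - 1])
 })
 gam <- c(sqrt(n), sqrt(2 * (n - (2:n) + 1)))
 beta <- sum(Fmat^2)  # the squared Frobenius norm of F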

The function toep takes as input a symmetric matrix F for which we would like to find the nearest Toeplitz matrix, and returns the optimal solution using sqlp.

 data(Ftoep)

 out <- toep(Ftoep)

The educational testing problem

The educational testing problem arises in measuring the reliability of a student's total score in an examination consisting of a number of sub-tests (Fletcher, 1981). In terms of formulation as an optimization problem, the problem is to determine how much can be subtracted from the diagonal of a given symmetric positive definite matrix $A$ such that the resulting matrix remains positive semidefinite (Chu and Wright, 1995).

The Educational Testing Problem (ETP) is formulated as the following dual problem

$$\begin{array}{ll}
\underset{d}{\text{maximize}} & \mathbf{1}^Td \\
\text{subject to} & A - \text{diag}(d) \succeq 0 \\
& d \geq 0
\end{array}$$

where $d = [d_1, d_2,\ldots,d_n]$ is a vector of size $n$ and $\text{diag}(d)$ denotes the corresponding $n\times n$ diagonal matrix. In the second constraint, having each element in $d$ be greater than or equal to 0 is equivalent to having $\text{diag}(d) \succeq 0$.

The corresponding primal problem is

$$\begin{array}{ll}
\underset{X}{\text{minimize}} & \text{tr}(AX) \\
\text{subject to} & \text{diag}(X) \geq \mathbf{1} \\
& X \succeq 0
\end{array}$$

The function etp takes as input an n×n positive definite matrix A, and returns the optimal solution using sqlp.

 data(Betp)

 out <- etp(Betp)

Logarithmic Chebyshev approximation

For a p×n (p>n) matrix B and p×1 vector f, the Logarithmic Chebyshev Approximation problem is stated as the following optimization problem

$$\begin{array}{ll}
\underset{x,\, t}{\text{minimize}} & t \\
\text{subject to} & 1/t \leq (x^TB_i)/f_i \leq t, \quad i = 1,\ldots,p
\end{array}$$

where Bi denotes the ith row of the matrix B. Note that we require each element of Bj/f to be greater than or equal to 0 for all j.

The function logcheby takes as input a matrix B and vector f, and returns the optimal solution to the Logarithmic Chebyshev Approximation problem using sqlp.

 data(Blogcheby)
 data(flogcheby)

 out <- logcheby(Blogcheby, flogcheby)

Linear matrix inequality problems

We consider three distinct linear matrix inequality problems, all written in the form of a dual optimization problem. The first linear matrix inequality problem we consider is defined by the following optimization problem, for an $n\times n$ matrix $B$ known in advance

$$\begin{array}{ll}
\underset{\eta,\, Y}{\text{maximize}} & \eta \\
\text{subject to} & BY + YB^T \preceq 0 \\
& Y \preceq I \\
& Y - \eta I \succeq 0 \\
& Y_{11} = 1, \quad Y \in \mathcal{S}^n
\end{array}$$

The function lmi1 takes as input a matrix B, and returns the optimal solution using sqlp.

 B <- matrix(c(-1,5,1,0,-2,1,0,0,-1), nrow=3)

 out <- lmi1(B)

The second linear matrix inequality problem is

$$\begin{array}{ll}
\underset{P,\, d}{\text{maximize}} & \text{tr}(P) \\
\text{subject to} & A_1P + PA_1^T + B\,\text{diag}(d)\,B^T \preceq 0 \\
& A_2P + PA_2^T + B\,\text{diag}(d)\,B^T \preceq 0 \\
& d \geq 0, \quad \displaystyle\sum_{i=1}^{p}d_i = 1
\end{array}$$

Here, the matrices B, A1, and A2 are known in advance.

The function lmi2 takes the matrices A1, A2, and B as input, and returns the optimal solution using sqlp.

 A1 <- matrix(c(-1,0,1,0,-2,1,0,0,-1),3,3)
 A2 <- A1 + 0.1*t(A1)
 B  <- matrix(c(1,3,5,2,4,6),3,2)

 out <- lmi2(A1,A2,B)

The final linear matrix inequality problem originates from a problem in control theory and requires that three matrices, $A$, $B$, and $G$, be known in advance:

$$\begin{array}{ll}
\underset{\eta,\, P}{\text{maximize}} & \eta \\
\text{subject to} & \begin{bmatrix} AP + PA^T & 0 \\ BP & 0 \end{bmatrix} + \eta\begin{bmatrix} 0 & 0 \\ 0 & I \end{bmatrix} \preceq \begin{bmatrix} G & 0 \\ 0 & 0 \end{bmatrix}
\end{array}$$

The function lmi3 takes as input the matrices A, B, and G, and returns the optimal solution using sqlp.

 A <- matrix(c(-1,0,1,0,-2,1,0,0,-1),3,3)
 B <- matrix(c(1,2,3,4,5,6), 2, 3)
 G <- matrix(1,3,3)

 out <- lmi3(A,B,G)

5 Summary

In Section 2, we introduced the problem of conic linear optimization. Using the Max-Cut, Nearest Correlation Matrix, and D-Optimal Experimental Design problems as examples, we demonstrated the increasing generality of the problem, culminating in a general form of the conic linear optimization problem, known as the semidefinite quadratic linear program, in Section 2.2.

In Section 3, we introduced the R package sdpt3r, and the main function call available in the package, sqlp. The specifics of the necessary input variables, the optional input variables, and the output variables provided by sqlp were presented. Using the examples from Section 2, we showed how a problem written as a semidefinite quadratic linear program could be solved in R using sdpt3r.

Finally, in Section 4, we presented a number of additional problems that can be solved using the sdpt3r package, and presented the helper functions available so these problems could be easily solved using sqlp.

The sdpt3r package broadens the range of problems that can be solved using R. Here, we discussed a number of problems that can be solved using sdpt3r, including problems in the statistical sciences, graph theory, classification, control theory, and general matrix theory. The sqlp function in sdpt3r is in fact even more general, and users may apply it to any other conic linear optimization problem that can be written in the form of Equation (3) or (4) by specifying the input variables blk, At, C, and b for their particular problem.

6 control

vers specifies the search direction:
    0, HKM if semidefinite blocks present, NT otherwise (default)
    1, HKM direction
    2, NT direction
predcorr TRUE, use Mehrotra prediction-correction (default); FALSE otherwise
gam step-length (default 0)
expon exponent used to decrease sigma (default 1)
gaptol tolerance for duality gap as a fraction of the objective function (default 1e-8)
inftol tolerance for stopping due to infeasibility (default 1e-8)
steptol tolerance for stopping due to small steps (default 1e-6)
maxit maximum number of iterations (default 100)
stoplevel 0, continue until successful completion, maximum iterations, or numerical failure
    1, automatically detect termination, restart if small steps is the cause (default)
    2, automatically detect termination
scale_data TRUE, scale data prior to solving; FALSE otherwise (default)
rmdepconstr TRUE, remove nearly dependent constraints; FALSE otherwise (default)
parbarrier declares the existence of log-barrier terms (default 0, i.e., no log barrier)

CRAN packages used

sdpt3r, Rdsdp, Rcsdp, cccp, scs, Rmosek, quantmod

CRAN Task Views implied by cited packages

Finance, Optimization

    References

    S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan. Linear matrix inequalities in system and control theory. SIAM, 1994.
    H. C. Bravo. Rcsdp: R interface to the CSDP semidefinite programming library. 2016. URL http://CRAN.R-project.org/package=Rcsdp. R package version 0.1.55.
    M. T. Chu and J. W. Wright. The educational testing problem revisited. IMA Journal of Numerical Analysis, 15(1): 141–160, 1995.
    R. Fletcher. A nonlinear programming problem in statistics (educational testing). SIAM Journal on Scientific and Statistical Computing, 2(3): 257–267, 1981.
    R. M. Freund. Introduction to semidefinite programming (SDP). Massachusetts Institute of Technology, 2004.
    H. Friberg. Rmosek: The R-to-MOSEK optimization interface. R package version 1.3, 2012.
    A. Fu, B. Narasimhan and S. Boyd. CVXR: An R Package for Disciplined Convex Optimization. arXiv, 2017. URL https://arxiv.org/abs/1711.07582v1.
    M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6): 1115–1145, 1995.
    C. Helmberg, F. Rendl, R. J. Vanderbei and H. Wolkowicz. An interior-point method for semidefinite programming. SIAM Journal on Optimization, 6(2): 342–361, 1996.
    N. J. Higham. Computing the nearest correlation matrix-a problem from finance. IMA Journal of Numerical Analysis, 22(3): 329–343, 2002.
    F. John. Extremum problems with inequalities as subsidiary conditions. In Traces and Emergence of Nonlinear Programming, pages 197–215. Springer-Verlag, 2014.
    R. M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103. Springer-Verlag, 1972.
    J. S. Marron, M. J. Todd and J. Ahn. Distance-weighted discrimination. Journal of the American Statistical Association, 102(480): 1267–1271, 2007.
    Y. E. Nesterov and M. J. Todd. Self-scaled barriers and interior-point methods for convex programming. Mathematics of Operations Research, 22(1): 1–42, 1997.
    B. O’Donoghue, E. Chu, N. Parikh and S. Boyd. SCS: Splitting conic solver, version 2.0.2. 2017.
    C. H. Papadimitriou and M. Yannakakis. Optimization, approximation, and complexity classes. Journal of Computer and System Sciences, 43(3): 425–440, 1991.
    B. Pfaff. cccp: Cone constrained convex problems. 2015. URL http://CRAN.R-project.org/package=cccp. R package version 0.2-4.
    J. A. Ryan and J. M. Ulrich. quantmod: Quantitative financial modelling framework. 2017. URL http://CRAN.R-project.org/package=quantmod. R package version 0.4-9.
    K. Smith. On the standard deviations of adjusted and interpolated values of an observed polynomial function and its constants and the guidance they give towards a proper choice of the distribution of observations. Biometrika, 12(1-2): 1–85, 1918.
    S. Theußl, F. Schwendinger and K. Hornik. ROI: The R Optimization Infrastructure package. Report 133, WU Vienna University of Economics and Business, 2017. URL http://epub.wu.ac.at/5858/.
    K.-C. Toh, M. J. Todd and R. H. Tütüncü. SDPT3 - a MATLAB software package for semidefinite programming, version 1.3. Optimization Methods and Software, 11(1-4): 545–581, 1999.
    R. H. Tütüncü, K.-C. Toh and M. J. Todd. Solving semidefinite-quadratic-linear programs using SDPT3. Mathematical Programming, 95(2): 189–217, 2003.
    L. Vandenberghe, S. Boyd and S.-P. Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19(2): 499–533, 1998.
    Z. Zhu and Y. Ye. Rdsdp: R interface to DSDP semidefinite programming library. 2016. URL http://CRAN.R-project.org/package=Rdsdp. R package version 1.0.4-2.

    Reuse

    Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

    Citation

    For attribution, please cite this work as

    Rahman, "sdpt3r: Semidefinite Quadratic Linear Programming in R", The R Journal, 2018

    BibTeX citation

    @article{RJ-2018-063,
      author = {Rahman, Adam},
      title = {sdpt3r: Semidefinite Quadratic Linear Programming in R},
      journal = {The R Journal},
      year = {2018},
      note = {https://rjournal.github.io/},
      volume = {10},
      issue = {2},
      issn = {2073-4859},
      pages = {371-394}
    }