Final work for CVX optimization
Barrier Method and Primal-dual Interior-point Method for Linear Programming
Name:
****
Student ID:
********
Major:
Communication and Information System
2015/7/16 (Thursday)
Barrier Method and Primal-dual Interior-point Method
for Linear Programming
School of Information Science and Engineering,
Shandong University, P.R. China,250100
*@
Abstract: Interior-point methods (IPMs) solve convex optimization problems that include inequality constraints. The most attractive features of IPMs are their speed of convergence and their robust handling of equality and inequality constraints, which make them a practical tool for solving large-scale optimization problems. In this paper, two kinds of interior-point methods, the barrier method and the primal-dual interior-point method, are discussed, and a comparison of the two is given based on solving standard LPs.
Keywords: Interior-point method, Barrier method, Primal-dual interior-point method, LP.
1 Introduction
The interior-point method is widely used to solve convex optimization problems that include inequality constraints. Interior-point methods solve the problem (or the KKT conditions) by applying Newton's method to a sequence of equality constrained problems, or to a sequence of modified versions of the KKT conditions. We can view interior-point methods as another level in the hierarchy of convex optimization algorithms. Linear equality constrained quadratic problems are the simplest: for these problems the KKT conditions are a set of linear equations, which can be solved analytically. Newton's method is the next level in the hierarchy; we can think of it as a technique for solving a linear equality constrained optimization problem, with twice differentiable objective, by reducing it to a sequence of linear equality constrained quadratic problems. Interior-point methods form the next level in the hierarchy: they solve an optimization problem with linear equality and inequality constraints by reducing it to a sequence of linear equality constrained problems.
2 Theory basis
In this chapter we discuss interior-point methods for solving convex optimization problems that include inequality constraints,
minimize f0(x)
subject to fi(x) ≤ 0, i = 1, 2, …, m
Ax = b    (2.1)
where f0, …, fm : Rⁿ → R are convex and twice continuously differentiable, and A ∈ R^(p×n) with rank A = p < n. We assume that the problem is solvable, i.e., an optimal x⋆ exists. We denote the optimal value f0(x⋆) as p⋆.
We also assume that the problem is strictly feasible. This means that Slater's constraint qualification holds, so there exist dual optimal λ⋆ ∈ Rᵐ, ν⋆ ∈ Rᵖ, which together with x⋆ satisfy the KKT conditions
Ax⋆ = b, fi(x⋆) ≤ 0, i = 1, 2, …, m
λ⋆ ≽ 0
∇f0(x⋆) + ∑_{i=1}^{m} λi⋆ ∇fi(x⋆) + Aᵀν⋆ = 0
λi⋆ fi(x⋆) = 0, i = 1, 2, …, m    (2.2)
Interior-point methods solve the problem (2.1) (or the KKT conditions (2.2)) by applying Newton's method to a sequence of equality constrained problems, or to a sequence of modified versions of the KKT conditions.
2.1 Barrier Method
Our goal is to approximately formulate the inequality constrained problem (2.1) as an equality constrained problem to which Newton's method can be applied. The first step is to rewrite problem (2.1), making the inequality constraints implicit in the objective:
minimize f0(x) + ∑_{i=1}^{m} I−(fi(x))
subject to Ax = b    (2.3)
where I− : R → R is the indicator function for the nonpositive reals,
I−(u) = 0 for u ≤ 0,  I−(u) = ∞ for u > 0.    (2.4)
Figure 1 The dashed lines show the function I−(u), and the solid curves show Î−(u) = −(1/t) log(−u), for t = 0.5, 1, 2. The curve for t = 2 gives the best approximation.
The basic idea of the barrier method is to approximate the indicator function I− by
Î−(u) = −(1/t) log(−u),  dom Î− = −R₊₊,
where t > 0 is a parameter that sets the accuracy of the approximation. Substituting Î− for I− in (2.3) gives the approximation
minimize f0(x) + ∑_{i=1}^{m} −(1/t) log(−fi(x))
subject to Ax = b    (2.5)
The objective here is convex, since −(1/t) log(−u) is convex and increasing in u, and it is differentiable. Assuming an appropriate closedness condition holds, Newton's method can be used to solve it. Multiplying the objective by t, we consider the equivalent problem
minimize t f0(x) + ∑_{i=1}^{m} −log(−fi(x))
subject to Ax = b    (2.6)
which has the same minimizers.
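One step the text leaves implicit is why the parameter t controls the accuracy of the approximation: every central point x⋆(t) yields a dual feasible pair whose duality gap is exactly m/t. Sketching the standard argument (with φ(x) = −∑ log(−fi(x)) denoting the logarithmic barrier, and ν̂ the multiplier for Ax = b in the centering problem):

```latex
% Optimality condition of the centering problem (2.6):
t\,\nabla f_0(x^\star(t))
  + \sum_{i=1}^{m} \frac{-1}{f_i(x^\star(t))}\,\nabla f_i(x^\star(t))
  + A^T\hat{\nu} = 0 .
% Dividing by t and defining
\lambda_i(t) = \frac{-1}{t\,f_i(x^\star(t))} > 0, \qquad \nu(t) = \hat{\nu}/t ,
% recovers the stationarity condition in (2.2); the corresponding dual value is
g\bigl(\lambda(t), \nu(t)\bigr) = f_0(x^\star(t)) - m/t
\quad\Longrightarrow\quad
f_0(x^\star(t)) - p^\star \le m/t .
```

So driving t upward drives the suboptimality bound m/t to zero, which is the stopping criterion used below.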
The barrier method algorithm is as follows:
given strictly feasible x, t := t(0) > 0, µ > 1, tolerance ε > 0.
repeat
1. Centering step: compute x⋆(t) by minimizing t f0(x) + φ(x) subject to Ax = b, starting at x, where φ(x) = −∑ log(−fi(x)) is the logarithmic barrier.
2. Update: x := x⋆(t).
3. Stopping criterion: quit if m/t < ε.
4. Increase t: t := µt.
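The structure above — an outer loop that increases t around inner Newton centering steps — can be sketched in Python. This is a minimal illustration for the inequality-form LP minimize cᵀx subject to Ax ≤ b (no equality constraints, to keep the linear algebra short); the function names and this simplified setting are ours, not the paper's:

```python
import numpy as np

def centering(c, A, b, x, t, alpha=0.01, beta=0.5, tol=1e-8, max_iter=100):
    """Newton's method for the centering problem:
       minimize  t*c'x - sum(log(b - A x))."""
    for _ in range(max_iter):
        s = b - A @ x                        # slacks; must stay positive
        grad = t * c + A.T @ (1.0 / s)       # gradient of the barrier objective
        H = A.T @ np.diag(1.0 / s**2) @ A    # Hessian of the barrier objective
        dx = -np.linalg.solve(H, grad)       # Newton direction
        lam2 = -grad @ dx                    # Newton decrement squared
        if lam2 / 2.0 <= tol:
            break
        f = lambda y: t * c @ y - np.sum(np.log(b - A @ y))
        step = 1.0
        while np.min(b - A @ (x + step * dx)) <= 0:   # stay strictly feasible
            step *= beta
        while f(x + step * dx) > f(x) + alpha * step * (grad @ dx):
            step *= beta                     # backtracking line search
        x = x + step * dx
    return x

def barrier_lp(c, A, b, x0, mu=20.0, eps=1e-6):
    """Barrier method (outer loop) for: minimize c'x  s.t.  A x <= b.
    x0 must be strictly feasible; m/t bounds the duality gap."""
    m = A.shape[0]
    x, t = x0, 1.0
    while m / t >= eps:                      # stopping criterion m/t < eps
        x = centering(c, A, b, x, t)         # centering step, warm-started
        t *= mu                              # increase barrier parameter
    return x
```

For example, for the box-constrained LP minimize x₁ − x₂ over [0, 1]², starting from the analytic center (0.5, 0.5), the iterates converge to the vertex (0, 1).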
2.2 Primal-dual interior-point methods
Primal-dual interior-point methods are very similar to the barrier method, with some differences:
- There is only one loop or iteration, i.e., there is no distinction between inner and outer iterations as in the barrier method. At each iteration, both the primal and dual variables are updated.
- The search directions in a primal-dual interior-point method are obtained from Newton's method, applied to modified KKT equations (i.e., the optimality conditions for the logarithmic barrier centering problem). The primal-dual search directions are similar to, but not quite the same as, the search directions that arise in the barrier method.
- In a primal-dual interior-point method, the primal and dual iterates are not necessarily feasible.
Primal-dual interior-point methods are often more efficient than the barrier method, especially when high accuracy is required, since they can exhibit better than linear convergence.
The basic primal-dual interior-point algorithm is as follows:
given x with f(x) ≺ 0, λ ≻ 0, µ > 1, tolerances ε_feas > 0, ε > 0.
repeat
1. Determine t: set t := µm/η̂, where η̂ = −f(x)ᵀλ is the surrogate duality gap.
2. Compute the primal-dual search direction Δy_pd by applying Newton's method to the modified KKT equations.
3. Line search and update: determine step length s > 0 by backtracking on the residual norm, and set y := y + s Δy_pd.
until ‖r_pri‖₂ ≤ ε_feas, ‖r_dual‖₂ ≤ ε_feas, and η̂ ≤ ε.
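The primal-dual search direction — Newton's method applied to the modified KKT equations — can be sketched compactly for the inequality-form LP minimize cᵀx subject to Ax ≤ b (so f(x) = Ax − b). This is an illustrative sketch with our own names and interface, not the paper's code:

```python
import numpy as np

def pd_search_direction(A, b, c, x, lam, t):
    """One primal-dual Newton step from the modified KKT system for
       minimize c'x  s.t.  A x <= b,  with f(x) = A x - b."""
    m, n = A.shape
    f = A @ x - b                               # f < 0 at a strictly feasible x
    r_dual = c + A.T @ lam                      # dual residual
    r_cent = -lam * f - (1.0 / t) * np.ones(m)  # centrality residual
    # Linearized (modified KKT) system in the unknowns (dx, dlam)
    KKT = np.block([[np.zeros((n, n)),        A.T],
                    [-np.diag(lam) @ A, -np.diag(f)]])
    d = np.linalg.solve(KKT, -np.concatenate([r_dual, r_cent]))
    return d[:n], d[n:]                         # (dx, dlam)
```

By construction, the first block row forces Aᵀ·dlam = −r_dual, so a full Newton step drives the (linear) dual residual exactly to zero — which is why these methods excel once close to the solution.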
3 Problems for example
In this section, we give two examples to examine the performance of the two methods, and present the simulation results.
3.1 A family of Standard LP by Barrier Method
3.1.1 Problem analysis
In this section we examine the performance of the barrier method as a function of the problem dimensions. We consider LPs in standard form,
minimize cᵀx
subject to Ax = b, x ≽ 0    (3.1)
with A∈Rm×n , and explore the total number of Newton steps required as a function of the number of variables n and number of equality constraints m , for a family of randomly generated problem instances. We take n=2m, i.e., twice as many variables as constraints.
The problems were generated as follows. The elements of A are independent and identically distributed, drawn from the zero mean, unit variance normal distribution N(0, 1). We take b = Ax0, where the elements of x0 are independent and uniformly distributed on [0, 1]. This ensures that the problem is strictly primal feasible, since x0 ≻ 0 is feasible. To construct the cost vector c, we first compute a vector z ∈ Rᵐ with elements distributed according to N(0, 1) and a vector s ∈ Rⁿ with elements from a uniform distribution on [0, 1]. We then take c = Aᵀz + s. This guarantees that the problem is strictly dual feasible, since Aᵀz ≺ c. The algorithm parameters we use are µ = 20, with the same parameters for the centering steps as in the examples above: backtracking parameters α = 0.01, β = 0.5, and stopping criterion λ(x)²/2 ≤ 10⁻⁵. The initial point is on the central path with t(0) = 1 (i.e., gap n). The algorithm is terminated when the initial duality gap is reduced by a factor of 10³.
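The generation recipe above can be sketched as follows (a hypothetical helper in Python rather than the MATLAB used below; the function name and seeding are ours):

```python
import numpy as np

def generate_std_lp(m, seed=0):
    """Randomly generate a strictly feasible standard-form LP with n = 2m,
    following the recipe described above."""
    rng = np.random.default_rng(seed)
    n = 2 * m
    A = rng.standard_normal((m, n))    # i.i.d. N(0,1) entries
    x0 = rng.uniform(size=n)           # uniform on [0,1]; x0 > 0
    b = A @ x0                         # so x0 is strictly primal feasible
    z = rng.standard_normal(m)
    s = rng.uniform(size=n)
    c = A.T @ z + s                    # c - A'z = s > 0: strictly dual feasible
    return A, b, c, x0, z
```

Returning z alongside the problem data makes the strict dual feasibility condition Aᵀz ≺ c directly checkable.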
Figure 2 The program flowchart of the barrier method, using Newton's method for the centering steps
In this part, the problem is discussed in two parts: the first is the relationship between the duality gap and the iteration number; the second is the effect of problem size on the number of Newton steps required.
- To examine the relationship between duality gap and iteration number, we compute the duality gap versus iteration number for three problem instances, with m = 50, m = 500, and m = 1000.
- To examine the effect of problem size on the number of Newton steps required, we generate 100 problem instances for each of 20 values of m, ranging from m = 10 to m = 1000. We solve each of these 2000 problems using the barrier method, recording only the number of Newton steps required.
3.1.2 Simulation result and analysis
Figure 3 shows the duality gap versus iteration number for the three problem instances. The plots show approximately linear convergence of the duality gap. They also show a small increase in the number of Newton steps required as the problem size grows from 50 constraints (100 variables) to 1000 constraints (2000 variables).
Figure 3 Progress of the barrier method on three LP instances, showing duality gap versus cumulative number of Newton steps, for m = 50, 500, and 1000. In each case, we observe approximately linear convergence of the duality gap.
Figure 4 Average number of Newton steps required to solve 100 randomly
generated LPs of different dimensions, with n = 2m.
Figure 4 shows the mean and standard deviation of the number of Newton steps, for each value of m. From these results we see that the standard deviation is around 2 iterations and appears to be approximately independent of problem size. Since the average number of steps required is near 23 (ranging from 17 to 29), the number of Newton steps required varies only by around ±10%. This behavior is typical for the barrier method in general: the number of Newton steps required grows very slowly with problem dimensions, and is almost always around a few tens. Of course, the computational effort to carry out one Newton step grows with the problem dimensions.
3.1.3 Main Code for Standard LP
function [gaps, inniters] = logbarrier(A, x0, b, c, m, n)
% Barrier method for the standard-form LP: minimize c'*x s.t. A*x = b, x >= 0.
MaxCalTime = 1000;            % maximum total number of Newton iterations
x = x0;
u = 20;                       % barrier parameter multiplier mu
alpha = 0.01;                 % backtracking line-search parameter
beta = 0.5;                   % backtracking step reduction factor
t = 1;                        % initial barrier parameter t0
ErrNT = 1e-5;                 % Newton stopping tolerance (lambda(x)^2/2)
ErrGap = 1e-3;                % duality-gap stopping tolerance
gaps = [];
inniters = [];
for inter1 = 1:MaxCalTime
    grad = t*c - (1./x);                       % gradient of t*c'*x - sum(log(x))
    % Newton step by block elimination; the barrier Hessian is diag(1./x.^2)
    w = -(A*diag(x.^2)*A') \ (A*(grad.*(x.^2)));
    Xnt = -(A'*w + grad).*(x.^2);              % Newton direction
    Lamd2 = -(grad + A'*w)'*Xnt;               % Newton decrement squared
    if Lamd2 <= 2*ErrNT                        % centering step has converged
        z = -w/t;                              % dual feasible point
        gap = x'*(c - A'*z);                   % duality gap
        inniters = [inniters, inter1];
        gaps = [gaps, gap];
        if gap < ErrGap                        % outer stopping criterion
            break;
        end
        t = min(t*u, (n+1)/ErrGap);            % increase barrier parameter
        continue;
    end
    % backtracking line search; first shrink the step to keep x > 0
    step = 1;
    while min(x + step*Xnt) < 0
        step = beta*step;
    end
    while step*(t*c + A'*w)'*Xnt - sum(log(1 + step*Xnt./x)) > alpha*step*Lamd2
        step = beta*step;
    end
    x = x + step*Xnt;                          % Newton update
end
end
3.2 A Small LP by Primal-dual Interior-point Method
3.2.1 Problem analysis
In this section we illustrate the performance of the primal-dual interior-point method. We consider a small linear program in inequality form,
minimize cᵀx
subject to Ax ≼ b    (3.2)
with A∈R100×50. The data were generated randomly, in such a way that the problem is strictly primal and dual feasible, with optimal value p*=1.
We start the primal-dual interior-point method at a randomly generated x0 that satisfies f(x0) ≺ 0, and take λi(0) = −1/fi(x0), so the initial value of the surrogate gap is η̂ = 100. The parameter values we use for the primal-dual interior-point method are µ = 10, β = 0.5, ε = 10⁻⁸, α = 0.01. We then record the surrogate gap η̂ and the norm of the primal and dual residuals r_feas versus iteration number, where
r_feas = (‖r_pri‖₂² + ‖r_dual‖₂²)^(1/2)    (3.3)
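The two monitored quantities can be computed directly from the current iterate. A sketch for the inequality-form LP of (3.2) (our own helper name; since there are no equality constraints, r_pri = 0 and r_feas reduces to the dual residual norm):

```python
import numpy as np

def surrogate_gap_and_rfeas(A, b, c, x, lam):
    """Surrogate duality gap eta-hat and residual norm r_feas for
       minimize c'x  s.t.  A x <= b  (no equality constraints)."""
    f = A @ x - b                      # f < 0 for a strictly feasible x
    eta = -f @ lam                     # surrogate duality gap
    r_dual = c + A.T @ lam             # dual residual
    r_feas = np.linalg.norm(r_dual)    # r_pri = 0 here
    return eta, r_feas
```

Note that with the initialization λi = −1/fi(x0), the surrogate gap is −∑ fi(x0)·(−1/fi(x0)) = m, which is why it starts at 100 for the m = 100 constraints above.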
Figure 5 The program flowchart of the primal-dual interior-point method, applying Newton's method to the modified KKT equations
3.2.2 Simulation result and analysis
Figure 6 shows the progress of the primal-dual interior-point method. Two plots are shown: the surrogate gap η̂ and the norm of the primal and dual residuals r_feas, versus iteration number. (The initial point is primal feasible, so the plot shows the norm of the dual feasibility residual.) The plots show that the residual converges to zero rapidly, becoming zero to numerical precision in 24 iterations; the surrogate gap also converges to a very small value, in about 28 iterations.
Figure 6 Progress of the primal-dual interior-point method for an LP, showing the surrogate duality gap η̂ and the norm of the primal and dual residuals r_feas versus iteration number.
3.2.3 Main code for Small LP
function [x, lamda, iters, gaps, resdls] = pdip(A, b, c, x0)
% Primal-dual interior-point method for the LP: minimize c'*x s.t. A*x <= b.
MAXITERS = 500;
TOL = 1e-8;                   % tolerance on the surrogate duality gap
RESTOL = 1e-8;                % tolerance on the dual residual norm
MU = 10;                      % parameter in t = MU*m/gap
ALPHA = 0.01;                 % backtracking line-search parameter
BETA = 0.5;                   % backtracking step reduction factor
[m, n] = size(A);
gaps = []; resdls = [];
x = x0; minusFx = b - A*x;    % slacks: -f(x) = b - A*x > 0
lamda = 1./minusFx;           % initial duals: lamda_i = -1/f_i(x0)
gradF0 = c;
for iters = 1:MAXITERS
    gap = minusFx'*lamda; gaps = [gaps, gap];  % surrogate duality gap
    resdual = A'*lamda + gradF0;               % dual residual
    respri = -minusFx;                         % f(x), kept for reference
    resdls = [resdls, norm(resdual, 2)];
    if ((gap < TOL) && (norm(resdual) < RESTOL)), return; end;
    tinv = gap/(m*MU);                         % 1/t
    % primal-dual search direction from the modified KKT system
    sol = -[zeros(n,n), A'; A, diag(-minusFx./lamda)] \ ...
          [A'*lamda + c; -minusFx + tinv*(1./lamda)];
    dx = sol(1:n); dz = sol(n+[1:m]); ds = -A*dx;
    % backtracking line search on the norm of the residual
    r = [gradF0 + A'*lamda; lamda.*minusFx - tinv];
    step = min(1.0, 0.99/max(-dz./lamda));     % keep lamda > 0
    while (min(minusFx + step*ds) <= 0), step = BETA*step; end;  % keep f(x) < 0
    newz = lamda + step*dz; newx = x + step*dx; news = minusFx + step*ds;
    newr = [gradF0 + A'*newz; newz.*news - tinv];
    while (norm(newr) > (1 - ALPHA*step)*norm(r))
        step = BETA*step;
        newz = lamda + step*dz; newx = x + step*dx; news = minusFx + step*ds;
        newr = [gradF0 + A'*newz; newz.*news - tinv];
    end;
    x = x + step*dx; lamda = lamda + step*dz; minusFx = b - A*x;
end;
4 Comparison
From the results shown above, both interior-point methods perform well. To compare them, in this section we solve each problem with the other method: the small linear program with the barrier method, and the family of standard-form linear programs with the primal-dual interior-point method.
4.1 Small linear programming of inequality form
First, we use the barrier method to solve a small linear program in inequality form, the same problem as (3.2). The initial point x0 is on the central path, with a duality gap of 100. The computation is terminated when the duality gap is less than 10⁻⁶. The centering problems are solved by Newton's method with backtracking, using parameters α = 0.01, β = 0.5; the stopping criterion for Newton's method is λ(x)²/2 ≤ 10⁻⁵, where λ(x) is the Newton decrement of the function t cᵀx + φ(x).
Four plots are shown in Figure 7: three corresponding to three values of the parameter µ (2, 50, and 150) for the barrier method, and the fourth to the primal-dual interior-point method.
Figure 7 Comparison of Barrier method and Primal-dual interior-point method for a small LP, showing duality/surrogate gap versus iterations number of Newton steps.
From Figure 7, we can see that the primal-dual interior-point method is faster, especially when high accuracy is required.
4.2 A family of standard form LPs
Here we examine the performance of the primal-dual method as a function of the problem dimensions, for the same family of standard-form LPs considered in Section 3.1.1. We use the primal-dual interior-point method to solve these problem instances.