
Journal of Convex Analysis 12 (2005), No. 1, 45–69
Copyright Heldermann Verlag 2005

Global Linear Convergence of an Augmented Lagrangian Algorithm to Solve Convex Quadratic Optimization Problems

Frédéric Delbos
Institut Français du Pétrole, 1 & 4 av. de Bois-Préau, 92852 Rueil-Malmaison, France

J. Charles Gilbert
INRIA Rocquencourt, B.P. 105, 78153 Le Chesnay, France
jean-charles.gilbert@inria.fr

We consider an augmented Lagrangian algorithm for minimizing a convex quadratic function subject to linear inequality constraints. Linear optimization is an important particular instance of this problem. We show that, provided the augmentation parameter is large enough, the constraint value converges globally linearly to zero. This property is viewed as a consequence of the proximal interpretation of the algorithm and of the global radial Lipschitz continuity of the reciprocal of the dual function subdifferential. This Lipschitz property is itself obtained by means of a lemma of general interest, which compares the distances from a point in the positive orthant to an affine space, on the one hand, and to the polyhedron given by the intersection of this affine space and the positive orthant, on the other hand. No strict complementarity assumption is needed. The result is illustrated by numerical experiments, and algorithmic implications, including complexity issues, are discussed.

Keywords: Augmented Lagrangian, convex quadratic optimization, distance to a polyhedron, error bound, global linear convergence, iterative complexity, linear constraints, proximal algorithm.

MSC: 49M29, 65K05, 90C05; 90C06, 90C20
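To make the setting concrete, the following is a minimal sketch of the classical augmented Lagrangian (multiplier) method for a convex quadratic program min ½xᵀQx + cᵀx subject to Ax ≤ b, the problem class studied in the paper. It is not the authors' implementation: the penalty parameter r, the iteration count, and the use of a generic BFGS inner solver are illustrative choices. The multiplier update λ⁺ = max(0, λ + r(Ax − b)) is the standard one for inequality constraints.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian_qp(Q, c, A, b, r=10.0, iters=30):
    """Sketch: minimize 0.5 x'Qx + c'x subject to Ax <= b by the
    classical augmented Lagrangian method.  The parameters r and
    iters are illustrative, not taken from the paper."""
    n = Q.shape[0]
    x = np.zeros(n)
    lam = np.zeros(A.shape[0])  # multiplier estimates, kept nonnegative

    for _ in range(iters):
        # Augmented Lagrangian for inequalities: the constraint term is
        # (1/2r) * (||max(0, lam + r(Ax - b))||^2 - ||lam||^2).
        def L(x):
            t = np.maximum(0.0, lam + r * (A @ x - b))
            return 0.5 * x @ Q @ x + c @ x + (t @ t - lam @ lam) / (2 * r)

        def gradL(x):
            t = np.maximum(0.0, lam + r * (A @ x - b))
            return Q @ x + c + A.T @ t

        # Inner minimization of the augmented Lagrangian in x.
        x = minimize(L, x, jac=gradL, method="BFGS").x
        # Multiplier update; the max keeps the dual iterate feasible.
        lam = np.maximum(0.0, lam + r * (A @ x - b))

    return x, lam

# Example: min 0.5*||x||^2  s.t.  x1 + x2 >= 1  (written as -x1 - x2 <= -1);
# the solution is x = (0.5, 0.5) with multiplier 0.5.
Q = np.eye(2)
c = np.zeros(2)
A = np.array([[-1.0, -1.0]])
b = np.array([-1.0])
x, lam = augmented_lagrangian_qp(Q, c, A, b)
```

On this small example the multiplier error contracts by a constant factor at each outer iteration, which is the kind of global linear behavior (here for the constraint value) that the paper establishes for a sufficiently large augmentation parameter.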