
    Jane Ye

Using the theory of generalized gradients for locally Lipschitz functions, optimality conditions are derived for two-stage stochastic programming problems with constraints that have to be satisfied almost surely and constraints in integral form. The distribution of the random variable may depend on the first-stage decision; it is assumed, however, that there is a σ-finite measure which dominates all probability measures occurring in the problem.
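A schematic form of this problem class, under notation assumed here only for illustration (first-stage decision x, second-stage recourse y(ξ), random element ξ), is

\[
\min_{x,\,y(\cdot)} \; \int_{\Xi} \big[f_1(x) + f_2(y(\xi),\xi)\big]\, P_x(d\xi)
\quad \text{s.t.} \quad
g(x,y(\xi),\xi) \le 0 \ \text{a.s.}, \qquad
\int_{\Xi} h(x,y(\xi),\xi)\, P_x(d\xi) \le 0,
\]

where the distribution P_x may depend on the first-stage decision x and every P_x is dominated by a fixed σ-finite measure.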
In this paper, we study optimality conditions and exact penalization for the mathematical program with switching constraints (MPSC). We give new optimality conditions for MPSC, including the MPSC S-stationarity, strong M-stationarity, and M-stationarity conditions in critical directions. Strong M-stationarity in critical directions builds a bridge between S-stationarity in critical directions and M-stationarity in critical directions. We propose some sufficient conditions for the local error bound property and obtain exact penalty results for MPSC. Finally, we apply our results to the mathematical program with either-or constraints.
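For orientation, a minimal sketch of the MPSC formulation (standard in the literature; notation assumed here) is

\[
\min_{x} \; f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ h(x) = 0,\ \ G_i(x)\,H_i(x) = 0,\ \ i = 1,\dots,m,
\]

where the switching constraints require that, for each index i, at least one of G_i(x) and H_i(x) vanishes.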
The relaxed constant positive linear dependence constraint qualification (RCPLD) for a system of smooth equalities and inequalities is a constraint qualification that is weaker than the usual constraint qualifications such as the Mangasarian-Fromovitz constraint qualification and the linear constraint qualification. Moreover, RCPLD is known to induce an error bound property. In this paper we extend RCPLD to a very general feasibility system which may include Lipschitz continuous inequality constraints, complementarity constraints and abstract constraints. We show that RCPLD for the general system is a constraint qualification for the optimality condition stated in terms of the limiting subdifferential and the limiting normal cone, and that it is a sufficient condition for the error bound property under the strict complementarity condition for the complementarity system and Clarke regularity conditions for the inequality constraints and the abstract constraint set. Moreover, we introduce and study some sufficient ...
In this paper we study the problem of minimizing condition numbers over a compact convex subset of the cone of symmetric positive semidefinite n × n matrices. We show that the condition number is a Clarke regular strongly pseudoconvex function. We prove that a global solution of the problem can be approximated by an exact or an inexact solution of a nonsmooth convex program. This asymptotic analysis provides a valuable tool for designing an implementable algorithm for solving the problem of minimizing condition numbers. Key words: condition numbers, strongly pseudoconvex functions, quasi-convex functions, nonsmooth analysis, exact and inexact approximations.
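In symbols, writing λ_max and λ_min for the extreme eigenvalues, the problem studied has the form

\[
\min_{A \in \Omega} \; \kappa(A) := \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},
\]

with Ω a compact convex subset of the positive semidefinite cone (and κ(A) = +∞ when A is singular); the nonsmoothness stems from the extreme eigenvalue functions.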
We introduce a relaxed version of the constant positive linear dependence constraint qualification for mathematical programs with equilibrium constraints (MPEC). This condition is weaker but easier to check than the MPEC constant positive linear dependence constraint qualification, and stronger than the MPEC Abadie constraint qualification (thus, it is an MPEC constraint qualification for M-stationarity). The new constraint qualification neither implies nor is implied by MPEC generalized quasinormality. It ensures the validity of the local MPEC error bound under certain additional assumptions. We also improve some recent results on the existence of a local error bound for standard nonlinear programs.
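A standard MPEC in complementarity form (the setting assumed in this sketch) reads

\[
\min_{x} \; f(x) \quad \text{s.t.} \quad g(x) \le 0,\ \ h(x) = 0,\ \ 0 \le G(x) \;\perp\; H(x) \ge 0,
\]

where the complementarity constraint means G_i(x) ≥ 0, H_i(x) ≥ 0 and G_i(x)H_i(x) = 0 for every i.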
    Second order necessary and sufficient conditions are given for a class of optimization problems involving optimal selection of a measurable subset from a given measure subspace subject to set function inequalities. Relations between twice-differentiability at Ω and local convexity at Ω are also discussed.
In this paper, we present a uniform strong law of large numbers for random set-valued mappings in a separable Banach space and apply it to analyze the sample average approximation of Clarke stationary points of a nonsmooth one-stage stochastic minimization problem in separable ...
The bilevel program is an optimization problem where the constraint involves solutions to a parametric optimization problem. It is well known that the value function reformulation provides an equivalent single-level optimization problem, but it results in a nonsmooth optimization problem which never satisfies the usual constraint qualifications such as the Mangasarian-Fromovitz constraint qualification (MFCQ). In this paper we show that even the first-order sufficient condition for metric subregularity (which is in general weaker than MFCQ) fails at each feasible point of the bilevel program. We introduce the concept of the directional calmness condition and show that, under the directional calmness condition, the directional necessary optimality condition holds. While the directional optimality condition is in general sharper than the non-directional one, the directional calmness condition is in general weaker than the classical calmness condition and hence is more likely to hold. We perf...
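For concreteness, with lower-level problem min_y { f(x,y) : g(x,y) ≤ 0 } and value function V(x) := inf_y { f(x,y) : g(x,y) ≤ 0 }, the value function reformulation is the single-level problem

\[
\min_{x,y} \; F(x,y) \quad \text{s.t.} \quad f(x,y) - V(x) \le 0, \ \ g(x,y) \le 0;
\]

the constraint f(x,y) - V(x) ≤ 0 holds with equality at every feasible point, which is the structural reason the usual constraint qualifications such as MFCQ fail.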
The error bound property for a solution set defined by a set-valued mapping refers to an inequality that bounds the distance from points near the solution set to the set itself by a residual function. The error bound property is a Lipschitz-like/calmness property of the perturbed solution mapping, or equivalently the metric subregularity of the underlying set-valued mapping. It has been proved to be extremely useful in analyzing the convergence of many algorithms for solving optimization problems, as well as serving as a constraint qualification for optimality conditions. In this paper, we study the error bound property for the solution set of a very general second-order cone complementarity problem (SOCCP). We derive some sufficient conditions for error bounds for the SOCCP which are verifiable based on the initial problem data.
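A common form of the SOCCP (a sketch; the paper treats a more general version) is: find x such that

\[
x \in \mathcal{K}, \qquad F(x) \in \mathcal{K}, \qquad \langle x, F(x) \rangle = 0,
\]

where K = K^{n_1} × ... × K^{n_J} is a product of second-order (Lorentz) cones K^n := { (x_1, x_2) ∈ R × R^{n-1} : x_1 ≥ ||x_2|| }, and the error bound estimates the distance to the solution set by a computable residual.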
    In this paper we perform sensitivity analysis for optimization problems with variational inequality constraints (OPVICs). We provide upper estimates for the limiting subdifferential (singular limiting subdifferential) of the value function in terms of the set of normal (abnormal) coderivative (CD) multipliers for OPVICs. For the case of optimization problems with complementarity constraints (OPCCs), we provide upper estimates for the limiting subdifferentials in terms of various multipliers. An example shows that the other multipliers may not provide useful information on the subdifferentials of the value function, while the CD multipliers may provide tighter bounds. Applications to sensitivity analysis of bilevel programming problems are also given.
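In an OPVIC, the constraint requires y to solve a variational inequality parametrized by the decision x; in a generic form (notation assumed here),

\[
\min_{x,y} \; f(x,y) \quad \text{s.t.} \quad y \in S(x) := \{\, y \in C : \langle F(x,y),\, y' - y \rangle \ge 0 \ \ \forall\, y' \in C \,\},
\]

and the value function whose subdifferentials are estimated is that of this problem under perturbation of its data.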
The condition number of a Gram matrix defined by a polynomial basis and a set of points is often used to measure the sensitivity of the least squares polynomial approximation. Given a polynomial basis, we consider the problem of finding a set of points and/or weights which minimizes the condition number of the Gram matrix. The objective function f in the minimization problem is nonconvex and nonsmooth. We present an expression of the Clarke generalized gradient of f and show that f is Clarke regular and strongly semismooth. Moreover, we develop a globally convergent smoothing method to solve the minimization problem by using the exponential smoothing function. To illustrate applications of minimizing the condition number, we report numerical results for the Gram matrix defined by the weighted Vandermonde-like matrix for least squares approximation on an interval and for the Gram matrix defined by an orthonormal set of real spherical harmonics for least squares approximation ...
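As a rough illustration of the quantity being minimized (not the authors' code; a monomial Vandermonde basis and unit weights are assumed here for simplicity), the condition number of a weighted Gram matrix can be computed as follows:

```python
import numpy as np

# Rough sketch (not the authors' code): condition number of the Gram matrix
# G = V^T W V built from a Vandermonde-like matrix V on given points with
# positive weights W = diag(weights).
def gram_condition_number(points, weights, degree):
    V = np.vander(points, N=degree + 1, increasing=True)   # monomial basis columns
    G = V.T @ (weights[:, None] * V)                        # weighted Gram matrix
    eigvals = np.linalg.eigvalsh(G)                         # ascending eigenvalues
    return eigvals[-1] / eigvals[0]                         # lambda_max / lambda_min

# Equally spaced vs. Chebyshev points on [-1, 1], unit weights
n, degree = 20, 10
w = np.ones(n)
uniform = np.linspace(-1.0, 1.0, n)
chebyshev = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
print(gram_condition_number(uniform, w, degree))     # large condition number
print(gram_condition_number(chebyshev, w, degree))   # much better conditioned
```

Chebyshev points yield a markedly smaller condition number than equally spaced points, which is the kind of effect that optimizing over points and weights targets.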
The exact penalty approach aims at replacing a constrained optimization problem by an equivalent unconstrained optimization problem. Most results in the exact penalization literature are concerned with finding conditions under which a solution of the constrained optimization problem is a solution of an unconstrained penalized optimization problem; the reverse property is rarely studied. In this paper we study the reverse property. We give conditions under which the original constrained (single and/or multiobjective) optimization problem and the unconstrained exact penalized problem are exactly equivalent. The main conditions to ensure the exact penalty principle for optimization problems include the global and local error bound conditions. By using variational analysis, these conditions may be characterized by using generalized differentiation.
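In its simplest form (a sketch under assumed notation), the question is when the constrained problem min{ f(x) : x ∈ C } and the penalized problem

\[
\min_{x} \; f(x) + \rho\, d_C(x), \qquad d_C(x) := \inf_{c \in C} \|x - c\|,
\]

share the same local and global solutions for some finite penalty parameter ρ > 0; the global and local error bound conditions mentioned above are the main tools used to guarantee this exact equivalence.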
In this paper we study nonlinear programming problems with equality, inequality, and abstract constraints where some of the functions are Fréchet differentiable at the optimal solution, some of the functions are Lipschitz near the optimal solution, and the abstract constraint set may be nonconvex. We derive Fritz John type and Karush--Kuhn--Tucker (KKT) type first order necessary optimality conditions for the above problem where Fréchet derivatives are used for the differentiable functions and subdifferentials are used for the Lipschitz continuous functions. Constraint qualifications for the KKT type first order necessary optimality conditions to hold include the generalized Mangasarian--Fromovitz constraint qualification, the no nonzero abnormal multiplier constraint qualification, the metric regularity of the constraint region, and the calmness constraint qualification.
In this paper, we study necessary optimality conditions for nonsmooth mathematical programs with equilibrium constraints. We first show that, unlike the smooth case, the Mathematical Program with Equilibrium Constraints Linear Independence Constraint Qualification is not a constraint qualification for the strong stationary condition when the objective function is nonsmooth. We then focus on the study of the enhanced version of the Mordukhovich stationary condition, which is a weaker optimality condition than the strong stationary condition. We introduce several new constraint qualifications and show that the enhanced Mordukhovich stationary condition holds under them. Finally, we prove that quasi-normality with subdifferential regularity implies the existence of a local error bound.
In our paper [SIAM J. Control Optim., 40 (2001), pp. 699--723], due to an error in the proof, an additional assumption is needed for the conclusion of Theorem 3.6 to hold. As a consequence, Theorem 4.2 does not hold, each of Theorems 4.4, 4.8, 4.10 and 4.11 requires an additional assumption, and the last two lines on page 701 and the first two lines on page 702 should be changed accordingly. In this erratum, we first correct Theorem 3.6 by adding the additional assumption (0.1), restate and prove it, and correct the other related mistakes accordingly.
In this paper we consider a mathematical program with equilibrium constraints (MPEC) formulated as a mathematical program with complementarity constraints. Various stationary conditions for MPECs exist in the literature due to different reformulations. We give a simple proof of the M-stationary condition and show that it is sufficient for global or local optimality under some MPEC generalized convexity assumptions. Moreover we propose new constraint qualifications for M-stationary conditions to hold. These new constraint qualifications include piecewise MFCQ, the piecewise Slater condition, the MPEC weak reverse convex constraint qualification, the MPEC Arrow-Hurwicz-Uzawa constraint qualification, the MPEC Zangwill constraint qualification, the MPEC Kuhn-Tucker constraint qualification and the MPEC Abadie constraint qualification. Key words: mathematical program with equilibrium constraints, necessary optimality conditions, sufficient optimality conditions, constraint qualifications. AMS subject classifica...
For a lower semicontinuous (l.s.c.) inequality system on a Banach space, it is shown that error bounds hold, provided every element in an abstract subdifferential of the constraint function at each point outside the solution set is norm bounded away from zero. A sufficient condition for a global error bound to exist is also given for an l.s.c. inequality system on a real normed linear space. It turns out that a global error bound is closely related to metric regularity, which is useful for presenting sufficient conditions for an l.s.c. system to be regular at sets. Under the generalized Slater condition, a continuous convex system on R^n is proved to be metrically regular at bounded sets.
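Concretely, for the solution set S := { x : f(x) ≤ 0 } of an l.s.c. inequality system, an error bound is an estimate of the form

\[
d_S(x) \;\le\; \tau\, [f(x)]_+, \qquad [f(x)]_+ := \max\{ f(x), 0 \},
\]

valid near S (locally) or on the whole space (globally); the sufficient condition described above requires ||ξ|| ≥ 1/τ for every ξ in the abstract subdifferential ∂f(x) at every point x with f(x) > 0.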
We investigate sample average approximation of a general class of one-stage stochastic mathematical programs with equilibrium constraints. By using graphical convergence of unbounded set-valued mappings, we demonstrate almost sure convergence of a sequence of stationary points of sample average approximation problems to their true counterparts as the sample size increases. In particular, we show the convergence of M- (Mordukhovich) and C- (Clarke) stationary points of the sample average approximation problem to those of the true problem. The research complements the existing work in the literature by considering a general equilibrium constraint represented by a stochastic generalized equation and by exploiting graphical convergence of coderivative mappings.
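The sample average approximation idea itself is elementary and can be illustrated on a toy smooth problem (a hypothetical example, far simpler than the stochastic MPECs treated here): the expectation is replaced by an average over N i.i.d. samples, and minimizers of the approximations approach those of the true problem as N grows.

```python
import numpy as np

# Toy illustration of sample average approximation (SAA): replace
# E[phi(x, xi)] by the average over N i.i.d. samples of xi.
rng = np.random.default_rng(0)

def phi(x, xi):
    return (x - xi) ** 2          # hypothetical integrand; true minimizer is E[xi]

def saa_objective(x, samples):
    return np.mean(phi(x, samples))

xs = np.linspace(-2.0, 4.0, 2001)
for N in (10, 100, 10000):
    samples = rng.normal(loc=1.0, scale=1.0, size=N)
    x_saa = xs[np.argmin([saa_objective(x, samples) for x in xs])]
    print(N, round(x_saa, 3))     # approaches the true minimizer x* = 1 as N grows
```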
In this paper, we present difference of convex algorithms for solving bilevel programs in which the upper level objective functions are difference of convex functions, and the lower level programs are fully convex. This nontrivial class of bilevel programs provides a powerful modelling framework for dealing with applications arising from hyperparameter selection in machine learning. Thanks to the full convexity of the lower level program, the value function of the lower level program turns out to be convex and hence the bilevel program can be reformulated as a difference of convex bilevel program. We propose two algorithms for solving the reformulated difference of convex program and show their convergence under very mild assumptions. Finally, we conduct numerical experiments on a bilevel model of support vector machine classification. This paper is dedicated to the memory of Olvi L. Mangasarian. The research of the first author was partially supported by NSERC. The second author was...
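In outline (a sketch under assumed notation, with upper-level objective F = F_1 - F_2 for convex F_1, F_2): full convexity of the lower-level problem makes its value function V(x) := min_y { f(x,y) : g(x,y) ≤ 0 } convex, so the penalized value function reformulation

\[
\min_{x,y} \; \big(F_1(x,y) + \rho\, f(x,y)\big) \;-\; \big(F_2(x,y) + \rho\, V(x)\big)
\quad \text{s.t.} \quad g(x,y) \le 0
\]

minimizes a difference of two convex functions over a convex set, to which DC-type algorithms can be applied.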
In this paper, we propose a combined approach with second-order optimality conditions of the lower level problem to study constraint qualifications and optimality conditions for bilevel programming problems. The new method is inspired by the combined approach developed by Ye and Zhu in 2010, where the authors combined the classical first-order and the value function approaches to derive new necessary optimality conditions under weaker conditions. In our approach, we add the second-order optimality condition to the combined program as a new constraint. We show that when all known approaches fail, adding the second-order optimality condition as a constraint makes the corresponding partial calmness condition easier to hold. We also discuss optimality conditions and the advantages and disadvantages of the combined approaches with first-order and second-order information.
The partial calmness condition for the bilevel programming problem (BLPP) is an important condition which ensures that a local optimal solution of the BLPP is a local optimal solution of a partially penalized problem in which the lower level optimality constraint is moved to the objective function, so that a weaker constraint qualification can be applied. In this paper we propose a sufficient condition, in the form of a partial error bound condition, which guarantees the partial calmness condition. We analyse the partial calmness for the combined program based on the Bouligand (B-) and the Fritz John (FJ) stationary conditions from a generic point of view. Our main result states that the partial error bound condition for the combined programs based on B and FJ conditions is generic for an important setting with applications in economics, and hence the partial calmness for the combined program is not a particularly stringent assumption. Moreover we derive optimality conditions for the combined program...
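With V(x) denoting the lower-level value function, the partially penalized problem referred to above has the form

\[
\min_{x,y} \; F(x,y) + \mu\big(f(x,y) - V(x)\big) \quad \text{s.t.} \quad g(x,y) \le 0,
\]

and partial calmness at a local solution (x̄, ȳ) of the bilevel program amounts to (x̄, ȳ) remaining a local solution of this penalized problem for some finite μ > 0.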
    In this paper we study second-order optimality conditions for non-convex set-constrained optimization problems. For a convex set-constrained optimization problem, it is well-known that second-order optimality conditions involve the support function of the second-order tangent set. In this paper we propose two approaches for establishing second-order optimality conditions for the non-convex case. In the first approach we extend the concept of the support function so that it is applicable to general non-convex set-constrained problems, whereas in the second approach we introduce the notion of the directional regular tangent cone and apply classical results of convex duality theory. Besides the second-order optimality conditions, the novelty of our approach lies in the systematic introduction and use, respectively, of directional versions of well-known concepts from variational analysis.
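For reference, in the convex case the condition alluded to takes, in one common form, the shape: at a local minimizer x̄ of min{ f(x) : x ∈ C } and for every critical direction d,

\[
\nabla^2 f(\bar x)[d,d] \;-\; \sigma\big(-\nabla f(\bar x) \,\big|\, T_C^2(\bar x, d)\big) \;\ge\; 0,
\]

where T_C^2(x̄, d) is the second-order tangent set to C at x̄ in direction d and σ(· | A) is the support function of the set A.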
This paper studies the optimal bidding decision for a strategic wind power producer participating in a day-ahead market that employs stochastic market clearing and energy and reserve co-optimization. The proposed procedure to derive strategic offers relies on a stochastic bilevel model: the upper level problem represents the profit maximization of the strategic wind power producer, while the lower level one represents the market clearing and the corresponding price formulation aiming to co-optimize both energy and reserve. Using the Karush–Kuhn–Tucker optimality conditions for the lower level problem, this stochastic bilevel model is reformulated as a stochastic mathematical program with equilibrium constraints and solved using a suitable relaxation scheme. The effectiveness of the proposed method is demonstrated by two illustrative case studies.
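Schematically (notation assumed here), replacing the lower-level market-clearing problem min_y { f(x,y) : g(x,y) ≤ 0 }, convex in y, by its KKT system yields the single-level MPEC

\[
\min_{x,y,\lambda} \; F(x,y) \quad \text{s.t.} \quad \nabla_y f(x,y) + \nabla_y g(x,y)^{\top} \lambda = 0, \qquad 0 \le \lambda \;\perp\; -g(x,y) \ge 0,
\]

whose complementarity constraints are then handled by the relaxation scheme mentioned above.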
