Feasible Directed Method of Constrained Nonlinear Optimization: Numerical Examples

DOI: 10.17577/IJERTV3IS090371


Wellington Manurung, Soesanto and Fadhilah Juli Yanti H.

Graduate School of Mathematics, University of Sumatera Utara, Medan, Indonesia

Abstract: The aim of this paper is to develop a new feasible direction method for constrained nonlinear optimization. The algorithm is based on enhancements to the determination of the search direction and to the initial total number of line-search steps. The Newton and Kuhn-Tucker methods are the most popular methods for handling certain linearly structured constraints of a nonlinear problem, although their rate of convergence is not effective. This paper improves the Newton and Kuhn-Tucker methods by extending the algorithms with conjugate directions. The method applies conjugation with respect to the last two directions and can be applied to the single-user traffic problem with lower cost and to the stochastic transportation problem.

Keywords: feasible direction method, nonlinear optimization, Kuhn-Tucker method, stochastic problem.

  1. INTRODUCTION

Generally, the aim of an optimization problem is to choose the best alternative according to the available criteria, such as maximizing a company's profit or the rate of change of business volume, or minimizing the cost of production. Mathematically, the optimization problem can be stated as follows:

$$\min_{x \in X} f(x)$$

where $x$ is a vector in $\mathbb{R}^n$, $f(x)$ is the objective function, and $X \subseteq \mathbb{R}^n$ is the feasible set defined by the constraints (see [1], [4], [5]). Optimization models are often formulated as linear programs in which all parameters are assumed deterministic, but in some cases a linear relationship cannot be used and does not describe the problem exactly. The effective and feasible methods for solving such problems are nonlinear programming algorithms. Nonlinear programming has a very broad scope, and its applications have undergone major developments in recent decades. A variety of models have been developed to solve nonlinear optimization programs, for example in traffic, telecommunications, the chemical industry, large-scale structural design optimization, applications in economics, marketing and business, and scientific applications such as biology, chemistry, physics, mechanics, and protein structure prediction. The traffic network problem [2] is a nonlinear model that explains how each traveler minimizes his or her own travel expenses to achieve the desired goal. Modeling the travel time, the congestion, and the variation in the number of travelers over time introduces constraints that turn the problem into a nonlinear one [6]. Such problems are characterized by the presence of nonlinear functions in the objective or the constraints; typical nonlinear forms are $x^2$, $1/x$, $e^x$, $\sin(x)$, $\tan(x)$, and so on.

Nonlinearities can also be caused by the interaction between two or more variables [3]. A constraint in a nonlinear program can be represented in equality or inequality form, and an optimization problem with a continuously differentiable objective can be reduced to the equality-constrained form. This paper uses the extended Lagrange procedure to solve the optimization problem. We focus on determining the optimum solution of a constrained nonlinear program by combining and modifying the two previous methods studied by [3] and [1]. This paper unfolds as follows. Section 2 reviews background related to the constrained nonlinear problem. Section 3 presents our model together with computational results on given data. Finally, conclusions and future research are given in the last section.

  2. NONLINEAR PROGRAM WITH EQUALITY CONSTRAINTS

    1. Nonlinear optimization with equality constraints

      Assume that the maximization problem of a continuously differentiable function can be written as $y_0 = f(x_1, x_2, \dots, x_n)$ with constraint $g(x_1, x_2, \dots, x_n) = b$, where $g(X)$ is also continuously differentiable. Under this assumption, the variable $x_n$ can be solved from the constraint, so $x_n = H(x_1, x_2, \dots, x_{n-1})$. Substituting into the objective function, we can write

$$y_0 = f(x_1, x_2, \dots, x_{n-1}, H(x_1, x_2, \dots, x_{n-1}))$$

      Based on this form, the general method for unconstrained functions can be applied to the model. The necessary condition at an extreme point can be written as follows:

$$\frac{\partial y_0}{\partial x_j} = \frac{\partial f}{\partial x_j} + \frac{\partial f}{\partial x_n}\cdot\frac{\partial H}{\partial x_j} = 0, \quad j = 1, 2, \dots, n-1 \qquad (1.1)$$

      From $g(x_1, x_2, \dots, x_n) = b$,

$$\frac{\partial g}{\partial x_j} + \frac{\partial g}{\partial x_n}\cdot\frac{\partial H}{\partial x_j} = 0, \quad j = 1, 2, \dots, n-1 \qquad (1.2)$$

      and therefore

$$\frac{\partial H}{\partial x_j} = -\frac{\partial g}{\partial x_j} \Big/ \frac{\partial g}{\partial x_n}, \quad j = 1, 2, \dots, n-1 \qquad (1.3)$$

      Thus, (1.1) can be written as

$$\frac{\partial y_0}{\partial x_j} = \frac{\partial f}{\partial x_j} - \frac{\partial f}{\partial x_n}\cdot\frac{\partial g}{\partial x_j}\Big/\frac{\partial g}{\partial x_n} = 0, \quad j = 1, 2, \dots, n-1 \qquad (1.4)$$

      If $x_1^*, x_2^*, \dots, x_n^*$ is the maximum solution of the model, then substituting

$$\lambda = \frac{\partial f}{\partial x_n} \Big/ \frac{\partial g}{\partial x_n}$$

      gives

$$\frac{\partial f}{\partial x_j} - \lambda\,\frac{\partial g}{\partial x_j} = 0, \quad j = 1, 2, \dots, n \qquad (1.5)$$

      with the assumption $g(x_1, x_2, \dots, x_n) = b$. We now have $n + 1$ unknown variables. For an objective function $f(x_1, x_2, \dots, x_n)$ with constraint $g(x_1, x_2, \dots, x_n) = b$, the Lagrange function is

$$L = f(x_1, x_2, \dots, x_n) - \lambda\,[g(x_1, x_2, \dots, x_n) - b] \qquad (1.6)$$

      and the necessary conditions for a stationary point are

$$\frac{\partial L}{\partial x_j} = \frac{\partial f}{\partial x_j} - \lambda\,\frac{\partial g}{\partial x_j} = 0, \quad j = 1, 2, \dots, n \qquad (1.7)$$

$$\frac{\partial L}{\partial \lambda} = -[g(x_1, x_2, \dots, x_n) - b] = 0 \qquad (1.8)$$

    2. Nonlinear optimization with inequality constraints

      Kuhn and Tucker successfully extended this theory to solve the general nonlinear program with both equality and inequality constraints. The necessary condition of the Kuhn-Tucker method discussed in this paper identifies the stationary point of a nonlinear problem with inequality constraints. These inequality constraints can be converted into equalities by adding nonnegative slack variables. Assume that $S_i^2\ (\geq 0)$ denotes the slack quantity added to constraint $i$, $g_i(x) \leq 0$. Writing $S = (s_1, s_2, \dots, s_m)$ and $S^2 = (s_1^2, s_2^2, \dots, s_m^2)$, where $m$ is the total number of constraints of the model, the Lagrange function can be written as

$$L(X, S, \lambda) = f(X) - \lambda\,[g(X) + S^2] \quad \text{with } g(x) \leq 0 \qquad (1.9)$$
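To make the stationarity conditions (1.6)-(1.8) concrete, the following minimal sketch solves a small equality-constrained problem symbolically. The toy problem (maximize $f = x_1 x_2$ subject to $x_1 + x_2 = 10$) and the use of SymPy are our own illustration, not taken from the paper.

```python
# Minimal sketch: solve the stationarity system (1.7)-(1.8) symbolically.
# Toy problem (our own): max f = x1*x2  s.t.  x1 + x2 = 10.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = x1 * x2                      # objective f(x1, x2)
g, b = x1 + x2, 10               # constraint g(x) = b

L = f - lam * (g - b)            # Lagrange function, cf. (1.6)
stationary = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)],
                      (x1, x2, lam), dict=True)
print(stationary)                # [{lam: 5, x1: 5, x2: 5}]
```

The same pattern extends to the inequality case (1.9) by introducing slack symbols $s_i$ and solving $\partial L/\partial s_i = -2\lambda s_i = 0$ together with the other conditions.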

  3. MODELS AND NUMERICAL EXAMPLES

    This problem is based on the study by [1], which uses the following seven steps to determine the best solution.

    Step 1. Choose a feasible starting solution B of the problem.

    Step 2. Introduce the (as yet unknown) variables $d_1, d_2, \dots, d_{n+1}$.

    Step 3. The objective of the model is to maximize $d_{n+1}$.

    Step 4. Denote all the constraints of the model as follows:

$$\frac{\partial f}{\partial X_1}\Big|_B d_1 + \frac{\partial f}{\partial X_2}\Big|_B d_2 + \dots + \frac{\partial f}{\partial X_n}\Big|_B d_n - d_{n+1} \geq 0 \qquad (1.10)$$

$$\frac{\partial g_1}{\partial X_1}\Big|_B d_1 + \frac{\partial g_1}{\partial X_2}\Big|_B d_2 + \dots + \frac{\partial g_1}{\partial X_n}\Big|_B d_n + k_1 d_{n+1} \leq -g_1(B) \qquad (1.11)$$

$$\frac{\partial g_2}{\partial X_1}\Big|_B d_1 + \frac{\partial g_2}{\partial X_2}\Big|_B d_2 + \dots + \frac{\partial g_2}{\partial X_n}\Big|_B d_n + k_2 d_{n+1} \leq -g_2(B) \qquad (1.12)$$

$$\vdots$$

$$\frac{\partial g_p}{\partial X_1}\Big|_B d_1 + \frac{\partial g_p}{\partial X_2}\Big|_B d_2 + \dots + \frac{\partial g_p}{\partial X_n}\Big|_B d_n + k_p d_{n+1} \leq -g_p(B) \qquad (1.13)$$

    where $k_i$ $(i = 1, 2, \dots, p)$ is 0 if $g_i(x)$ is linear and 1 otherwise.

    Step 5. If $d_{n+1} = 0$, then $X^* = B$ and the algorithm stops. Otherwise, continue to Step 6.

    Step 6. Choose $D = (d_1, d_2, \dots, d_n)^T$ and determine a nonnegative $\sigma^*$ that maximizes $f(B + \sigma D)$ such that $B + \sigma D$ remains feasible.

    Step 7. Set $B = B + \sigma^* D$ and go back to Step 2.

    A small sketch of the direction-finding program of Steps 2-4 is given below.
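As an illustration of Steps 2-4, the sketch below assembles the linear program (1.10)-(1.13) at the starting point $B = (1, 1)$ of the numerical example introduced in the next paragraphs, using scipy.optimize.linprog. The box bounds $|d_i| \leq 1$ are our own normalization (the paper does not state one), so the computed direction need not coincide with the entries of Table 1.

```python
# Sketch (our construction): direction-finding LP (1.10)-(1.13) at B = (1, 1)
# for the example  max x1 + x2  s.t.  g1 = x1*x2 - 2*x2 - 3 <= 0,
#                                     g2 = 3*x1 + 2*x2 - 24 <= 0,  x >= 0.
import numpy as np
from scipy.optimize import linprog

B = np.array([1.0, 1.0])
grad_f = np.array([1.0, 1.0])                        # gradient of f = x1 + x2
g_B = np.array([B[0]*B[1] - 2*B[1] - 3,              # g1(B) = -4
                3*B[0] + 2*B[1] - 24])               # g2(B) = -19
grad_g = np.array([[B[1], B[0] - 2],                 # grad g1 = (x2, x1 - 2)
                   [3.0, 2.0]])                      # grad g2
k = np.array([1.0, 0.0])                             # k_i = 1 iff g_i nonlinear
# (g3, g4: the nonnegativity constraints are slack at B and are covered
#  here by the box bounds on d1, d2.)

# Variables (d1, d2, d_{n+1}); maximize d_{n+1}, i.e. minimize -d_{n+1}.
c = np.array([0.0, 0.0, -1.0])
A_ub = np.vstack([np.append(-grad_f, 1.0),           # (1.10) rewritten as <=
                  np.hstack([grad_g, k[:, None]])])  # (1.11)-(1.12)
b_ub = np.concatenate([[0.0], -g_B])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-1, 1), (-1, 1), (0, None)])
print(res.x)   # an improving feasible direction (d1, d2) and its d_{n+1}
```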

    For the minimum program $\min_{x \in X} f(x)$, a direction $D\ (\neq 0)$ is feasible at $x^* \in X$ if there exists $\tau > 0$ such that $x^* + \sigma D \in X$ for all $\sigma \in [0, \tau)$, as we can see in Fig. 1 and Fig. 2 below.

    [Figure] Fig. 1. Feasible direction

    [Figure] Fig. 2. Not feasible direction

    In standard form, a nonlinear program that contains inequality constraints can be written as in the following example:

$$\max\ Z = x_1 + x_2$$
$$\text{s.t.}\quad x_1 x_2 - 2x_2 - 3 \leq 0, \qquad 3x_1 + 2x_2 - 24 \leq 0, \qquad x_1 \geq 0, \qquad x_2 \geq 0$$

    Here we have $f(X) = x_1 + x_2$, $g_1(X) = x_1 x_2 - 2x_2 - 3$, $g_2(X) = 3x_1 + 2x_2 - 24$, $g_3(X) = -x_1$, and $g_4(X) = -x_2$, with partial derivatives

$$\frac{\partial f}{\partial x_1} = 1,\quad \frac{\partial g_1}{\partial x_1} = x_2,\quad \frac{\partial g_2}{\partial x_1} = 3,\quad \frac{\partial g_3}{\partial x_1} = -1,\quad \frac{\partial g_4}{\partial x_1} = 0$$

$$\frac{\partial f}{\partial x_2} = 1,\quad \frac{\partial g_1}{\partial x_2} = x_1 - 2,\quad \frac{\partial g_2}{\partial x_2} = 2,\quad \frac{\partial g_3}{\partial x_2} = 0,\quad \frac{\partial g_4}{\partial x_2} = -1$$
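As a check on Step 6 for this example, the sketch below (our own code; the paper does not specify its line-search procedure) computes the largest feasible step from $B = (1, 1)$ along $D = (1, 0)$. Since $f = x_1 + x_2$ increases along this direction, the largest feasible step is also the maximizing one; the result reproduces $\sigma^* = 4$ in the first row of Table 1 below.

```python
# Sketch (ours): Step 6 line search from B = (1,1) along D = (1,0).
# Each g_i is monotone in s along this ray, so the feasible steps form an
# interval [0, s*] and bisection on feasibility finds its endpoint.
import numpy as np

def g(x):
    x1, x2 = x
    return np.array([x1*x2 - 2*x2 - 3,      # g1
                     3*x1 + 2*x2 - 24,      # g2
                     -x1, -x2])             # g3, g4 (nonnegativity)

def max_feasible_step(B, D, s_max=100.0, tol=1e-9):
    """Largest s in [0, s_max] with B + s*D feasible (all g <= 0)."""
    lo, hi = 0.0, s_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.all(g(B + mid * D) <= 0):
            lo = mid
        else:
            hi = mid
    return lo

B, D = np.array([1.0, 1.0]), np.array([1.0, 0.0])
print(round(max_feasible_step(B, D), 3))   # 4.0, matching row 1 of Table 1
```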

    Applying the seven steps from the feasible starting point $B = (1, 1)$, we get the sequence of feasible solutions shown in Table 1 (here $d_3$ denotes $d_{n+1}$, since $n = 2$).

    Table 1. Iterations of the feasible direction method for the example

      X1        X2        d1       d2       d3       sigma*
      1         1         1        0        1        4
      5         1         1        -1/2     1/2      2
      7         0         1        0        1        1
      8         0         -2/3     1        1/3      0.531
      7.645     0.531     0        0        0        -

    Thus, from the result we determine the optimum solution $x_1^* = 7.645$ and $x_2^* = 0.531$, and the objective value is $z^* = f(x_1^*, x_2^*) = 7.645 + 0.531 = 8.176$.
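As a numerical check (ours, not the paper's), both $g_1$ and $g_2$ are active at the reported optimum, so $x^*$ solves $x_1 x_2 - 2x_2 - 3 = 0$ and $3x_1 + 2x_2 - 24 = 0$; eliminating $x_2 = 12 - 1.5 x_1$ gives $x_1^2 - 10x_1 + 18 = 0$, whose larger root confirms the tabulated solution.

```python
# Check (ours): solve the two active constraints in closed form.
# Substituting x2 = 12 - 1.5*x1 into x1*x2 - 2*x2 - 3 = 0 yields
# x1**2 - 10*x1 + 18 = 0.
import math

x1 = (10 + math.sqrt(100 - 4 * 18)) / 2    # larger root
x2 = 12 - 1.5 * x1
print(round(x1, 3), round(x2, 3), round(x1 + x2, 3))
# 7.646 0.531 8.177  (Table 1 rounds x1 down to 7.645)
```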

  4. CONCLUSIONS

A nonlinear optimization program with equality and inequality constraints can be solved by combining the Newton-Raphson method with the Kuhn-Tucker conditions, and both can be carried out using the principle of the Lagrange method. Determining the optimum value of a nonlinear program with inequality constraints cannot be done without the Kuhn-Tucker requirements; however, since this necessary condition requires converting the inequality constraints back into equality constraints, determining the optimum solution in this way becomes less effective.

REFERENCES

    1. Mitradjieva, M. Feasible direction methods for constrained nonlinear optimization, Dissertation (2007), Linköping University.

    2. Lamberti, L. and Pappalettere, C. Move limits definition in structural optimization with sequential linear programming. Part II: Numerical examples, Computers & Structures, Vol. 81 (2003), pp. 215-238.

    3. Landi, G. A feasible direction method for nonlinear constrained optimization, Ann. Univ. Ferrara, Vol. 47 (2002), pp. 49-73.

    4. Heizer, J. and Render, B. Production and Operations Research, Prentice-Hall (1996), New Jersey.

    5. Hillier, F.S. and Lieberman, G.J. Introduction to Operations Research, McGraw-Hill (2001), New York.

    6. Miguel, A. and Herskovits, J. A feasible direction interior point algorithm for nonlinear semidefinite programming, International Journal of Computer Science Issues, Vol. 10 (2012), pp. 373-379.
