
\[
\begin{aligned}
\min\ & 10x^2 + 15y^2 + \exp(x+y) \\
\text{s.t.}\ & x + y = 5
\end{aligned}
\tag{2.55}
\]

Its corresponding Lagrangian is presented below:

\[
\mathcal{L}(x, y, \lambda) = 10x^2 + 15y^2 + \exp(x+y) + \lambda(5 - x - y)
\tag{2.56}
\]

and the optimality conditions form the following set of algebraic equations:

\[
S(x, y, \lambda) = \left\{
\begin{aligned}
20x + \exp(x+y) - \lambda &= 0 \\
30y + \exp(x+y) - \lambda &= 0 \\
5 - x - y &= 0
\end{aligned}
\right.
\tag{2.57}
\]

      The corresponding Jacobian is the following matrix:

\[
\mathcal{J} = \begin{pmatrix}
20 + \exp(x+y) & \exp(x+y) & -1 \\
\exp(x+y) & 30 + \exp(x+y) & -1 \\
-1 & -1 & 0
\end{pmatrix}
\tag{2.58}
\]

      It is possible to formulate Newton’s method using the information described above. The algorithm implemented in Python is presented below:
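The book's original listing is not reproduced in this fragment; the following is a minimal sketch of Newton's method applied to the system $S$ in (2.57) with its Jacobian (2.58). The zero initial point and the iteration limit are assumptions.

```python
import numpy as np

def S(z):
    """Optimality conditions (2.57) evaluated at z = (x, y, lambda)."""
    x, y, lam = z
    e = np.exp(x + y)
    return np.array([20*x + e - lam,
                     30*y + e - lam,
                     5 - x - y])

def jacobian(z):
    """Jacobian (2.58) of S at z."""
    x, y, _ = z
    e = np.exp(x + y)
    return np.array([[20 + e,      e, -1.0],
                     [     e, 30 + e, -1.0],
                     [  -1.0,   -1.0,  0.0]])

z = np.zeros(3)                        # initial point (an assumption)
for _ in range(50):                    # iteration limit (an assumption)
    dz = np.linalg.solve(jacobian(z), -S(z))
    z = z + dz
    if np.linalg.norm(S(z)) < 1e-3:    # tolerance of 10**-3
        break

x, y, lam = z
print(x, y, lam)   # approximately 3.0, 2.0, 208.413
```

Each iteration solves the linear system $\mathcal{J}\,\Delta z = -S(z)$ and updates $z \leftarrow z + \Delta z$; for this problem the exact solution is $x = 3$, $y = 2$, $\lambda = 60 + e^5$.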

In this case, we used a tolerance of $10^{-3}$. The algorithm achieves convergence in a few iterations, as the reader can verify by running the code.

      2.8 Further readings

This chapter presented basic optimization methods for constrained and unconstrained problems. Conditions for convergence of these algorithms were not presented here. However, they are incorporated into modules and solvers called from a modeling/programming language such as Python. Our approach is to use these solvers and concentrate on studying the characteristics of the models. Readers interested in the details of the algorithms are invited to review [12] and [11] for a formal analysis of convergence; other variants of the algorithms can be found in [13]. Moreover, a complete review of Lagrange multipliers can be found in [14].

This chapter is also an excuse for presenting Python's features as a programming and modeling language. The reader can review Appendix C for more details about each of the commands used in this section.

      2.9 Exercises

1 Find the highest and lowest points of the set given by the intersection of the cylinder $x^2 + y^2 \leq 1$ with the plane $x + y + z = 1$, as shown in Figure 2.8.

Figure 2.8 Intersection of an affine space with a cylinder.

2 What are the new values of $z_{\max}$ and $z_{\min}$ if the radius of the cylinder increases by a small amount, that is, if the radius changes from $r = 1$ to $r = 1 + \Delta r$? (Consider the interpretation of the Lagrange multipliers.)

3 The following algebraic equation gives the mechanical power in a wind turbine:

\[
P = \frac{1}{2}\,\rho A C_p(\lambda) v^3
\tag{2.59}
\]

where $P$ is the power extracted from the wind; $\rho$ is the air density; $C_p$ is the performance coefficient or power coefficient; $\lambda$ is the tip-speed ratio; $v$ is the wind velocity; and $A$ is the area covered by the rotor (see [15] for details). Determine the value of $\lambda$ that produces maximum efficiency if the performance coefficient is given by Equation (2.60):

(2.60)

Use the gradient method, starting from $\lambda = 10$ and a step of $t = 0.1$. Hint: use the module SymPy to obtain the expression of the gradient.
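Since equation (2.60) is not reproduced in this fragment, the sketch below uses an illustrative placeholder for $C_p(\lambda)$ (an assumption, not the book's expression); the SymPy workflow it demonstrates — symbolic differentiation followed by `lambdify` for the numeric iteration — is the part that carries over to the actual exercise.

```python
import sympy as sp

lam = sp.Symbol('lam')
# Placeholder for Cp(lam): equation (2.60) is not available here, so an
# illustrative concave expression with its maximum at lam = 5 is used.
Cp = lam * sp.exp(-lam / 5)

dCp = sp.diff(Cp, lam)            # symbolic gradient of Cp
grad = sp.lambdify(lam, dCp)      # fast numeric evaluation of the gradient

x, t = 10.0, 0.1                  # starting point and step from the exercise
for _ in range(5000):
    g = grad(x)
    if abs(g) < 1e-8:             # stop when the gradient vanishes
        break
    x = x + t * g                 # gradient ascent (maximization)

print(x)   # approximately 5.0 for the placeholder Cp
```

For the real exercise, replace the placeholder `Cp` with the expression from (2.60); the differentiation and iteration code stays the same.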

4 Solve the following optimization problem using the gradient method:

(2.61)

Start from the point $(0, 0)$ and use a fixed step $t = 0.8$. Repeat the problem with a fixed step $t = 1.1$. Show a convergence plot.

5 Solve the following optimization problem using the gradient method:

(2.62)

where $\mathbf{1}_n$ is a column vector of size $n$ with all entries equal to 1; $b$ is a column vector such that $b_k = kn^2$; and $H$ is a symmetric matrix of size $n \times n$ constructed in the following way: $h_{km} = (m + k)/2$ if $k \neq m$ and $h_{km} = n^2 + n$ if $k = m$. Show the convergence of the method for different steps $t$, starting from the initial point $x = 0$. Use $n = 10$, $n = 100$, and $n = 1000$. All indices $k$ and $m$ start at zero.
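The data in Exercise 5 can be built vectorially with NumPy. Only the construction of $H$, $b$, and $\mathbf{1}_n$ is sketched here, since the objective (2.62) itself is not reproduced in this fragment; the reading $b_k = k \cdot n^2$ is an assumption about the flattened subscripts.

```python
import numpy as np

def build_problem(n):
    """Builds H, b, and 1_n as described in Exercise 5 (zero-based indices)."""
    k = np.arange(n)
    H = (k[:, None] + k[None, :]) / 2.0   # h_km = (m + k) / 2 for k != m
    np.fill_diagonal(H, n**2 + n)         # h_kk = n**2 + n on the diagonal
    b = (k * n**2).astype(float)          # assumed reading: b_k = k * n**2
    ones = np.ones(n)                     # the vector 1_n
    return H, b, ones

H, b, ones = build_problem(10)
print(H[0, 0], H[1, 2], b[3])   # 110.0 1.5 300.0
```

Building $H$ by broadcasting `k[:, None] + k[None, :]` avoids an explicit double loop, which matters for the $n = 1000$ case.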

      6 Show that Euclidean, Manhattan, and uniform norms fulfill the four conditions to be considered a norm.

      7 Consider a modified version of Example 2.6, where the position of the common point E must be such that xE = yE. Solve this optimization problem using Newton’s method.

8 Solve the problem of Item 4 with the following constraint (use Newton's method):

(2.63)

9 Solve the problem of Item 5 including the following constraint (use Newton's method):

(2.64)

10 Newton’s method can be used to solve unconstrained optimization problems. Solve the following problem using Newton’s method and compare the convergence rate and the solution with those of the gradient method.

(2.65)

      Notes

1 At this point, the only tool we have to check these results is plotting the function and locating the optimum.

2 Notice that P is defined on a line outside the function definition. Recall that $x^2$ is represented as x**2 in Python (see Appendix C).

3 A complete discussion about the calculation of t is beyond this book’s objectives. Interested readers can consult the work of Nesterov and Nemirovskii in [11] and [12].

4 Again, a classic method that became more important with the advent of the computer.

5 The Jacobian matrix of S is equivalent to the Hessian matrix of f.
