Note for Computational Method - CM by Abhishek Apoorv

Contents

1 Introduction to Linear Programming
  1.1 What is a linear program?
  1.2 Assumptions
  1.3 Manipulating a Linear Programming Problem
  1.4 The Linear Algebra of Linear Programming
  1.5 Convex Sets and Directions
2 Examples from Bazaraa et al. and Winston
  2.1 Examples
  2.2 Discussion
3 The Theory Behind Linear Programming
  3.1 Definitions
  3.2 The General Representation Theorem
4 An Outline of the Proof
5 Examples With Convex Sets and Extreme Points From Bazaraa et al.
6 Tools for Solving Linear Programs
  6.1 Important Precursors to the Simplex Method
7 The Simplex Method In Practice
8 What if there is no initial basis in the Simplex tableau?
  8.1 The Two-Phase Method
  8.2 The Big-M Method
9 Cycling
  9.1 The Lexicographic Method
  9.2 Bland’s Rule
  9.3 Theorem from [2]
  9.4 Which Rule to Use?
10 Sensitivity Analysis
  10.1 An Example
    10.1.1 Sensitivity Analysis for a cost coefficient
    10.1.2 Sensitivity Analysis for a right-hand-side value
11 Case Study: Busing Children to School
  11.1 The Problem
  11.2 The Solution
    11.2.1 Variables
    11.2.2 Objective Function
    11.2.3 Constraints
  11.3 The Complete Program
  11.4 Road Construction and Portables
    11.4.1 Construction
    11.4.2 Portable Classrooms
  11.5 Keeping Neighborhoods Together
  11.6 The Case Revisited
    11.6.1 Shadow Prices
    11.6.2 The New Result
12 Conclusion

1 Introduction to Linear Programming

Linear programming was developed during World War II, when a system with which to maximize the efficiency of resources was of utmost importance. New war-related projects demanded attention and spread resources thin. “Programming” was a military term that referred to activities such as planning schedules efficiently or deploying men optimally. George Dantzig, a member of the U.S. Air Force, developed the Simplex method of optimization in 1947 in order to provide an efficient algorithm for solving programming problems that had linear structures. Since then, experts from a variety of fields, especially mathematics and economics, have developed the theory behind “linear programming” and explored its applications [1].

This paper will cover the main concepts in linear programming, including examples when appropriate. First, in Section 1 we will explore simple properties, basic definitions, and theories of linear programs. In order to illustrate some applications of linear programming, we will explain simplified “real-world” examples in Section 2. Section 3 presents more definitions, concluding with the statement of the General Representation Theorem (GRT). In Section 4, we explore an outline of the proof of the GRT, and in Section 5 we work through a few examples related to the GRT. After learning the theory behind linear programs, we will focus on methods of solving them. Section 6 introduces concepts necessary for introducing the Simplex algorithm, which we explain in Section 7. In Section 8, we explore the Simplex method further and learn how to deal with the absence of an initial basis in the Simplex tableau. Next, Section 9 discusses cycling in Simplex tableaux and ways to counter this phenomenon. We present an overview of sensitivity analysis in Section 10. Finally, we put all of these concepts together in an extensive case study in Section 11.

1.1 What is a linear program?

We can reduce the structure that characterizes linear programming problems (perhaps after several manipulations) to the following form:

Minimize    c1x1  + c2x2  + ... + cnxn  = z
Subject to  a11x1 + a12x2 + ... + a1nxn = b1
            a21x1 + a22x2 + ... + a2nxn = b2
              ...                         ...
            am1x1 + am2x2 + ... + amnxn = bm
            x1, x2, ..., xn ≥ 0.

In linear programming z, the expression being optimized, is called the objective function. The variables x1, x2, ..., xn are called decision variables, and their values are subject to m + 1 constraints (every line ending with a bi, plus the nonnegativity constraint). A set of x1, x2, ..., xn satisfying all the constraints is called a feasible point, and the set of all such points is called the feasible region. The solution of the linear program must be a point (x1, x2, ..., xn) in the feasible region, or else not all the constraints would be satisfied.

The following example from Chapter 3 of Winston [3] illustrates that geometrically interpreting the feasible region is a useful tool for solving linear programming problems with two decision variables. The linear program is:

Minimize    4x1 + x2 = z
Subject to  3x1 + x2 ≥ 10
             x1 + x2 ≥ 5
             x1      ≥ 3
             x1, x2  ≥ 0.

We plotted the system of inequalities as the shaded region in Figure 1. Since all of the constraints are “greater than or equal to” constraints, the shaded region above all three lines is the feasible region. The solution to this linear program must lie within the shaded region. Recall that the solution is a point (x1, x2) such that the value of z is the smallest it can be, while still lying in the feasible region. Since z = 4x1 + x2, plotting the line x1 = (z − x2)/4 for various values of z results in isocost lines, which have the same slope. Along these lines, the value of z is constant. In Figure 1, the dotted lines represent isocost lines for different values of z. Since isocost lines are parallel to each other, the thick dotted isocost line for which z = 14 is clearly the line that intersects the feasible region at the smallest possible value for z.
Therefore, z = 14 is the smallest possible value of z given the constraints.
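The graphical argument above can also be checked numerically. The following Python sketch (not part of the original notes; the constraint list and helper names are our own) enumerates the intersection points of the constraint boundary lines, keeps the feasible ones, and picks the vertex with the smallest cost, exploiting the fact that a bounded linear program attains its optimum at a vertex of the feasible region:

```python
from itertools import combinations

# Each constraint is a1*x1 + a2*x2 >= b, stored as (a1, a2, b);
# the nonnegativity constraints are included explicitly.
constraints = [
    (3.0, 1.0, 10.0),  # 3x1 +  x2 >= 10
    (1.0, 1.0, 5.0),   #  x1 +  x2 >=  5
    (1.0, 0.0, 3.0),   #  x1       >=  3
    (1.0, 0.0, 0.0),   #  x1       >=  0
    (0.0, 1.0, 0.0),   #        x2 >=  0
]

def objective(x1, x2):
    return 4.0 * x1 + x2  # z = 4x1 + x2

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    a, b, e = c1
    c, d, f = c2
    det = a * d - b * c
    if abs(det) < 1e-12:  # parallel boundary lines: no unique intersection
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(x1, x2):
    return all(a * x1 + b * x2 >= rhs - 1e-9 for a, b, rhs in constraints)

# Candidate vertices: feasible intersections of two boundary lines.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(*p)]
best = min(vertices, key=lambda p: objective(*p))
print(best, objective(*best))  # prints (3.0, 2.0) 14.0
```

The optimum lands at the vertex (x1, x2) = (3, 2) with z = 14, matching the isocost-line argument. This brute-force vertex enumeration only works for small two-variable examples; the Simplex method developed in later sections visits vertices far more efficiently.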
