First used by the American mathematician GEORGE BERNARD DANTZIG (1914–2005), and widely adopted in logistical planning and the optimization of economic development planning, linear programming is a mathematical technique for maximizing (or minimizing) a linear objective, such as profit or cost, subject to known constraints on limited resources.
A carmaker, for example, would use it to determine the optimum production mix across a range of models, where each model has different raw material, labor, warehousing and sales requirements, all of which are limited in supply.
Source:
G B Dantzig, ‘Programming in a Linear Structure’, Comptroller, USAF (Washington, DC, February, 1948)
History
The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them,^{[1]} and after whom the method of Fourier–Motzkin elimination is named.
In 1939 a linear programming formulation of a problem that is equivalent to the general linear programming problem was given by the Soviet mathematician and economist Leonid Kantorovich, who also proposed a method for solving it.^{[2]} He developed it during World War II as a way to plan expenditures and returns so as to reduce costs for the army and increase losses imposed on the enemy.^{[citation needed]} Kantorovich’s work was initially neglected in the USSR.^{[3]} About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel Memorial Prize in Economics.^{[1]} In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear programs and gave a solution very similar to the later simplex method.^{[2]} (Hitchcock died in 1957; the Nobel Prize is not awarded posthumously.)
During 1946–1947, George B. Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force.^{[4]} In 1947, Dantzig also invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases.^{[4]} When Dantzig arranged a meeting with John von Neumann to discuss his simplex method, von Neumann immediately conjectured the theory of duality by realizing that the problem he had been working on in game theory was equivalent.^{[4]} Dantzig provided a formal proof in an unpublished report, “A Theorem on Linear Inequalities”, on January 5, 1948.^{[3]} Dantzig’s work was made available to the public in 1951. In the postwar years, many industries applied it in their daily planning.
Dantzig’s original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of possible solutions that must be checked.
The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979,^{[5]} but a larger theoretical and practical breakthrough in the field came in 1984, when Narendra Karmarkar introduced a new interior-point method for solving linear programming problems.^{[6]}
Uses
Linear programming is a widely used field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems.^{[3]} Certain special cases of linear programming, such as network flow problems and multi-commodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as subproblems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming was heavily used in the early formation of microeconomics, and it is currently utilized in company management for planning, production, transportation, technology and other issues. Although modern management issues are ever-changing, most companies would like to maximize profits and minimize costs with limited resources. Therefore, many issues can be characterized as linear programming problems.
Standard form
Standard form is the usual and most intuitive form of describing a linear programming problem. It consists of the following three parts:
 A linear function to be maximized
 e.g. f(x_{1}, x_{2}) = c_{1}x_{1} + c_{2}x_{2}
 Problem constraints of the following form
 e.g.
 a_{11}x_{1} + a_{12}x_{2} ≤ b_{1}
 a_{21}x_{1} + a_{22}x_{2} ≤ b_{2}
 a_{31}x_{1} + a_{32}x_{2} ≤ b_{3}
 Nonnegative variables
 e.g.
 x_{1} ≥ 0, x_{2} ≥ 0
The problem is usually expressed in matrix form, and then becomes:
 max { c^{T}x : Ax ≤ b and x ≥ 0 }
Other forms, such as minimization problems, problems with constraints on alternative forms, as well as problems involving negative variables can always be rewritten into an equivalent problem in standard form.
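As a concrete sketch of these reductions, the snippet below uses made-up coefficients (not taken from the text) to rewrite a minimization problem with a ≥ constraint and a free variable into standard form, and checks on one point that the two formulations agree:

```python
# Hypothetical problem:  minimize 2*x1 - x2  s.t.  x1 - x2 >= 1,  x1 >= 0,  x2 free.
# Standard-form rewrite: maximize -2*x1 + u - v  s.t.  -x1 + u - v <= -1,  x1, u, v >= 0,
# obtained by negating the objective, negating the >= row, and splitting
# the free variable as x2 = u - v with u, v >= 0.

def original(x1, x2):
    """Objective value and feasibility of the non-standard problem."""
    return 2 * x1 - x2, (x1 - x2 >= 1 and x1 >= 0)

def standard(x1, u, v):
    """Objective value and feasibility of the standard-form rewrite."""
    return -2 * x1 + u - v, (-x1 + u - v <= -1 and min(x1, u, v) >= 0)

# Any point of the original problem maps to a standard-form point whose
# objective is the negative of the original one.
x1, x2 = 3.0, -2.0
u, v = max(x2, 0.0), max(-x2, 0.0)     # split x2 = u - v
f_orig, ok_orig = original(x1, x2)
f_std, ok_std = standard(x1, u, v)
print(f_orig, f_std, ok_orig, ok_std)  # 8.0 -8.0 True True
```

The same three moves (negate the objective, flip inequality rows, split free variables) convert any such problem, so standard form loses no generality.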
Example
Suppose that a farmer has a piece of farm land, say L km^{2}, to be planted with either wheat or barley or some combination of the two. The farmer has a limited amount of fertilizer, F kilograms, and pesticide, P kilograms. Every square kilometer of wheat requires F_{1} kilograms of fertilizer and P_{1} kilograms of pesticide, while every square kilometer of barley requires F_{2} kilograms of fertilizer and P_{2} kilograms of pesticide. Let S_{1} be the selling price of wheat per square kilometer, and S_{2} be the selling price of barley. If we denote the area of land planted with wheat and barley by x_{1} and x_{2} respectively, then profit can be maximized by choosing optimal values for x_{1} and x_{2}. This problem can be expressed with the following linear programming problem in the standard form:
Maximize: S_{1}·x_{1} + S_{2}·x_{2}  (maximize the revenue – revenue is the “objective function”)
Subject to:
 x_{1} + x_{2} ≤ L  (limit on total area)
 F_{1}·x_{1} + F_{2}·x_{2} ≤ F  (limit on fertilizer)
 P_{1}·x_{1} + P_{2}·x_{2} ≤ P  (limit on pesticide)
 x_{1} ≥ 0, x_{2} ≥ 0  (cannot plant a negative area).
In matrix form this becomes:
 maximize \begin{bmatrix}S_{1}&S_{2}\end{bmatrix}\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}
 subject to \begin{bmatrix}1&1\\F_{1}&F_{2}\\P_{1}&P_{2}\end{bmatrix}\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}\leq \begin{bmatrix}L\\F\\P\end{bmatrix},\quad \begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}\geq \begin{bmatrix}0\\0\end{bmatrix}.
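With hypothetical numbers substituted for S, F, P and L (none appear in the text), the farmer's problem is small enough to solve by brute force: a bounded two-variable LP attains its optimum at a vertex of the feasible region, so it suffices to enumerate intersections of pairs of constraint boundaries. A minimal sketch:

```python
from itertools import combinations

def solve_2var_lp(c, A, b):
    """Maximize c[0]*x1 + c[1]*x2 subject to A x <= b and x >= 0."""
    # Fold x1 >= 0 and x2 >= 0 in as the rows -x1 <= 0 and -x2 <= 0.
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (p, q), (r, s) = rows[i], rows[j]
        det = p * s - q * r
        if abs(det) < 1e-12:
            continue                      # parallel boundaries, no vertex
        x1 = (rhs[i] * s - q * rhs[j]) / det
        x2 = (p * rhs[j] - rhs[i] * r) / det
        # Keep only intersections satisfying every constraint.
        if all(row[0] * x1 + row[1] * x2 <= t + 1e-9 for row, t in zip(rows, rhs)):
            val = c[0] * x1 + c[1] * x2
            if best is None or val > best[0]:
                best = (val, (x1, x2))
    return best                           # None if infeasible

# Assumed data: S1=5, S2=4; L=10; F1=3, F2=1, F=20; P1=2, P2=3, P=18.
revenue, (x1, x2) = solve_2var_lp(
    c=[5.0, 4.0],
    A=[[1.0, 1.0], [3.0, 1.0], [2.0, 3.0]],  # area, fertilizer, pesticide rows
    b=[10.0, 20.0, 18.0],
)
print(revenue, x1, x2)                    # 38.0 6.0 2.0
```

This is exactly the exhaustive search that the simplex method improves on: simplex walks from vertex to adjacent vertex, increasing the objective, instead of checking every intersection.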
Augmented form (slack form)
Linear programming problems can be converted into an augmented form in order to apply the common form of the simplex algorithm. This form introduces nonnegative slack variables to replace inequalities with equalities in the constraints. The problems can then be written in the following block matrix form:
 Maximize z:
 \begin{bmatrix}1&-\mathbf{c}^{T}&0\\0&\mathbf{A}&\mathbf{I}\end{bmatrix}\begin{bmatrix}z\\\mathbf{x}\\\mathbf{s}\end{bmatrix}=\begin{bmatrix}0\\\mathbf{b}\end{bmatrix}
 \mathbf{x}\geq 0,\ \mathbf{s}\geq 0
where s is the vector of newly introduced slack variables, x is the vector of decision variables, and z is the variable to be maximized.
Example
The example above is converted into the following augmented form:
Maximize: S_{1}·x_{1} + S_{2}·x_{2}  (objective function)
Subject to:
 x_{1} + x_{2} + x_{3} = L  (augmented constraint)
 F_{1}·x_{1} + F_{2}·x_{2} + x_{4} = F  (augmented constraint)
 P_{1}·x_{1} + P_{2}·x_{2} + x_{5} = P  (augmented constraint)
 x_{1}, x_{2}, x_{3}, x_{4}, x_{5} ≥ 0.
where x_{3}, x_{4}, x_{5} are (nonnegative) slack variables, representing in this example the unused area, the amount of unused fertilizer, and the amount of unused pesticide.
In matrix form this becomes:
 Maximize z:
 \begin{bmatrix}1&-S_{1}&-S_{2}&0&0&0\\0&1&1&1&0&0\\0&F_{1}&F_{2}&0&1&0\\0&P_{1}&P_{2}&0&0&1\end{bmatrix}\begin{bmatrix}z\\x_{1}\\x_{2}\\x_{3}\\x_{4}\\x_{5}\end{bmatrix}=\begin{bmatrix}0\\L\\F\\P\end{bmatrix},\quad \begin{bmatrix}x_{1}\\x_{2}\\x_{3}\\x_{4}\\x_{5}\end{bmatrix}\geq 0.
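The construction can be checked numerically. The sketch below (hypothetical farmer data, not from the text) assembles the block matrix so that its first row encodes z − S_{1}·x_{1} − S_{2}·x_{2} = 0, computes the slacks for a feasible plan, and verifies that the equality system holds:

```python
# Hypothetical farmer instance:
# maximize 5*x1 + 4*x2  s.t.  x1+x2 <= 10, 3*x1+x2 <= 20, 2*x1+3*x2 <= 18, x >= 0.
S = [5.0, 4.0]                            # objective coefficients (S1, S2)
A = [[1.0, 1.0], [3.0, 1.0], [2.0, 3.0]]  # area, fertilizer, pesticide rows
b = [10.0, 20.0, 18.0]                    # L, F, P
m = len(A)

# Block matrix acting on [z, x1, x2, x3, x4, x5]: first row gives
# z - S1*x1 - S2*x2 = 0, remaining rows append an identity for the slacks.
M = [[1.0] + [-s for s in S] + [0.0] * m]
for i, row in enumerate(A):
    M.append([0.0] + row + [1.0 if j == i else 0.0 for j in range(m)])

def augmented_point(x):
    """[z] + decision variables + the slacks that turn A x <= b into equalities."""
    z = sum(si * xi for si, xi in zip(S, x))
    slacks = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]
    return [z] + list(x) + slacks

v = augmented_point([6.0, 2.0])           # a feasible plan (here, the optimum)
lhs = [sum(mij * vj for mij, vj in zip(row, v)) for row in M]
print(lhs)                                # [0.0, 10.0, 20.0, 18.0], i.e. [0] + b
```

The slacks here come out as x_{3} = 2, x_{4} = 0, x_{5} = 0: two square kilometers of land go unused, while fertilizer and pesticide are exhausted.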
Duality
Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as:
 Maximize c^{T}x subject to Ax ≤ b, x ≥ 0;
 with the corresponding symmetric dual problem,
 Minimize b^{T}y subject to A^{T}y ≥ c, y ≥ 0.
An alternative primal formulation is:
 Maximize c^{T}x subject to Ax ≤ b;
 with the corresponding asymmetric dual problem,
 Minimize b^{T}y subject to A^{T}y = c, y ≥ 0.
There are two ideas fundamental to duality theory. One is the fact that (for the symmetric dual) the dual of a dual linear program is the original primal linear program. Additionally, every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual. The weak duality theorem states that the objective function value of the dual at any feasible solution is always greater than or equal to the objective function value of the primal at any feasible solution. The strong duality theorem states that if the primal has an optimal solution, x^{*}, then the dual also has an optimal solution, y^{*}, and c^{T}x^{*}=b^{T}y^{*}.
A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible. See dual linear program for details and several more examples.
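The weak and strong duality theorems can be illustrated on a tiny primal/dual pair; all coefficients below are assumptions chosen for illustration:

```python
# Primal: maximize 3*x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0
# Dual:   minimize 4*y1 + 6*y2  subject to  y1 + y2 >= 3,  y1 + 3*y2 >= 2,  y >= 0
c = [3.0, 2.0]
b = [4.0, 6.0]
A = [[1.0, 1.0], [1.0, 3.0]]

def primal_feasible(x):
    """Check A x <= b and x >= 0."""
    return min(x) >= 0 and all(
        sum(aij * xj for aij, xj in zip(row, x)) <= bi for row, bi in zip(A, b))

def dual_feasible(y):
    """Check A^T y >= c and y >= 0."""
    return min(y) >= 0 and all(
        sum(A[i][j] * y[i] for i in range(len(y))) >= c[j] for j in range(len(c)))

# Weak duality: any dual-feasible y bounds any primal-feasible x from above.
x_feas, y_feas = [1.0, 1.0], [3.0, 1.0]   # arbitrary feasible points
assert primal_feasible(x_feas) and dual_feasible(y_feas)
print(sum(ci * xi for ci, xi in zip(c, x_feas)))   # 5.0  (primal value)
print(sum(bi * yi for bi, yi in zip(b, y_feas)))   # 18.0 (dual bound, >= 5.0)

# Strong duality: equal objective values certify that both points are optimal.
x_star, y_star = [4.0, 0.0], [3.0, 0.0]   # optima, found by hand for this instance
assert primal_feasible(x_star) and dual_feasible(y_star)
print(sum(ci * xi for ci, xi in zip(c, x_star)))   # 12.0
print(sum(bi * yi for bi, yi in zip(b, y_star)))   # 12.0
```

Note how the matching objective values act as an optimality certificate: no solver output needs to be trusted, only two feasibility checks and one equality.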