Mathematical Optimization


According to the Oxford dictionary, optimization is "the action of making the best or most effective use of a situation or resource."

In Latin, “optimum” means “the ultimate ideal”; similarly, “optimus” means “the best.” Therefore, to optimize is to bring whatever we are dealing with toward its ultimate state.

Mathematical optimization is thus the process of finding the best possible solution to a problem using mathematical functions. A thorough grounding in calculus is needed to understand optimization techniques.

The following blog will cover the topic of Mathematical Optimization in depth.

 

  • Introduction to Optimization:

 

Optimization is a precise procedure using design constraints and criteria to enable the planner to find the optimal solution. Optimization techniques have been applied in numerous fields to deal with different practical problems.

Mathematical optimization is a branch of applied mathematics that is useful in many different fields like:


• Manufacturing            
• Inventory control
• Transportation
• Scheduling
• Networks
• Production
• Finance
• Engineering
• Mechanics
• Economics
• Control engineering
• Marketing
• Policy Modeling

 


An optimization problem is also termed a mathematical programming problem. In layman’s terms, it is the mechanism through which one finds an element, variable, or quantity that best fits a set of given criteria or constraints.

A basic optimization problem consists of:

1. The objective function, f(x): the output one is trying to maximize or minimize.

2. Variables, x1, x2, x3, …, xn: the inputs, i.e. the things one can control.

3. Constraints: equations and inequalities that place limits on how big or small some variables can get. Equality constraints are usually written in the form h(x) = 0 and inequality constraints in the form g(x) ≤ 0.

 

To understand this better:

Suppose, you have a job interview tomorrow.

·       Your main aim will be to prepare to the fullest and work on every skill. This is your objective function.

·       You have a set of questions to prepare, communication skills to practice, technical skills to brush up on, and much more, and you need to spend time on each. The time you spend on each task is a variable.

·       However, the total time you can spare for the preparation is limited, and so is the time for each particular task. You can’t spend it all reading Q&As or all giving practice speeches; you can only allot a fraction of your time to each task. These limits are your constraints.

For example, minimization or maximization of a scalar function is the simplest case of an optimization problem. If we have a scalar function of one or more variables, f(x_1, x_2, …, x_n), then the optimization problem is:

                   Find x_1, x_2, …, x_n for which f(x_1, x_2, …, x_n) is minimum
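As a minimal sketch of this formulation, consider minimizing an assumed one-variable objective by simply evaluating it over a grid of candidate values (the function and search range below are illustrative, not from the text):

```python
# Minimize an assumed objective f over a grid of candidate values.

def f(x):
    return (x - 3) ** 2 + 2  # minimum at x = 3, where f(3) = 2

# Evaluate f on a fine grid over [0, 10] and keep the best candidate.
candidates = [i / 1000 for i in range(0, 10001)]
best_x = min(candidates, key=f)

print(best_x, f(best_x))  # best_x lands on the minimizer x = 3
```

Grid search is only practical in one or two dimensions; the rest of the article discusses methods that scale better.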

 

 

An optimization algorithm is always working in the background of a machine-learning task. Be it supervised or unsupervised learning models, classification, regression, or any clustering problem, all of them employ optimization.

Optimization is widely used in machine learning algorithms. For example:

 

  • Gradient descent in neural networks (unconstrained optimization).
  • Method of Lagrange multipliers in support vector machines (constrained optimization).
  • Principal component analysis (constrained optimization) 
  • Clustering via expectation-maximization algorithm (constrained optimization)
  • Logistic regression (unconstrained optimization)
  • Genetic algorithms in evolutionary learning algorithms (different variants exist to solve both constrained and unconstrained optimization problems).
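The first item on the list, gradient descent, can be sketched in a few lines. The objective, starting point, learning rate, and iteration count below are illustrative assumptions:

```python
# Sketch of gradient descent (unconstrained optimization) on an
# assumed objective f(x) = (x - 3)**2.

def grad(x):
    # Derivative of f(x) = (x - 3)**2 is 2 * (x - 3).
    return 2 * (x - 3)

x = 0.0    # starting point
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)  # step downhill, against the gradient

print(x)  # converges toward the minimizer x = 3
```

In a neural network the same update is applied to millions of weights at once, with the gradient computed by backpropagation.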

 

To understand these distinctions better, we look at the types of optimization problems in the next part.

  • Types of Optimization Problems:

 

 An important step in the optimization process is classifying your optimization model since algorithms for solving optimization problems are tailored to a particular type of problem.

 

1.     Continuous vs Discrete:

 

Some models only make sense if the variables take values from a discrete set, such as a subset of the integers, while others have variables that can take any real value. Models with continuous variables are continuous optimization problems; models with discrete variables are discrete optimization problems.

Continuous optimization problems tend to be easier to solve than discrete ones: the smoothness of the functions means that the objective and constraint values at a point x can be used to deduce information about points in the neighborhood of x. Improvements in algorithms, together with advances in computing technology, have substantially reduced solution times.
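The contrast can be seen on a single assumed objective: restricting the same variable to integers generally shifts the optimum. The function and ranges below are illustrative:

```python
# The same assumed objective over continuous vs. integer variables.

def f(x):
    return (x - 2.6) ** 2  # continuous minimizer is x = 2.6

# Discrete problem: x must be an integer in 0..5; enumerate candidates.
discrete_best = min(range(6), key=f)

# Continuous problem: x may be any real value; a fine grid approximates it.
grid = [i / 100 for i in range(0, 501)]
continuous_best = min(grid, key=f)

print(discrete_best, continuous_best)  # integer optimum 3 vs. real optimum 2.6
```

Note that in general one cannot simply round the continuous optimum to get the discrete one; it happens to work here, but discrete problems usually require their own algorithms.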

 

2.     Constrained vs Unconstrained:

 

Unconstrained optimization problems arise directly in many practical applications; they also arise when constrained optimization problems are reformulated with a penalty term in the objective function replacing the constraints. Constrained optimization problems arise when the variables are constrained explicitly. Constraints range from simple bounds on the variables to systems of equalities and inequalities that reflect complex relationships among them. Constrained optimization problems can be further classified by the nature of the constraints (e.g., linear, nonlinear, convex) and by the smoothness of the functions (e.g., differentiable or nondifferentiable).
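The penalty reformulation mentioned above can be sketched concretely. The objective, constraint, and penalty weight here are illustrative assumptions: we minimize x² subject to x ≥ 1 by adding a quadratic penalty for violating the bound, which turns the constrained problem into an unconstrained one:

```python
# Penalty reformulation: minimize x**2 subject to x >= 1 by folding
# the constraint into the objective as a quadratic penalty term.

def penalized(x, mu=1000.0):
    violation = max(0.0, 1.0 - x)       # how far x falls below the bound
    return x ** 2 + mu * violation ** 2  # original objective + penalty

# Minimize the penalized objective over a grid on [-2, 3].
grid = [i / 1000 for i in range(-2000, 3001)]
best = min(grid, key=penalized)

print(best)  # close to the constrained minimizer x = 1
```

With a finite penalty weight the solution sits slightly inside the infeasible side (here at mu/(1 + mu) ≈ 0.999); penalty methods typically increase mu over successive solves to push the answer onto the constraint.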

 

3.     None, one or many:

 

Although most optimization problems have a single objective function, there are certain interesting cases with no objective function or with multiple objective functions. Feasibility problems are those in which the goal is to find values of the variables that satisfy the constraints of a model, with no particular objective to optimize. Complementarity problems are common in both engineering and economics; the goal is to find a solution that satisfies all of the complementarity conditions.

Multi-objective optimization problems arise in many domains when optimal decisions must be made in the presence of trade-offs between two or more conflicting objectives. Designing a new component, for example, can entail minimizing weight while maximizing strength.
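One common way to handle two conflicting objectives, weighted-sum scalarization, collapses them into a single objective. Both objectives and the weight below are illustrative assumptions standing in for the weight-vs.-strength trade-off:

```python
# Weighted-sum scalarization of two assumed conflicting objectives.

def weight(x):
    return x ** 2     # "weight" grows with x; we want it small

def neg_strength(x):
    return -x         # "strength" grows with x; minimize its negative

def scalarized(x, w=0.5):
    # Blend the two objectives into one with a trade-off weight w.
    return w * weight(x) + (1 - w) * neg_strength(x)

grid = [i / 1000 for i in range(0, 5001)]
best = min(grid, key=scalarized)

print(best)  # the compromise design for this choice of w
```

Sweeping the weight w from 0 to 1 traces out different compromise solutions; no single answer is best for both objectives at once, which is exactly the trade-off the text describes.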

 

4.     Deterministic vs stochastic:


In deterministic optimization, the data for the given problem are assumed to be known accurately. In stochastic optimization, or optimization under uncertainty, the uncertainty is incorporated into the model. Robust optimization techniques can be used when the parameters are known only within certain bounds; the goal is to find a solution that is feasible for all such data and optimal in some sense.

 

  • Optimizing single variate functions:

Uni-variate/single-variate optimization is non-linear optimization with no constraints, in which we aim to find the value of a single decision variable.

Such an optimization problem is usually written: minimize f(x), where f is called the objective function. The decision variable, which you adjust to minimize the function, is indicated by “w.r.t. x”, and stating that x is continuous means it may take any value on the real number line.

 

In a single-variate optimization problem, x is always a scalar variable. Such a problem can be illustrated with a 2D picture: the decision variable lies on the x-axis and the value of the function on the y-axis.

Here, the first-order necessary condition for x* to be a minimizer of the function f is f'(x*) = 0, and the second-order sufficiency condition is f''(x*) > 0.
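These two conditions can be checked numerically with finite differences. The function below, f(x) = x² − 4x + 5 with minimizer x = 2, is an assumed example:

```python
# Numerically verify the first- and second-order conditions at the
# minimizer x = 2 of an assumed example f(x) = x**2 - 4*x + 5.

def f(x):
    return x ** 2 - 4 * x + 5

x_star = 2.0
h = 1e-5

# Central finite differences approximate f'(x*) and f''(x*).
first = (f(x_star + h) - f(x_star - h)) / (2 * h)
second = (f(x_star + h) - 2 * f(x_star) + f(x_star - h)) / h ** 2

print(first, second)  # first is near 0; second is near 2, which is > 0
```

The derivative vanishing rules out a slope at x*, and the positive second derivative confirms the point is a minimum rather than a maximum or an inflection.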

 

  • Optimizing multi-variate functions:

In a multivariate optimization problem, multiple variables act as decision variables:

z = f(x1, x2, x3, x4, x5, …, xn)

 

So, in these kinds of problems, a generic objective z can be a non-linear function of the decision variables x1, x2, x3, and so on, and to optimize z one can adjust any or all of the n variables.

In multivariate optimization problems, x is a vector variable, and to illustrate such a problem (with two decision variables) one needs a 3D picture, unlike the single-variate case.

In multivariate optimization problems, the first-order necessary condition for x̄* to be a minimizer of the function f(x̄) is ∇f(x̄*) = 0, and in unconstrained multivariate optimization the second-order sufficiency condition is that the Hessian ∇²f(x̄*) is positive definite.
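The multivariate conditions can be illustrated with gradient descent on an assumed two-variable function f(x, y) = (x − 1)² + (y + 2)², whose minimizer is (1, −2); there the gradient vanishes and the Hessian (a constant 2·I) is positive definite:

```python
# Gradient descent on an assumed f(x, y) = (x - 1)**2 + (y + 2)**2,
# driving the iterate toward the point where the gradient is zero.

def grad(x, y):
    # Partial derivatives of f with respect to x and y.
    return 2 * (x - 1), 2 * (y + 2)

x, y = 0.0, 0.0  # starting point
lr = 0.1         # learning rate
for _ in range(200):
    gx, gy = grad(x, y)
    x -= lr * gx
    y -= lr * gy

print(x, y)  # approaches (1, -2), where grad f = 0
```

The descent stops making progress exactly where the first-order condition holds, and the positive-definite Hessian guarantees that point is a minimum rather than a saddle.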


Conclusion
To conclude, I would like to highlight the key points of this article:

1. Mathematical optimization is the process of finding the best possible solution to a problem using mathematical functions.

2. An important prerequisite to understanding optimization is calculus.

3. A basic optimization problem consists of an objective function, variables, and constraints.

4. The main classifications of optimization problems are continuous vs. discrete, constrained vs. unconstrained, none/one/many objectives, and deterministic vs. stochastic.

5. Uni-variate/single-variate optimization is non-linear optimization with no constraints, in which we aim to find the value of a single decision variable.

6. In a multivariate optimization problem, multiple variables act as decision variables.


I hope this blog conveyed the meaning of optimization and its applications via detailed examples. I have also tried my best to give a simple overview of all the types of optimization problems and of optimization in single-variate and multivariate problems.
Keep solving problems of every kind to get better at this topic. All the best!
