
Solving 2D Unconstrained Optimization with Gradient Ascent in MATLAB

June 14, 2024
Cameron Duncan
Cameron Duncan has over a decade of experience in numerical optimization. She earned her Ph.D. in Applied Mathematics from Stanford University, USA.

In this blog post, we explore the application of optimization algorithms to 2D functions, focusing on gradient ascent with an inexact (backtracking) line search to locate local maxima of a function f(x, y). Whether you're a student grappling with MATLAB assignments or a curious learner fascinated by numerical methods, this guide offers a systematic approach, with practical strategies to help you implement and understand these optimization techniques effectively.

We begin by defining a function f(x, y) and initializing parameters for the optimization process. The core of our method lies in implementing the calcMaxStudent function, which iteratively climbs towards the function's peak, using gradients for direction and backtracking to refine step sizes. This process ensures that each step increases the function's value by at least the amount required by the Armijo condition, balancing exploration and exploitation in the search space.

To visualize our progress and results, we generate contour plots that overlay the optimization path, providing a visual narrative of how the algorithm navigates the function landscape. Through these steps, you'll gain not only practical coding skills in MATLAB but also a deeper understanding of optimization principles applicable across diverse domains, from economics to machine learning. Whether for academic pursuits or personal enrichment, mastering these techniques equips you to tackle complex optimization challenges effectively.

2D unconstrained optimization

Understanding the Problem

Imagine you're embarking on a journey to maximize a 2D function f(x, y) using computational methods. This function could represent crucial aspects of diverse fields like economics, engineering, or scientific research, ranging from profit optimization models to identifying peaks in signal processing data. The task at hand involves developing an algorithm that efficiently ascends towards the function's local or global maximum across the xy-plane.

Implementing such algorithms, like gradient ascent with inexact line search, becomes pivotal. These methods simulate iterative climbs, where each step strategically adjusts towards higher function values based on gradients and conditions set by optimization criteria. In essence, the goal is to harness mathematical rigor within computational frameworks like MATLAB to navigate the landscape defined by f(x, y), revealing peaks that signify optimal solutions.
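Concretely, gradient ascent repeats one simple update: from the current point (x, y), move a step of size h in the direction of steepest increase,

(x, y) ← (x, y) + h · ∇f(x, y),

where h > 0 is chosen afresh at each iteration; in this post it is found by backtracking until the Armijo condition described below is satisfied.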

This journey not only sharpens programming skills but also offers insights into fundamental optimization principles. As you explore and refine these methods, you uncover their versatility in solving complex, real-world challenges, making them indispensable tools across various domains seeking efficiency and performance enhancement.

The Mathematical Foundation

Let's consider the function:

f(x, y) = -10(x - 2)^2 - 5(y + 3)^2 + 20

This function has a maximum at (2, -3). Our task is to programmatically find this maximum using gradient ascent, a common optimization technique.
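This is easy to verify by hand: the partial derivatives are ∂f/∂x = -20(x - 2) and ∂f/∂y = -10(y + 3), which both vanish exactly at (2, -3), where f(2, -3) = 20. Because both squared terms carry negative coefficients, this stationary point is the global maximum.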

Steps to Implement the Solution

Step 1: Define the Objective Function

The first step is to define the function f(x, y) in MATLAB. This function represents the landscape we are trying to explore.

f = @(x, y) -10*(x - 2).^2 - 5*(y + 3).^2 + 20; % Element-wise .^ so f also works on grids
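As a quick sanity check, evaluating the handle at the known peak should return the maximum value:

f(2, -3) % ans = 20, the peak value

Because the handle uses element-wise operators, the same definition can later be evaluated over an entire grid in a single call (see Step 4).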

Step 2: Initialize Parameters

Next, we set up the initial conditions and parameters for our optimization algorithm.

xi = -10; yi = 5; % Initial guesses for x and y
tol = 1e-2;       % Error tolerance on the gradient norm
sigma = 0.0001;   % Armijo condition constant
beta = 0.5;       % Backtracking (step-size reduction) constant
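These two constants drive the line search. Starting from a trial step h = 1, a step is accepted only if it produces a sufficient increase in f,

f((x, y) + h · ∇f) > f(x, y) + sigma · h · ||∇f||^2,

and otherwise h is multiplied by beta and the test repeats. The small sigma makes the acceptance test lenient, while beta = 0.5 halves the trial step on each rejection.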

Step 3: Implement the Optimization Function

The calcMaxStudent function implements gradient ascent with inexact line search to maximize the function f(x, y). It iteratively computes the gradient of f, adjusts the step size using backtracking to satisfy the Armijo condition, and updates the position until the gradient norm falls below the specified tolerance. It tracks the path taken and counts function evaluations, providing the final coordinates where the local maximum is found along with the number of steps taken.

This function encapsulates the core mechanics necessary to navigate and maximize the specified 2D function efficiently.

function [xypos, numsteps, numfneval] = calcMaxStudent(f, xi, yi, tol, sigma, beta)
    % Gradient ascent with inexact (backtracking) line search.
    % Analytic partial derivatives of f(x, y) = -10(x-2)^2 - 5(y+3)^2 + 20;
    % update these if you change the objective function.
    gx = @(x, y) -20*(x - 2); % Partial derivative w.r.t. x
    gy = @(x, y) -10*(y + 3); % Partial derivative w.r.t. y

    % Initialize variables
    xypos = [xi, yi]; % Store path taken
    numsteps = 0;     % Step counter
    numfneval = 0;    % Function evaluation counter (calls to f)

    while true
        % Evaluate gradient at current point
        grad = [gx(xi, yi), gy(xi, yi)];

        % Check termination criterion
        if norm(grad) < tol
            break;
        end

        % Cache the current function value for the Armijo test
        fcur = f(xi, yi);
        numfneval = numfneval + 1;

        % Perform inexact line search (backtracking)
        h = 1; % Initial step size guess
        while true
            xi_next = xi + h * grad(1);
            yi_next = yi + h * grad(2);
            numfneval = numfneval + 1;
            % Armijo (sufficient increase) condition; grad * grad' = ||grad||^2
            if f(xi_next, yi_next) > fcur + sigma * h * (grad * grad')
                break;
            else
                h = beta * h; % Reduce step size
            end
        end

        % Update position
        xi = xi_next;
        yi = yi_next;
        xypos = [xypos; xi, yi]; % Store path
        numsteps = numsteps + 1;
    end
end
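With the function in place, a single call (using the handle and parameters from Steps 1 and 2) runs the optimization and reports where it ended up:

[xypos, numsteps, numfneval] = calcMaxStudent(f, xi, yi, tol, sigma, beta);
fprintf('Maximum found at (%.4f, %.4f) after %d steps and %d function evaluations.\n', ...
    xypos(end, 1), xypos(end, 2), numsteps, numfneval);

For this concave quadratic, the reported point should match the analytic maximum (2, -3) to within the 1e-2 gradient tolerance.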

Step 4: Generate Visualization

Visualizing the optimization path through a contour plot offers valuable insights into the algorithm's journey across the function's landscape. By plotting the contours of the objective function f(x, y) and superimposing the trajectory traced by the gradient ascent algorithm, we can observe how the algorithm navigates towards the local maximum. This graphical representation not only enhances understanding of the optimization process but also highlights key aspects such as convergence behavior, step sizes, and the influence of algorithmic parameters like the step-size reduction factor and the Armijo condition threshold.

% Generate contour plot
x_range = [-12, 12];
y_range = [-8, 7]; % Wide enough to show both the start (y = 5) and the maximum (y = -3)
[X, Y] = meshgrid(x_range(1):0.1:x_range(2), y_range(1):0.1:y_range(2));
Z = f(X, Y); % Evaluates element-wise thanks to .^ in the definition of f
contour(X, Y, Z, 30); % Contour plot with 30 levels
hold on;
plot(xypos(:, 1), xypos(:, 2), '-o'); % Overlay the optimization path
colorbar; % Add color bar
xlabel('x');
ylabel('y');
title('Contour Plot with Gradient Ascent Path');
hold off;
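As an optional complement (a sketch assuming X, Y, Z, and xypos from the code above are still in the workspace), a 3-D surface view shows the same ascent against the height of the landscape:

figure;
surf(X, Y, Z, 'EdgeColor', 'none'); % Smooth surface of f
hold on;
plot3(xypos(:, 1), xypos(:, 2), f(xypos(:, 1), xypos(:, 2)), '-or'); % Path lifted onto the surface
xlabel('x'); ylabel('y'); zlabel('f(x, y)');
title('Surface of f with Gradient Ascent Path');
hold off;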

Conclusion

In conclusion, this blog post serves as a thorough introduction to implementing gradient ascent with an inexact line search in MATLAB for solving 2D unconstrained optimization problems. By following the outlined steps, you not only grasp the foundational concepts behind optimization algorithms but also gain practical coding and visualization skills crucial for tackling real-world numerical challenges.

Through the defined steps, from function definition and parameter initialization to the implementation of the calcMaxStudent function, you've learned how to approach and solve optimization tasks systematically and programmatically. The provided MATLAB code illustrates the application of gradient ascent, emphasizing iterative improvement towards the local maximum of a specific function.

Moreover, the visualization component with contour plots enhances understanding by depicting the path taken by the optimization algorithm across the function's landscape. This visual feedback is invaluable for debugging, validating algorithm behavior, and gaining insights into algorithmic efficiency.

Beyond basic implementation, the post encourages exploration into more complex functions and parameter adjustments. This experimentation is pivotal for optimizing performance and adapting the algorithm to various optimization scenarios effectively.
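For instance (a hypothetical experiment reusing the setup above), varying the backtracking factor shows how the parameters trade the number of steps against the number of function evaluations:

beta = 0.8; % Gentler step-size reduction than the original 0.5
[xypos2, numsteps2, numfneval2] = calcMaxStudent(f, xi, yi, tol, sigma, beta);
fprintf('beta = 0.8: %d steps, %d function evaluations.\n', numsteps2, numfneval2);

Comparing these counts against the beta = 0.5 run makes the cost of the line search tangible.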

Armed with this newfound knowledge, you're well-equipped to tackle assignments and projects involving numerical optimization in MATLAB confidently. Whether in academic settings or professional environments, these skills are foundational for addressing optimization challenges across diverse fields.

In closing, understanding gradient ascent with inexact line search not only enriches your theoretical understanding but also empowers you to apply these principles to solve real-world problems effectively. Happy coding and exploring the vast realm of optimization algorithms in MATLAB!

