How to use the fmincon function in MATLAB
I. An introduction to the fmincon function
The standard form of the problem it solves is

min F(X)
s.t.
A*X <= b
Aeq*X = beq
G(X) <= 0
Ceq(X) = 0
VLB <= X <= VUB

where X is an n-dimensional variable vector, G(X) and Ceq(X) are vectors of nonlinear functions, and the remaining quantities have the same meaning as in linear and quadratic programming. Solving this problem with MATLAB takes three basic steps:
1. Create an M-file fun.m that defines the objective function F(X):
function f = fun(X)
f = F(X);
2. If the constraints include nonlinear conditions G(X) <= 0 or Ceq(X) = 0, create an M-file nonlcon.m that defines the functions G(X) and Ceq(X):
function [G, Ceq] = nonlcon(X)
G = ...
Ceq = ...
3. Write the main program. The nonlinear programming solver is fmincon, and its basic calling syntax is:
[X, FVAL] = fmincon(@fun, X0, A, b, Aeq, beq, VLB, VUB, @nonlcon, options)
Notes:
(1) fmincon provides both a large-scale and a medium-scale algorithm. By default, if the gradient is supplied in fun (the GradObj field of the options structure is set to 'on') and the problem has only bounds or only linear equality constraints, fmincon selects the large-scale algorithm; for other combinations of constraints it uses the medium-scale algorithm.
(2) The medium-scale algorithm is a sequential quadratic programming (SQP) method: each iteration solves a quadratic programming subproblem and updates the Hessian of the Lagrangian with the BFGS formula.
(3) fmincon may return a local minimum; which one it finds depends on the choice of starting point X0.
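Note (3) is easy to reproduce. The sketch below uses Python's scipy.optimize.minimize (a rough analogue of fmincon/fminunc, used here only because it is easy to run outside MATLAB); the one-dimensional function x^4 - 4*x^2 + x and both starting points are made up purely for illustration:

```python
from scipy.optimize import minimize

# A function with two local minima: one near x = -1.47, one near x = 1.35.
def f(x):
    return x[0]**4 - 4*x[0]**2 + x[0]

res_left = minimize(f, x0=[-2.0])   # started on the left slope
res_right = minimize(f, x0=[2.0])   # started on the right slope

# Same problem, two different starting points, two different answers.
print(res_left.x, res_left.fun)
print(res_right.x, res_right.fun)
```

Both runs report success, but only the left start finds the lower of the two minima; exactly the same dependence on X0 applies to fmincon.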
II. Examples
1. Method one: set the bounds directly
This means supplying the matrices A, b, and the other bound arguments directly.
Example 1:
min f = -x1 - 2*x2 + (1/2)*x1^2 + (1/2)*x2^2
s.t.
2*x1 + 3*x2 <= 6
x1 + 4*x2 <= 5
x1, x2 >= 0
function ex131101
x0 = [1; 1];            % starting point
A = [2, 3; 1, 4];       % linear inequality constraints A*x <= b
b = [6; 5];
Aeq = [];               % no linear equality constraints
beq = [];
VLB = [0; 0];           % lower bounds: x1, x2 >= 0
VUB = [];               % no upper bounds
[x, fval] = fmincon(@fun3, x0, A, b, Aeq, beq, VLB, VUB)

function f = fun3(x)
f = -x(1) - 2*x(2) + (1/2)*x(1)^2 + (1/2)*x(2)^2;
2. Method two: specify the constraints through a function
Example 2:
min f(x) = exp(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)
s.t.
x1 + x2 = 0
1.5 + x1*x2 - x1 - x2 <= 0
-x1*x2 - 10 <= 0
function youh3
clc;
x0 = [-1, 1];           % starting point
A = []; b = [];         % no linear inequality constraints
Aeq = []; beq = [];     % no linear equality constraints
vlb = []; vub = [];     % no bounds
[x, fval] = fmincon(@fun4, x0, A, b, Aeq, beq, vlb, vub, @mycon)

function f = fun4(x)
f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

function [g, ceq] = mycon(x)
% nonlinear inequalities g(x) <= 0 and equality ceq(x) = 0
g = [1.5 + x(1)*x(2) - x(1) - x(2); -x(1)*x(2) - 10];
ceq = x(1) + x(2);
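The same problem can be sketched in Python with scipy.optimize.minimize and SLSQP. The two rows of mycon's g(x) <= 0 are negated to fit SLSQP's g(x) >= 0 convention, and the equality x1 + x2 = 0 is passed with type 'eq'. As note (3) above warns, the point found from x0 = (-1, 1) is only guaranteed to be a local minimum:

```python
import numpy as np
from scipy.optimize import minimize

# Objective from Example 2.
def f(x):
    return np.exp(x[0]) * (4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1)

cons = [
    {'type': 'eq',   'fun': lambda x: x[0] + x[1]},
    # the two g(x) <= 0 rows of mycon, negated for SLSQP
    {'type': 'ineq', 'fun': lambda x: -(1.5 + x[0]*x[1] - x[0] - x[1])},
    {'type': 'ineq', 'fun': lambda x: -(-x[0]*x[1] - 10)},
]

res = minimize(f, x0=[-1.0, 1.0], method='SLSQP', constraints=cons)
print(res.x, res.fun)
```

The returned point satisfies all three constraints to within the solver tolerance; restarting from other x0 values may land on a different (possibly lower) local minimum, which is worth trying in the MATLAB version too.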
3. Advanced usage: supplying gradients and passing extra parameters
The example below uses the unconstrained optimizer fminunc; the approach is identical for fmincon, simply leave the constraint arguments empty.
(1) Define the objective function
function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
%   J = COSTFUNCTION(theta, X, y) computes the cost of using theta as
%   the parameter for logistic regression, and the gradient of the
%   cost w.r.t. the parameters.

m = length(y);               % number of training examples

z = X * theta;
hx = 1 ./ (1 + exp(-z));     % sigmoid hypothesis h_theta(x)

% cross-entropy cost
J = (1/m) * (-y' * log(hx) - (1 - y)' * log(1 - hx));

% gradient of the cost; grad has the same dimensions as theta
grad = zeros(size(theta));
for j = 1:length(theta)
    grad(j) = (1/m) * (hx - y)' * X(:,j);
end
end
(2) Run the optimizer to find the minimum
% Set options for fminunc: 'GradObj' set to 'on' tells it that
% costFunction also returns the gradient as a second output
options = optimset('GradObj', 'on', 'MaxIter', 400);

% Run fminunc to obtain the optimal theta and the corresponding cost.
% The anonymous function @(t) costFunction(t, X, y) is what passes the
% extra parameters X and y through to the objective.
[theta, cost] = ...
    fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
% Without a user-supplied gradient, the call would simply be:
% [theta, cost] = ...
%     fminunc(@(t)(costFunction(t, X, y)), initial_theta);

% Print theta to screen
fprintf('Cost at theta found by fminunc: %f\n', cost);
fprintf('theta:\n');
fprintf(' %f\n', theta);
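The same gradient-supplying pattern can be sketched in Python: in scipy.optimize.minimize, jac=True plays the role of fminunc's GradObj 'on' (the objective returns a (cost, gradient) pair), and args=(X, y) plays the role of the anonymous-function trick for passing extra parameters. The tiny dataset below is made up purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def cost_function(theta, X, y):
    # Cost and gradient for logistic regression (analogue of costFunction.m).
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-X @ theta))          # sigmoid hypothesis
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
    grad = X.T @ (h - y) / m
    return J, grad

# Made-up, non-separable dataset: intercept column plus one feature.
X = np.array([[1.0,  0.5], [1.0, -1.5], [1.0, 2.0],
              [1.0, -0.3], [1.0,  1.0], [1.0, -0.5]])
y = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
initial_theta = np.zeros(2)

# jac=True: the objective returns (cost, gradient), like GradObj 'on';
# args=(X, y): extra parameters, like @(t) costFunction(t, X, y).
res = minimize(cost_function, initial_theta, args=(X, y),
               jac=True, method='BFGS', options={'maxiter': 400})
print('Cost at theta found by minimize: %f' % res.fun)
print('theta:', res.x)
```

Since the cost at theta = 0 is log(2) ≈ 0.6931, the optimized cost should come out strictly below that, with a near-zero gradient at the solution.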