
BP Algorithm Source Code

Published: 2024-11-24 07:06:40

❶ About the input problem of the neural network BP algorithm

Yes, you can!
It's called a BP network because it uses the backpropagation algorithm, a result-driven self-learning method, and it can be applied to Gomoku (five-in-a-row), since the game itself is a clearly result-driven process.
Roughly, the procedure is:
1. Define the input and output types, e.g. both as coordinates [x1,y1], [x2,y2], ...
2. Training:
You tell the network:
in situation A, it should output A1
in situation B, it should output B1
in situation C, it should output C1
...
in situation A+B, it should output AB1
3. Testing:
You ask the network: in situation A+B+C, what should the output be, i.e. where should the next stone go (what is [x,y])? The network then computes the coordinates you want from the earlier BP training.

Use the MATLAB Neural Network Toolbox for this; it is not very hard. Also, don't make the Gomoku board too large: training difficulty and time grow exponentially with board size.
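
As a rough illustration (not part of the original answer), a minimal Neural Network Toolbox sketch might look like the following; the board encoding, the stand-in training data, and the network size are all hypothetical choices:

% Minimal sketch: train a BP network to map board states to a move.
% Assumes a 9x9 board flattened to an 81-element input vector
% (0 = empty, 1 = own stone, -1 = opponent's stone), and a 2-element
% output [x; y] for the suggested move. All sizes are illustrative.
nSamples = 200;                      % hypothetical number of recorded positions
X = randi([-1 1], 81, nSamples);     % stand-in for recorded board states
T = randi([1 9], 2, nSamples);       % stand-in for expert moves [x; y]
net = feedforwardnet(20);            % one hidden layer with 20 neurons
net.trainFcn = 'trainlm';            % Levenberg-Marquardt backpropagation
net = train(net, X, T);
board = randi([-1 1], 81, 1);        % a new position to query
move = round(net(board))             % predicted [x; y]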

❷ A new 2024 algorithm: Red-billed Blue Magpie Optimizer (RBMO) for BP neural network regression prediction

The Red-billed Blue Magpie Optimizer (RBMO), introduced in 2024, offers a new algorithm for optimizing BP neural network regression prediction. RBMO is inspired by the efficient cooperative hunting of red-billed blue magpies: it builds its mathematical model for parameter optimization by simulating how the birds search for, chase, and attack prey and store food. The algorithm has been shown to perform strongly on numerical optimization, UAV path planning, and engineering design problems, especially NP-hard ones. It performed well in UAV trajectory planning and produced the lowest cost on all five engineering design problems tested, demonstrating its value on practical problems; in comparisons with widely recognized algorithms, RBMO consistently came out on top.

The BP neural network, a foundational machine learning technique, handles complex nonlinear problems through its multilayer structure. Each layer consists of neurons connected to the next by weight and bias parameters. These parameters are randomly initialized at the start of training; forward propagation computes the output error, and the backpropagation algorithm then adjusts the parameters to improve network performance. BP neural networks are widely applied to regression/classification, image recognition, speech processing, natural language processing, and many other fields.
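
As a loose illustration of that forward/backward cycle (not from the original article), here is a hand-rolled single-hidden-layer BP update in MATLAB; the toy data, layer sizes, learning rate, and sigmoid activation are all assumptions:

% One thousand epochs of plain BP (gradient descent) on a tiny regression net.
rng(0);
X = rand(4, 50);  T = sum(X, 1);        % toy data: 4 inputs -> 1 target
nH = 8;  lr = 0.1;                      % hidden size, learning rate (assumed)
W1 = randn(nH, 4); b1 = zeros(nH, 1);   % random initialization of weights/biases
W2 = randn(1, nH); b2 = 0;
sig = @(z) 1 ./ (1 + exp(-z));
for epoch = 1:1000
    H  = sig(W1*X + b1);                % forward pass, hidden layer
    Y  = W2*H + b2;                     % forward pass, linear output
    E  = Y - T;                         % output error
    dW2 = E*H' / size(X,2);  db2 = mean(E);
    dH  = (W2'*E) .* H .* (1-H);        % backpropagate through the sigmoid
    dW1 = dH*X' / size(X,2); db1 = mean(dH, 2);
    W1 = W1 - lr*dW1;  b1 = b1 - lr*db1;  % weight/bias update
    W2 = W2 - lr*dW2;  b2 = b2 - lr*db2;
end
fprintf('final MSE: %g\n', mean((W2*sig(W1*X+b1)+b2 - T).^2));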

The RBMO-BP neural network combines the RBMO optimizer with a BP neural network, tuning the network parameters to improve model performance; it is particularly well suited to complex regression prediction tasks. The RBMO optimizer provides an efficient way to raise both the training speed and the accuracy of a BP neural network.

The experiments use the Boston housing dataset to demonstrate the predictive advantage of the RBMO-BP network. They involve running the main_BP, main_RBMO, and main_BPvsBP_RBMO scripts, which output the evaluation metrics R2, RMSE, MSE, MAPE, and MAE. The RBMO convergence curve, prediction comparison plots, prediction error comparison plots, and all evaluation metrics are provided to visualize algorithm performance.
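
For reference (these formulas are standard, though the article's own scripts are not shown), the five metrics can be computed from a prediction vector y and target vector t as follows:

% Regression evaluation metrics, given targets t and predictions y
% (both vectors of equal length).
e    = y - t;
MSE  = mean(e.^2);
RMSE = sqrt(MSE);
MAE  = mean(abs(e));
MAPE = mean(abs(e ./ t)) * 100;          % assumes no zero targets
R2   = 1 - sum(e.^2) / sum((t - mean(t)).^2);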

The complete code includes the iteration and evaluation counts, can be adapted to improve classic algorithms, and runs with one click; both regression and classification prediction are provided. Tutoring services are offered for papers (SCI, EI, core journals, regular journals, conferences, patents, software copyright, etc.). Parts of the code come from the Internet; in case of any infringement, please contact us for removal.

❸ Errors keep occurring when running the MATLAB code for a genetic-algorithm-based BP neural network!!!

This problem bothered me for a long time too, and I finally solved it. Here is a ga.m program: create a new m-file, paste this in, and try running your program again.
%ga.m
function [x,endPop,bPop,traceInfo] = ga(bounds,evalFN,evalOps,startPop,opts,...
termFN,termOps,selectFN,selectOps,xOverFNs,xOverOps,mutFNs,mutOps)
% GA run a genetic algorithm
% function [x,endPop,bPop,traceInfo]=ga(bounds,evalFN,evalOps,startPop,opts,
% termFN,termOps,selectFN,selectOps,
% xOverFNs,xOverOps,mutFNs,mutOps)
%
% Output Arguments:
% x - the best solution found during the course of the run
% endPop - the final population
% bPop - a trace of the best population
% traceInfo - a matrix of best and means of the ga for each generation
%
% Input Arguments:
% bounds - a matrix of upper and lower bounds on the variables
% evalFN - the name of the evaluation .m function
% evalOps - options to pass to the evaluation function ([NULL])
% startPop - a matrix of solutions that can be initialized
% from initialize.m
% opts - [epsilon prob_ops display] change required to consider two
% solutions different, prob_ops 0 if you want to apply the
% genetic operators probabilisticly to each solution, 1 if
% you are supplying a deterministic number of operator
% applications and display is 1 to output progress 0 for
% quiet. ([1e-6 1 0])
% termFN - name of the .m termination function (['maxGenTerm'])
% termOps - options string to be passed to the termination function
% ([100]).
% selectFN - name of the .m selection function (['normGeomSelect'])
% selectOps - options string to be passed to select after
% select(pop,#,opts) ([0.08])
% xOverFNs - a string containing blank separated names of Xover.m
% files (['arithXover heuristicXover simpleXover'])
% xOverOps - A matrix of options to pass to Xover.m files with the
% first column being the number of that xOver to perform
% similarly for mutation ([2 0;2 3;2 0])
% mutFNs - a string containing blank separated names of mutation.m
% files (['boundaryMutation multiNonUnifMutation ...
% nonUnifMutation unifMutation'])
% mutOps - A matrix of options to pass to the mutation .m files with the
% first column being the number of times to apply that mutation
% ([4 0 0;6 100 3;4 100 3;4 0 0])

% Binary and Real-Valued Simulation Evolution for Matlab
% Copyright (C) 1996 C.R. Houck, J.A. Joines, M.G. Kay
%
% C.R. Houck, J.Joines, and M.Kay. A genetic algorithm for function
% optimization: A Matlab implementation. ACM Transactions on Mathematical
% Software, Submitted 1996.
%
% This program is free software; you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation; either version 1, or (at your option)
% any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details. A copy of the GNU
% General Public License can be obtained from the
% Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

%%$Log: ga.m,v $
%Revision 1.10 1996/02/02 15:03:00 jjoine
% Fixed the ordering of input arguments in the comments to match
% the actual order in the ga function.
%
%Revision 1.9 1995/08/28 20:01:07 chouck
% Updated initialization parameters, updated mutation parameters to reflect
% b being the third option to the nonuniform mutations
%
%Revision 1.8 1995/08/10 12:59:49 jjoine
%Started Logfile to keep track of revisions
%

n=nargin;
if n<2 | n==6 | n==10 | n==12
disp('Insufficient arguments')
end
if n<3 %Default evaluation opts.
evalOps=[];
end
if n<5
opts = [1e-6 1 0];
end
if isempty(opts)
opts = [1e-6 1 0];
end

if any(evalFN<48) %Not using a .m file
if opts(2)==1 %Float ga
e1str=['x=c1; c1(xZomeLength)=', evalFN ';'];
e2str=['x=c2; c2(xZomeLength)=', evalFN ';'];
else %Binary ga
e1str=['x=b2f(endPop(j,:),bounds,bits); endPop(j,xZomeLength)=',...
evalFN ';'];
end
else %Are using a .m file
if opts(2)==1 %Float ga
e1str=['[c1 c1(xZomeLength)]=' evalFN '(c1,[gen evalOps]);'];
e2str=['[c2 c2(xZomeLength)]=' evalFN '(c2,[gen evalOps]);'];
else %Binary ga
e1str=['x=b2f(endPop(j,:),bounds,bits);[x v]=' evalFN ...
'(x,[gen evalOps]); endPop(j,:)=[f2b(x,bounds,bits) v];'];
end
end

if n<6 %Default termination information
termOps=[100];
termFN='maxGenTerm';
end
if n<12 %Default mutation information
if opts(2)==1 %Float GA
mutFNs=['boundaryMutation multiNonUnifMutation nonUnifMutation unifMutation'];
mutOps=[4 0 0;6 termOps(1) 3;4 termOps(1) 3;4 0 0];
else %Binary GA
mutFNs=['binaryMutation'];
mutOps=[0.05];
end
end
if n<10 %Default crossover information
if opts(2)==1 %Float GA
xOverFNs=['arithXover heuristicXover simpleXover'];
xOverOps=[2 0;2 3;2 0];
else %Binary GA
xOverFNs=['simpleXover'];
xOverOps=[0.6];
end
end
if n<9 %Default select opts only i.e. roullete wheel.
selectOps=[];
end
if n<8 %Default select info
selectFN=['normGeomSelect'];
selectOps=[0.08];
end
if n<6 %Default termination information
termOps=[100];
termFN='maxGenTerm';
end
if n<4 %No starting population passed given
startPop=[];
end
if isempty(startPop) %Generate a population at random
%startPop=zeros(80,size(bounds,1)+1);
startPop=initializega(80,bounds,evalFN,evalOps,opts(1:2));
end

if opts(2)==0 %binary
bits=calcbits(bounds,opts(1));
end

xOverFNs=parse(xOverFNs);
mutFNs=parse(mutFNs);

xZomeLength = size(startPop,2); %Length of the xzome=numVars+fitness
numVar = xZomeLength-1; %Number of variables
popSize = size(startPop,1); %Number of individuals in the pop
endPop = zeros(popSize,xZomeLength); %A secondary population matrix
c1 = zeros(1,xZomeLength); %An individual
c2 = zeros(1,xZomeLength); %An individual
numXOvers = size(xOverFNs,1); %Number of Crossover operators
numMuts = size(mutFNs,1); %Number of Mutation operators
epsilon = opts(1); %Threshold for two fitness values to differ
oval = max(startPop(:,xZomeLength)); %Best value in start pop
bFoundIn = 1; %Number of times best has changed
done = 0; %Done with simulated evolution
gen = 1; %Current Generation Number
collectTrace = (nargout>3); %Should we collect info every gen
floatGA = opts(2)==1; %Probabilistic application of ops
display = opts(3); %Display progress

while(~done)
%Elitist Model
[bval,bindx] = max(startPop(:,xZomeLength)); %Best of current pop
best = startPop(bindx,:);

if collectTrace
traceInfo(gen,1)=gen; %current generation
traceInfo(gen,2)=startPop(bindx,xZomeLength); %Best fitness
traceInfo(gen,3)=mean(startPop(:,xZomeLength)); %Avg fitness
traceInfo(gen,4)=std(startPop(:,xZomeLength));
end

if ( (abs(bval - oval)>epsilon) | (gen==1)) %If we have a new best sol
if display
fprintf(1,'\n%d %f\n',gen,bval); %Update the display
end
if floatGA
bPop(bFoundIn,:)=[gen startPop(bindx,:)]; %Update bPop Matrix
else
bPop(bFoundIn,:)=[gen b2f(startPop(bindx,1:numVar),bounds,bits)...
startPop(bindx,xZomeLength)];
end
bFoundIn=bFoundIn+1; %Update number of changes
oval=bval; %Update the best val
else
if display
fprintf(1,'%d ',gen); %Otherwise just update num gen
end
end

endPop = feval(selectFN,startPop,[gen selectOps]); %Select

if floatGA %Running with the model where the parameters are numbers of ops
for i=1:numXOvers,
for j=1:xOverOps(i,1),
a = round(rand*(popSize-1)+1); %Pick a parent
b = round(rand*(popSize-1)+1); %Pick another parent
xN=deblank(xOverFNs(i,:)); %Get the name of crossover function
[c1 c2] = feval(xN,endPop(a,:),endPop(b,:),bounds,[gen xOverOps(i,:)]);

if c1(1:numVar)==endPop(a,(1:numVar)) %Make sure we created a new
c1(xZomeLength)=endPop(a,xZomeLength); %solution before evaluating
elseif c1(1:numVar)==endPop(b,(1:numVar))
c1(xZomeLength)=endPop(b,xZomeLength);
else
%[c1(xZomeLength) c1] = feval(evalFN,c1,[gen evalOps]);
eval(e1str);
end
if c2(1:numVar)==endPop(a,(1:numVar))
c2(xZomeLength)=endPop(a,xZomeLength);
elseif c2(1:numVar)==endPop(b,(1:numVar))
c2(xZomeLength)=endPop(b,xZomeLength);
else
%[c2(xZomeLength) c2] = feval(evalFN,c2,[gen evalOps]);
eval(e2str);
end

endPop(a,:)=c1;
endPop(b,:)=c2;
end
end

for i=1:numMuts,
for j=1:mutOps(i,1),
a = round(rand*(popSize-1)+1);
c1 = feval(deblank(mutFNs(i,:)),endPop(a,:),bounds,[gen mutOps(i,:)]);
if c1(1:numVar)==endPop(a,(1:numVar))
c1(xZomeLength)=endPop(a,xZomeLength);
else
%[c1(xZomeLength) c1] = feval(evalFN,c1,[gen evalOps]);
eval(e1str);
end
endPop(a,:)=c1;
end
end

else %We are running a probabilistic model of genetic operators
for i=1:numXOvers,
xN=deblank(xOverFNs(i,:)); %Get the name of crossover function
cp=find(rand(popSize,1)<xOverOps(i,1)==1);
if rem(size(cp,1),2) cp=cp(1:(size(cp,1)-1)); end
cp=reshape(cp,size(cp,1)/2,2);
for j=1:size(cp,1)
a=cp(j,1); b=cp(j,2);
[endPop(a,:) endPop(b,:)] = feval(xN,endPop(a,:),endPop(b,:),...
bounds,[gen xOverOps(i,:)]);
end
end
for i=1:numMuts
mN=deblank(mutFNs(i,:));
for j=1:popSize
endPop(j,:) = feval(mN,endPop(j,:),bounds,[gen mutOps(i,:)]);
eval(e1str);
end
end
end

gen=gen+1;
done=feval(termFN,[gen termOps],bPop,endPop); %See if the ga is done
startPop=endPop; %Swap the populations

[bval,bindx] = min(startPop(:,xZomeLength)); %Keep the best solution
startPop(bindx,:) = best; %replace it with the worst
end

[bval,bindx] = max(startPop(:,xZomeLength));
if display
fprintf(1,'\n%d %f\n',gen,bval);
end

x=startPop(bindx,:);
if opts(2)==0 %binary
x=b2f(x,bounds,bits);
bPop(bFoundIn,:)=[gen b2f(startPop(bindx,1:numVar),bounds,bits)...
startPop(bindx,xZomeLength)];
else
bPop(bFoundIn,:)=[gen startPop(bindx,:)];
end

if collectTrace
traceInfo(gen,1)=gen; %current generation
traceInfo(gen,2)=startPop(bindx,xZomeLength); %Best fitness
traceInfo(gen,3)=mean(startPop(:,xZomeLength)); %Avg fitness
end
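
For context (not part of the original answer), a typical call to this ga function might look like the sketch below; the objective function name and the bounds are hypothetical, and it assumes the rest of the GAOT toolbox (initializega, maxGenTerm, normGeomSelect, and the crossover/mutation m-files) is on the MATLAB path:

% Hypothetical usage: maximize a simple 2-variable function with ga.m.
%
% Contents of gaEval.m (the evaluation function; GAOT evaluation
% functions take the solution plus options and return the possibly
% modified solution and its fitness value):
%   function [sol, val] = gaEval(sol, options)
%   val = -sum((sol(1:2) - [1 2]).^2);   % fitness peaks at (1, 2)
%
bounds = [-5 5; -5 5];                   % lower/upper bound per variable
[x, endPop, bPop, trace] = ga(bounds, 'gaEval');
bestVars = x(1:2), bestVal = x(3)        % best variables and their fitness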

❹ Sparrow Search Algorithm (SSA) optimized BP neural network (SSA-BP) regression prediction: MATLAB implementation

This article presents a MATLAB implementation that applies the Sparrow Search Algorithm (SSA) to optimize a BP neural network (SSA-BP) for regression prediction. SSA is a recent optimization algorithm inspired by the foraging behavior of birds: by simulating the search strategy of a sparrow flock, it improves global optimization efficiency and the flexibility of parameter settings. In SSA-BP, the BP neural network is built first, with its input and output variables defined; SSA then optimizes the network's weights and biases, dividing the flock into a main group and an auxiliary group and using foraging behavior plus a scout early-warning mechanism to avoid local optima.

In each iteration, the producers in the main group adjust their positions according to fitness values, while followers with low fitness try to migrate to other regions. During BP network optimization, SSA adjusts the network parameters, searching for the best weights and biases over repeated iterations. The experimental section shows how the optimal number of hidden-layer nodes is chosen and compares SSA-BP with plain BP in prediction accuracy and error. Fitness evolution curves, regression plots, and error histograms are also provided to visualize the algorithm's performance.
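
The coupling between any such optimizer and the BP network is an objective function that maps a candidate weight/bias vector to a network error. A minimal sketch (mine, not the article's code; it assumes the Neural Network Toolbox, and an external ssa routine that the article's package would supply):

% Fitness function for optimizer-driven BP training: unpack a candidate
% weight/bias vector into the network and return its RMSE on training data.
function f = bpFitness(wb, net, X, T)
    net = setwb(net, wb(:));             % load candidate weights/biases
    Y   = net(X);                        % forward pass
    f   = sqrt(mean((Y - T).^2, 'all')); % RMSE for the optimizer to minimize
end

% Usage sketch (the SSA routine itself is not shown; 'ssa' and the
% bound vectors lb/ub are assumptions):
%   net = feedforwardnet(10);
%   net = configure(net, X, T);          % fix layer sizes to the data
%   dim = length(getwb(net));            % number of weights and biases
%   wb  = ssa(@(w) bpFitness(w, net, X, T), dim, lb, ub);
%   net = setwb(net, wb(:));             % install the optimized parameters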

Note that, because of the algorithm's randomness, results may differ from run to run, so repeated experiments are needed to verify the model's stability and reliability. The MATLAB code supplied with the article should help readers understand and implement this optimization strategy for regression prediction.

❺ Implementation of the MATLAB BP neural network training functions (traingdm, trainlm, trainbr) and the corresponding VC source code

VC source code? You must be joking...
Here is the m-code for trainlm:

function [out1,out2] = trainlm(varargin)
%TRAINLM Levenberg-Marquardt backpropagation.
%
% <a href="matlab:doc trainlm">trainlm</a> is a network training function that updates weight and
% bias values according to Levenberg-Marquardt optimization.
%
% <a href="matlab:doc trainlm">trainlm</a> is often the fastest backpropagation algorithm in the toolbox,
% and is highly recommended as a first choice supervised algorithm,
% although it does require more memory than other algorithms.
%
% [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T) takes a network NET, input data X
% and target data T and returns the network after training it, and a
% training record TR.
%
% [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T,Xi,Ai,EW) takes additional optional
% arguments suitable for training dynamic networks and training with
% error weights. Xi and Ai are the initial input and layer delays states
% respectively and EW defines error weights used to indicate
% the relative importance of each target value.
%
% Training occurs according to training parameters, with default values.
% Any or all of these can be overridden with parameter name/value argument
% pairs appended to the input argument list, or by appending a structure
% argument with fields having one or more of these names.
% show 25 Epochs between displays
% showCommandLine 0 generate command line output
% showWindow 1 show training GUI
% epochs 1000 Maximum number of epochs to train
% goal 0 Performance goal
% max_fail 5 Maximum validation failures
% min_grad 1e-10 Minimum performance gradient
% mu 0.001 Initial Mu
% mu_dec 0.1 Mu decrease factor
% mu_inc 10 Mu increase factor
% mu_max 1e10 Maximum Mu
% time inf Maximum time to train in seconds
%
% To make this the default training function for a network, and view
% and/or change parameter settings, use these two properties:
%
% net.<a href="matlab:doc nnproperty.net_trainFcn">trainFcn</a> = 'trainlm';
% net.<a href="matlab:doc nnproperty.net_trainParam">trainParam</a>
%
% See also trainscg, feedforwardnet, narxnet.

% Mark Beale, 11-31-97, ODJ 11/20/98
% Updated by Orlando De Jesús, Martin Hagan, Dynamic Training 7-20-05
% Copyright 1992-2010 The MathWorks, Inc.
% $Revision: 1.1.6.11.2.2 $ $Date: 2010/07/23 15:40:16 $

%% =======================================================
% BOILERPLATE_START
% This code is the same for all Training Functions.

persistent INFO;
if isempty(INFO), INFO = get_info; end
nnassert.minargs(nargin,1);
in1 = varargin{1};
if ischar(in1)
switch (in1)
case 'info'
out1 = INFO;
case 'check_param'
nnassert.minargs(nargin,2);
param = varargin{2};
err = nntest.param(INFO.parameters,param);
if isempty(err)
err = check_param(param);
end
if nargout > 0
out1 = err;
elseif ~isempty(err)
nnerr.throw('Type',err);
end
otherwise,
try
out1 = eval(['INFO.' in1]);
catch me, nnerr.throw(['Unrecognized first argument: ''' in1 ''''])
end
end
return
end
nnassert.minargs(nargin,2);
net = nn.hints(nntype.network('format',in1,'NET'));
oldTrainFcn = net.trainFcn;
oldTrainParam = net.trainParam;
if ~strcmp(net.trainFcn,mfilename)
net.trainFcn = mfilename;
net.trainParam = INFO.defaultParam;
end
[args,param] = nnparam.extract_param(varargin(2:end),net.trainParam);
err = nntest.param(INFO.parameters,param);
if ~isempty(err), nnerr.throw(nnerr.value(err,'NET.trainParam')); end
if INFO.isSupervised && isempty(net.performFcn) % TODO - fill in MSE
nnerr.throw('Training function is supervised but NET.performFcn is undefined.');
end
if INFO.usesGradient && isempty(net.derivFcn) % TODO - fill in
nnerr.throw('Training function uses derivatives but NET.derivFcn is undefined.');
end
if net.hint.zeroDelay, nnerr.throw('NET contains a zero-delay loop.'); end
[X,T,Xi,Ai,EW] = nnmisc.defaults(args,{},{},{},{},{1});
X = nntype.data('format',X,'Inputs X');
T = nntype.data('format',T,'Targets T');
Xi = nntype.data('format',Xi,'Input states Xi');
Ai = nntype.data('format',Ai,'Layer states Ai');
EW = nntype.nndata_pos('format',EW,'Error weights EW');
% Prepare Data
[net,data,tr,~,err] = nntraining.setup(net,mfilename,X,Xi,Ai,T,EW);
if ~isempty(err), nnerr.throw('Args',err), end
% Train
net = struct(net);
fcns = nn.subfcns(net);
[net,tr] = train_network(net,tr,data,fcns,param);
tr = nntraining.tr_clip(tr);
if isfield(tr,'perf')
tr.best_perf = tr.perf(tr.best_epoch+1);
end
if isfield(tr,'vperf')
tr.best_vperf = tr.vperf(tr.best_epoch+1);
end
if isfield(tr,'tperf')
tr.best_tperf = tr.tperf(tr.best_epoch+1);
end
net.trainFcn = oldTrainFcn;
net.trainParam = oldTrainParam;
out1 = network(net);
out2 = tr;
end

% BOILERPLATE_END
%% =======================================================

% TODO - MU => MU_START
% TODO - alternate parameter names (i.e. MU for MU_START)

function info = get_info()
info = nnfcnTraining(mfilename,'Levenberg-Marquardt',7.0,true,true,...
[ ...
nnetParamInfo('showWindow','Show Training Window Feedback','nntype.bool_scalar',true,...
'Display training window during training.'), ...
nnetParamInfo('showCommandLine','Show Command Line Feedback','nntype.bool_scalar',false,...
'Generate command line output during training.'), ...
nnetParamInfo('show','Command Line Frequency','nntype.strict_pos_int_inf_scalar',25,...
'Frequency to update command line.'), ...
...
nnetParamInfo('epochs','Maximum Epochs','nntype.pos_int_scalar',1000,...
'Maximum number of training iterations before training is stopped.'), ...
nnetParamInfo('time','Maximum Training Time','nntype.pos_inf_scalar',inf,...
'Maximum time in seconds before training is stopped.'), ...
...
nnetParamInfo('goal','Performance Goal','nntype.pos_scalar',0,...
'Performance goal.'), ...
nnetParamInfo('min_grad','Minimum Gradient','nntype.pos_scalar',1e-5,...
'Minimum performance gradient before training is stopped.'), ...
nnetParamInfo('max_fail','Maximum Validation Checks','nntype.strict_pos_int_scalar',6,...
'Maximum number of validation checks before training is stopped.'), ...
...
nnetParamInfo('mu','Mu','nntype.pos_scalar',0.001,...
'Mu.'), ...
nnetParamInfo('mu_dec','Mu Decrease Ratio','nntype.real_0_to_1',0.1,...
'Ratio to decrease mu.'), ...
nnetParamInfo('mu_inc','Mu Increase Ratio','nntype.over1',10,...
'Ratio to increase mu.'), ...
nnetParamInfo('mu_max','Maximum mu','nntype.strict_pos_scalar',1e10,...
'Maximum mu before training is stopped.'), ...
], ...
[ ...
nntraining.state_info('gradient','Gradient','continuous','log') ...
nntraining.state_info('mu','Mu','continuous','log') ...
nntraining.state_info('val_fail','Validation Checks','discrete','linear') ...
]);
end

function err = check_param(param)
err = '';
end

function [net,tr] = train_network(net,tr,data,fcns,param)

% Checks
if isempty(net.performFcn)
warning('nnet:trainlm:Performance',nnwarning.empty_performfcn_corrected);
net.performFcn = 'mse';
net.performParam = mse('defaultParam');
tr.performFcn = net.performFcn;
tr.performParam = net.performParam;
end
if isempty(strmatch(net.performFcn,{'sse','mse'},'exact'))
warning('nnet:trainlm:Performance',nnwarning.nonjacobian_performfcn_replaced);
net.performFcn = 'mse';
net.performParam = mse('defaultParam');
tr.performFcn = net.performFcn;
tr.performParam = net.performParam;
end

% Initialize
startTime = clock;
original_net = net;
[perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
[best,val_fail] = nntraining.validation_start(net,perf,vperf);
WB = getwb(net);
lengthWB = length(WB);
ii = sparse(1:lengthWB,1:lengthWB,ones(1,lengthWB));
mu = param.mu;

% Training Record
tr.best_epoch = 0;
tr.goal = param.goal;
tr.states = {'epoch','time','perf','vperf','tperf','mu','gradient','val_fail'};

% Status
status = ...
[ ...
nntraining.status('Epoch','iterations','linear','discrete',0,param.epochs,0), ...
nntraining.status('Time','seconds','linear','discrete',0,param.time,0), ...
nntraining.status('Performance','','log','continuous',perf,param.goal,perf) ...
nntraining.status('Gradient','','log','continuous',gradient,param.min_grad,gradient) ...
nntraining.status('Mu','','log','continuous',mu,param.mu_max,mu) ...
nntraining.status('Validation Checks','','linear','discrete',0,param.max_fail,0) ...
];
nn_train_feedback('start',net,status);

% Train
for epoch = 0:param.epochs

% Stopping Criteria
current_time = etime(clock,startTime);
[userStop,userCancel] = nntraintool('check');
if userStop, tr.stop = 'User stop.'; net = best.net;
elseif userCancel, tr.stop = 'User cancel.'; net = original_net;
elseif (perf <= param.goal), tr.stop = 'Performance goal met.'; net = best.net;
elseif (epoch == param.epochs), tr.stop = 'Maximum epoch reached.'; net = best.net;
elseif (current_time >= param.time), tr.stop = 'Maximum time elapsed.'; net = best.net;
elseif (gradient <= param.min_grad), tr.stop = 'Minimum gradient reached.'; net = best.net;
elseif (mu >= param.mu_max), tr.stop = 'Maximum MU reached.'; net = best.net;
elseif (val_fail >= param.max_fail), tr.stop = 'Validation stop.'; net = best.net;
end

% Feedback
tr = nntraining.tr_update(tr,[epoch current_time perf vperf tperf mu gradient val_fail]);
nn_train_feedback('update',net,status,tr,data, ...
[epoch,current_time,best.perf,gradient,mu,val_fail]);

% Stop
if ~isempty(tr.stop), break, end

% Levenberg Marquardt
while (mu <= param.mu_max)
% CHECK FOR SINGULAR MATRIX
[msgstr,msgid] = lastwarn;
lastwarn('MATLAB:nothing','MATLAB:nothing')
warnstate = warning('off','all');
dWB = -(jj+ii*mu) \ je;
[~,msgid1] = lastwarn;
flag_inv = isequal(msgid1,'MATLAB:nothing');
if flag_inv, lastwarn(msgstr,msgid); end;
warning(warnstate)
WB2 = WB + dWB;
net2 = setwb(net,WB2);
perf2 = nntraining.train_perf(net2,data,fcns);

% TODO - possible speed enhancement
% - retain intermediate variables for Memory Reduction = 1

if (perf2 < perf) && flag_inv
WB = WB2; net = net2;
mu = max(mu*param.mu_dec,1e-20);
break
end
mu = mu * param.mu_inc;
end

% Validation
[perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
[best,tr,val_fail] = nntraining.validation(best,tr,val_fail,net,perf,vperf,epoch);
end
end
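
As a quick usage reminder (not in the original answer), trainlm is normally invoked indirectly through train rather than called directly:

% Typical use: set trainlm as the network's training function and
% let train invoke it under the hood.
[x, t] = simplefit_dataset;             % toy dataset shipped with the toolbox
net = feedforwardnet(10, 'trainlm');    % 10 hidden neurons, LM training
net.trainParam.epochs = 500;            % override a default parameter
[net, tr] = train(net, x, t);           % trainlm runs via train
y = net(x);
fprintf('final MSE: %g\n', perform(net, t, y));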

