
matlab - Continuous RBM: Poor performance only for negative valued input data?

I am trying to port this Python implementation of a continuous RBM to Matlab: http://imonad.com/rbm/restricted-boltzmann-machine/

I generated two-dimensional training data shaped like a (noisy) circle and trained an RBM with 2 visible and 8 hidden units. To test the implementation, I fed uniformly distributed random data to the RBM and plotted the reconstructed data (the same procedure as used in the link above).
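
For reference, here is a minimal sketch of how such training data could be generated; the circle radius, noise level, and the `datRange` and `filename` variables used by the scripts below are assumptions, not taken from the original post:

% Hypothetical setup (a sketch only): a noisy circle centred at (0.5, 0.5)
nPoints = 500;                          % assumed number of training points
theta = 2 * pi * rand(nPoints, 1);      % random angles on the circle
r = 0.3 + 0.02 * randn(nPoints, 1);     % radius with Gaussian noise (assumed values)
dat = [0.5 + r .* cos(theta), 0.5 + r .* sin(theta)]; % points in (0,1)x(0,1)
datRange = [0 1];                       % output range used by sigFun below
filename = 'circle';                    % used when saving the trained model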

Now the part that confuses me: for training data in the range (0,1)x(0,1) I get very satisfying results, but for training data in the range (-0.5,0.5)x(-0.5,0.5) or (-1,0)x(-1,0) the RBM reconstructs only the data in the top-right corner of the circle. I don't understand what causes this. Is it just a mistake in my implementation that I am not seeing?

Some plots: the blue dots are the training data, the red dots are the reconstructed data.

Here is my implementation of the RBM.

Training:

maxepoch = 300;
ksteps = 10;
sigma = 0.2; % cd standard deviation
learnW = 0.5; % learning rate W
learnA = 0.5; % learning rate A
nVis = 2; % number of visible units
nHid = 8; % number of hidden units
nDat = size(dat, 1);% number of training data points
cost = 0.00001; % cost
moment = 0.9; % momentum
W = randn(nVis+1, nHid+1) / 10; % weights
dW = randn(nVis+1, nHid+1) / 1000; % change of weights
sVis = zeros(1, nVis+1); % state of visible neurons
sVis(1, end) = 1.0; % bias
sVis0 = zeros(1, nVis+1); % initial state of visible neurons
sVis0(1, end) = 1.0; % bias
sHid = zeros(1, nHid+1); % state of hidden neurons
sHid(1, end) = 1.0; % bias
aVis = 0.1*ones(1, nVis+1);% A visible
aHid = ones(1, nHid+1); % A hidden
err = zeros(1, maxepoch);
e = zeros(1, maxepoch);
for epoch = 1:maxepoch
    wPos = zeros(nVis+1, nHid+1);
    wNeg = zeros(nVis+1, nHid+1);
    aPos = zeros(1, nHid+1);
    aNeg = zeros(1, nHid+1);
    for point = 1:nDat
        sVis(1:nVis) = dat(point, :);
        sVis0(1:nVis) = sVis(1:nVis); % initial sVis
        % positive phase
        activHid;
        wPos = wPos + sVis' * sHid;
        aPos = aPos + sHid .* sHid;
        % negative phase
        activVis;
        activHid;
        for k = 1:ksteps
            activVis;
            activHid;
        end
        tmp = sVis' * sHid;
        wNeg = wNeg + tmp;
        aNeg = aNeg + sHid .* sHid;
        delta = sVis0(1:nVis) - sVis(1:nVis);
        err(epoch) = err(epoch) + sum(delta .* delta);
        e(epoch) = e(epoch) - sum(sum(W' * tmp));
    end
    dW = dW*moment + learnW * ((wPos - wNeg) / numel(dat)) - cost * W;
    W = W + dW;
    aHid = aHid + learnA * (aPos - aNeg) / (numel(dat) * (aHid .* aHid));
    % error
    err(epoch) = err(epoch) / (nVis * numel(dat));
    e(epoch) = e(epoch) / numel(dat);
    disp(['epoch: ' num2str(epoch) ' err: ' num2str(err(epoch)) ...
          ' ksteps: ' num2str(ksteps)]);
end
save(['rbm_' filename '.mat'], 'W', 'err', 'aVis', 'aHid');

activHid.m:

sHid = (sVis * W) + randn(1, nHid+1);
sHid = sigFun(aHid .* sHid, datRange);
sHid(end) = 1.; % bias

activVis.m:

sVis = (W * sHid')' + randn(1, nVis+1);
sVis = sigFun(aVis .* sVis, datRange);
sVis(end) = 1.; % bias

sigFun.m:

function [sig] = sigFun(X, datRange)
a = ones(size(X)) * datRange(1);
b = ones(size(X)) * (datRange(2) - datRange(1));
c = ones(size(X)) + exp(-X);
sig = a + (b ./ c);
end
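
Note that sigFun is a logistic function rescaled to the interval (datRange(1), datRange(2)), so its output can never leave that range. A quick check, assuming datRange = [0 1]:

% sigFun outputs always lie inside datRange, e.g. with datRange = [0 1]:
sigFun([-10 -1 0 1 10], [0 1])
% ans is approximately 0.0000  0.2689  0.5000  0.7311  1.0000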

Reconstruction:

nSamples = 2000;
ksteps = 10;
nVis = 2;
nHid = 8;
sVis = zeros(1, nVis+1); % state of visible neurons
sVis(1, end) = 1.0; % bias
sHid = zeros(1, nHid+1); % state of hidden neurons
sHid(1, end) = 1.0; % bias
input = rand(nSamples, 2);
output = zeros(nSamples, 2);
for sample = 1:nSamples
    sVis(1:nVis) = input(sample, :);
    for k = 1:ksteps
        activHid;
        activVis;
    end
    output(sample, :) = sVis(1:nVis);
end
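
A minimal plotting sketch for the figures described above (blue training points, red reconstructions), using the variable names from the scripts above:

% Plot training data in blue and reconstructed data in red
figure; hold on;
plot(dat(:, 1), dat(:, 2), 'b.');        % training data
plot(output(:, 1), output(:, 2), 'r.');  % reconstructed data
axis equal; hold off;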

Best answer

The RBM was originally designed to work only with binary data, but it also handles data between 0 and 1. This is part of the algorithm. Further reading
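
Following that answer, one possible workaround (a sketch, not from the original post) is to rescale the training data into [0, 1] before training and map the reconstructions back afterwards; outputScaled below is a hypothetical name for the reconstructions obtained on the rescaled data:

% Linearly map the training data into [0, 1] column-wise (sketch based on the answer above)
lo = min(dat);                              % per-column minimum
hi = max(dat);                              % per-column maximum
datScaled = (dat - repmat(lo, size(dat, 1), 1)) ./ repmat(hi - lo, size(dat, 1), 1);
% ... train on datScaled and reconstruct into outputScaled (hypothetical) ...
outputOrig = repmat(lo, size(outputScaled, 1), 1) + ...
    outputScaled .* repmat(hi - lo, size(outputScaled, 1), 1);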

Regarding matlab - Continuous RBM: Poor performance only for negative valued input data?, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/17395751/
