
python - Bad output from scikit-learn Gaussian process regression


I have a set of data (X, y), where X is my input (two-dimensional) and y is my output (one-dimensional), and each pair (X, y) has a corresponding non-uniform noise term. Here is a working example in which I apply Gaussian process regression:

import numpy as np 
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel as C

lenX = 20
X1min = 0.
X1max = 1.
X2min = 0.
X2max = 2.
X1 = np.linspace(X1min,X1max,lenX)
X2 = np.linspace(X2min,X2max,lenX)
time_spacing = X2[1] - X2[0]

X = []
i = 0
while i < lenX:
    j = 0
    while j < lenX:
        X.append([X1[i],X2[j]])
        j = j + 1
    i = i + 1

X = np.array(X)

def fun_y(X):
    y = 5.*((np.sin(X[:,0]))**2.)*(np.e**(-(X[:,1]**2.)))
    y[y < 0.001] = 0.0
    return y

y = fun_y(X)
noise = 0.1*y #0.2/y + 0.2#*np.linspace(1.0,0.1,len(X))

len_x1 = 10
len_x2 = 100
x1_min = X1min
x2_min = X2min
x1_max = X1max
x2_max = X2max
x1 = np.linspace(x1_min, x1_max, len_x1)
x2 = np.linspace(x2_min, x2_max, len_x2)

i = 0
inputs_x = []
while i < len(x1):
    j = 0
    while j < len(x2):
        inputs_x.append([x1[i],x2[j]])
        j = j + 1
    i = i + 1
inputs_x_array = np.array(inputs_x) #simply a set of inputs I want to predict at

kernel = C(1.0, (1e-10, 1000)) * RBF(length_scale = [1., 1.], length_scale_bounds=[(1e-5, 1e5),(1e-7, 1e7)]) \
    + WhiteKernel(noise_level=1, noise_level_bounds=(1e-10, 1e10))

gp = GaussianProcessRegressor(kernel=kernel, alpha=noise ** 2, n_restarts_optimizer=100)

# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y.reshape(-1,1)) #removing reshape results in a different error

y_pred_index, y_pred_sigma_index = gp.predict(inputs_x_array, return_std=True)

Despite trying several kernel variants, I still observe a convergence error when the optimizer tries to find the best fit of the hyperparameters to the data:

/.local/lib/python3.6/site-packages/sklearn/gaussian_process/gpr.py:481: ConvergenceWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'grad': array([ 3.89194489e-03,  9.32690036e-03, -0.00000000e+00,  6.42836597e+01]), 'task': b'ABNORMAL_TERMINATION_IN_LNSRCH', 'funcalls': 128, 'nit': 26, 'warnflag': 2}
ConvergenceWarning)

I have tried adding/multiplying RBF kernels, changing the bounds of the hyperparameters, and including a WhiteKernel term, but none of my approaches seem to work. Any ideas on how to avoid this error and choose a good kernel that fits the data?
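For example, combinations along these lines (the specific length scales and noise levels here are only illustrative placeholders, not the exact values I used):

from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel as C

# Sum of two anisotropic RBF kernels plus a noise term (values are placeholders)
sum_kernel = C(1.0, (1e-3, 1e3)) * RBF(length_scale=[1., 1.]) \
    + C(1.0, (1e-3, 1e3)) * RBF(length_scale=[0.1, 0.1]) \
    + WhiteKernel(noise_level=1e-3)

# Product of two RBF kernels plus a noise term (values are placeholders)
prod_kernel = C(1.0, (1e-3, 1e3)) * RBF(length_scale=[1., 1.]) \
    * RBF(length_scale=[10., 10.]) \
    + WhiteKernel(noise_level=1e-3)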

Best Answer

I'm not sure whether this is a good kernel for your data, but simply by restricting the hyperparameter bounds I did manage to get rid of the ConvergenceWarning:

kernel = C(1.0, (1e-3, 1e3)) * RBF(length_scale = [.1, .1], length_scale_bounds=[(1e-2, 1e2),(1e-2, 1e2)]) \
+ WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e-4))
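A minimal sketch of refitting with this restricted kernel, assuming the X, y, noise, and inputs_x_array arrays defined in the question are still in scope and the regressor settings are carried over unchanged:

from sklearn.gaussian_process import GaussianProcessRegressor

# Refit with the tightened hyperparameter bounds; X, y, noise, inputs_x_array come from the question
gp = GaussianProcessRegressor(kernel=kernel, alpha=noise ** 2, n_restarts_optimizer=100)
gp.fit(X, y.reshape(-1, 1))
y_pred, y_pred_sigma = gp.predict(inputs_x_array, return_std=True)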

Querying gp.kernel_.get_params(deep=True) yields

{'k1': 1.51**2 * RBF(length_scale=[0.843, 1.15]),
'k1__k1': 1.51**2,
'k1__k1__constant_value': 2.275727769273166,
'k1__k1__constant_value_bounds': (0.001, 1000.0),
'k1__k2': RBF(length_scale=[0.843, 1.15]),
'k1__k2__length_scale': array([0.84331346, 1.15091614]),
'k1__k2__length_scale_bounds': [(0.01, 100.0), (0.01, 100.0)],
'k2': WhiteKernel(noise_level=1.4e-08),
'k2__noise_level': 1.403204609548082e-08,
'k2__noise_level_bounds': (1e-10, 0.0001)}
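The optimized kernel can also be inspected more compactly through the standard scikit-learn attributes; a small sketch:

# Compact summary of the fitted kernel and its log-marginal likelihood
print(gp.kernel_)                                    # optimized kernel after fitting
print(gp.log_marginal_likelihood(gp.kernel_.theta))  # LML at the optimized hyperparameters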

See also this remark.

Regarding "python - Bad output from scikit-learn Gaussian process regression", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55860023/
