
python - Kernel Ridge and simple Ridge with polynomial features


What is the difference between Kernel Ridge with a polynomial kernel (from sklearn.kernel_ridge) and using PolynomialFeatures + Ridge (from sklearn.linear_model)?

Best answer

The difference is in the feature computation. PolynomialFeatures explicitly computes polynomial combinations between the input features up to the desired degree, while KernelRidge(kernel='poly') only considers a polynomial kernel (a polynomial representation of feature dot products), which is expressed in terms of the original features. This document provides a good overview in general.
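
To make this concrete, here is a minimal sketch (not part of the original answer) that compares the kernel matrix KernelRidge(kernel='poly') works with against the Gram matrix of the explicitly expanded features; sklearn.metrics.pairwise.polynomial_kernel is used to compute the former:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics.pairwise import polynomial_kernel

X = np.linspace(0, 2, 5).reshape(-1, 1)

# Kernel used internally by KernelRidge(kernel='poly', degree=2, gamma=1, coef0=1):
# K[i, j] = (1 * X[i] @ X[j] + 1) ** 2
K_implicit = polynomial_kernel(X, degree=2, gamma=1, coef0=1)

# Explicit expansion produced by PolynomialFeatures: columns [1, x, x**2].
Xp = PolynomialFeatures(degree=2, include_bias=True).fit_transform(X)
K_explicit = Xp @ Xp.T

# The two Gram matrices differ (by the cross term explained below), so Ridge on Xp
# and KernelRidge on X do not solve exactly the same problem.
print(np.abs(K_implicit - K_explicit).max())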

Regarding the computation, we can inspect the relevant parts of the source code:

The computation of the (training) kernel follows a similar procedure: compare Ridge and KernelRidge. The main difference is that Ridge explicitly considers the dot products between whatever (polynomial) features it has received, while for KernelRidge these polynomial features are generated implicitly during the computation. For example, consider a single feature x; with gamma = coef0 = 1, KernelRidge computes (x**2 + 1)**2 == x**4 + 2*x**2 + 1. If you now consider PolynomialFeatures, it provides the features x**2, x, 1 and the corresponding dot product is x**4 + x**2 + 1. Hence the dot products differ by a term x**2. Of course we could rescale the poly features to have x**2, sqrt(2)*x, 1, while with KernelRidge(kernel='poly') we don't have this kind of flexibility. On the other hand, the difference probably doesn't matter (in most cases).
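
A quick numeric check of the dot products described above (again just a sketch, not part of the original answer), for a single sample value x:

import numpy as np

x = 1.7  # an arbitrary single-feature sample

# Polynomial kernel of x with itself, gamma = coef0 = 1, degree = 2:
k_poly = (x * x + 1) ** 2                            # == x**4 + 2*x**2 + 1

# Dot product of the PolynomialFeatures expansion [1, x, x**2] with itself:
k_plain = np.dot([1, x, x**2], [1, x, x**2])         # == x**4 + x**2 + 1

# Dot product of the rescaled expansion [1, sqrt(2)*x, x**2] with itself:
k_scaled = np.dot([1, np.sqrt(2)*x, x**2], [1, np.sqrt(2)*x, x**2])

print(k_poly - k_plain)    # == x**2, the extra cross term
print(k_poly - k_scaled)   # ~0, the rescaled features reproduce the kernel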

Note that the computation of the dual coefficients is performed in a similar manner as well: compare Ridge and KernelRidge. Finally, KernelRidge keeps the dual coefficients, while Ridge computes the weights directly.
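
To see this relationship explicitly, here is a small sketch (my own illustration, assuming a small non-zero alpha so that both linear systems stay well conditioned): mapping KernelRidge's dual_coef_ back through the rescaled feature matrix recovers the primal weights that Ridge stores in coef_.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
x = np.linspace(0, 2, 50).reshape(-1, 1)
y = x**2 + 4*x + rng.normal(scale=0.2, size=(50, 1))

# Rescaled explicit features [1, sqrt(2)*x, x**2] so that xp @ xp.T equals the
# polynomial kernel with gamma = coef0 = 1 and degree = 2.
xp = PolynomialFeatures(degree=2, include_bias=True).fit_transform(x)
xp[:, 1] *= np.sqrt(2)

alpha = 1e-3  # small but non-zero regularization (assumption for this sketch)
ridge = Ridge(alpha=alpha, fit_intercept=False).fit(xp, y)
krr = KernelRidge(alpha=alpha, kernel='poly', degree=2, gamma=1, coef0=1).fit(x, y)

# Ridge stores the primal weights directly, KernelRidge stores dual coefficients.
# The implicit primal weights of KernelRidge are w = Phi(x).T @ dual_coef_.
w_implicit = xp.T @ krr.dual_coef_
print(np.abs(w_implicit.ravel() - ridge.coef_.ravel()).max())  # ~0 up to numerical precision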

Let's look at a small example:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.utils.extmath import safe_sparse_dot

np.random.seed(20181001)

a, b = 1, 4
x = np.linspace(0, 2, 100).reshape(-1, 1)
y = a*x**2 + b*x + np.random.normal(scale=0.2, size=(100,1))

poly = PolynomialFeatures(degree=2, include_bias=True)
xp = poly.fit_transform(x)
print('We can see that the new features are now [1, x, x**2]:')
print(f'xp.shape: {xp.shape}')
print(f'xp[-5:]:\n{xp[-5:]}', end='\n\n')
# Scale the `x` columns so we obtain similar results.
xp[:, 1] *= np.sqrt(2)

ridge = Ridge(alpha=0, fit_intercept=False, solver='cholesky')
ridge.fit(xp, y)

krr = KernelRidge(alpha=0, kernel='poly', degree=2, gamma=1, coef0=1)
krr.fit(x, y)

# Let's try to reproduce some of the involved steps for the different models.
ridge_K = safe_sparse_dot(xp, xp.T)
krr_K = krr._get_kernel(x)
print('The computed kernels are (almost) similar:')
print(f'Max. kernel difference: {np.abs(ridge_K - krr_K).max()}', end='\n\n')
print('Predictions slightly differ though:')
print(f'Max. difference: {np.abs(krr.predict(x) - ridge.predict(xp)).max()}', end='\n\n')

# Let's see if the fit changes if we provide `x**2, x, 1` instead of `x**2, sqrt(2)*x, 1`.
xp_2 = xp.copy()
xp_2[:, 1] /= np.sqrt(2)
ridge_2 = Ridge(alpha=0, fit_intercept=False, solver='cholesky')
ridge_2.fit(xp_2, y)
print('Using features "[x**2, x, 1]" instead of "[x**2, sqrt(2)*x, 1]" predictions are (almost) the same:')
print(f'Max. difference: {np.abs(ridge_2.predict(xp_2) - ridge.predict(xp)).max()}', end='\n\n')
print('Interpretability of the coefficients changes though:')
print(f'ridge.coef_[1:]: {ridge.coef_[0, 1:]}, ridge_2.coef_[1:]: {ridge_2.coef_[0, 1:]}')
print(f'ridge.coef_[1]*sqrt(2): {ridge.coef_[0, 1]*np.sqrt(2)}')
print(f'Compare with: a, b = ({a}, {b})')

plt.plot(x.ravel(), y.ravel(), 'o', color='skyblue', label='Data')
plt.plot(x.ravel(), ridge.predict(xp).ravel(), '-', label='Ridge', lw=3)
plt.plot(x.ravel(), krr.predict(x).ravel(), '--', label='KRR', lw=3)
plt.grid()
plt.legend()
plt.show()

From this we obtain:

We can see that the new features are now [1, x, x**2]:
xp.shape: (100, 3)
xp[-5:]:
[[1.         1.91919192 3.68329762]
 [1.         1.93939394 3.76124885]
 [1.         1.95959596 3.84001632]
 [1.         1.97979798 3.91960004]
 [1.         2.         4.        ]]

The computed kernels are (almost) similar:
Max. kernel difference: 1.0658141036401503e-14

Predictions slightly differ though:
Max. difference: 0.04244651134471766

Using features "[x**2, x, 1]" instead of "[x**2, sqrt(2)*x, 1]" predictions are (almost) the same:
Max. difference: 7.15642822779472e-14

Interpretability of the coefficients changes though:
ridge.coef_[1:]: [2.73232239 1.08868872], ridge_2.coef_[1:]: [3.86408737 1.08868872]
ridge.coef_[1]*sqrt(2): 3.86408737392841
Compare with: a, b = (1, 4)

Example plot

Regarding python - Kernel Ridge and simple Ridge with polynomial features, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52573224/
