
python - Why is Sklearn Lasso regression orders of magnitude worse than Ridge regression?


I am currently implementing Ridge and Lasso regression using the sklearn.linear_model module.

However, Lasso regression seems to do about 3 orders of magnitude worse on the same dataset!

I'm not sure what I'm doing wrong, since mathematically this shouldn't happen. Here's my code:

import numpy as np
# note: in newer scikit-learn, train_test_split lives in sklearn.model_selection
from sklearn import linear_model, cross_validation


def ridge_regression(X_train, Y_train, X_test, Y_test, model_alpha):
    # Fit a Ridge model and return the sum of squared errors on the test set
    clf = linear_model.Ridge(alpha=model_alpha)
    clf.fit(X_train, Y_train)
    predictions = clf.predict(X_test)
    loss = np.sum((predictions - Y_test) ** 2)
    return loss


def lasso_regression(X_train, Y_train, X_test, Y_test, model_alpha):
    # Fit a Lasso model and return the sum of squared errors on the test set
    clf = linear_model.Lasso(alpha=model_alpha)
    clf.fit(X_train, Y_train)
    predictions = clf.predict(X_test)
    loss = np.sum((predictions - Y_test) ** 2)
    return loss


X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(
    X, Y, test_size=0.1, random_state=0)

for alpha in [0, 0.01, 0.1, 0.5, 1, 2, 5, 10, 100, 1000, 10000]:
    print("Lasso loss for alpha=" + str(alpha) + ": "
          + str(lasso_regression(X_train, Y_train, X_test, Y_test, alpha)))

for alpha in [1, 1.25, 1.5, 1.75, 2, 5, 10, 100, 1000, 10000, 100000, 1000000]:
    print("Ridge loss for alpha=" + str(alpha) + ": "
          + str(ridge_regression(X_train, Y_train, X_test, Y_test, alpha)))

Here is my output:

Lasso loss for alpha=0: 20575.7121727
Lasso loss for alpha=0.01: 19762.8763969
Lasso loss for alpha=0.1: 17656.9926418
Lasso loss for alpha=0.5: 15699.2014387
Lasso loss for alpha=1: 15619.9772649
Lasso loss for alpha=2: 15490.0433166
Lasso loss for alpha=5: 15328.4303197
Lasso loss for alpha=10: 15328.4303197
Lasso loss for alpha=100: 15328.4303197
Lasso loss for alpha=1000: 15328.4303197
Lasso loss for alpha=10000: 15328.4303197
Ridge loss for alpha=1: 61.6235890425
Ridge loss for alpha=1.25: 61.6360790934
Ridge loss for alpha=1.5: 61.6496312133
Ridge loss for alpha=1.75: 61.6636076713
Ridge loss for alpha=2: 61.6776331539
Ridge loss for alpha=5: 61.8206621527
Ridge loss for alpha=10: 61.9883144732
Ridge loss for alpha=100: 63.9106882674
Ridge loss for alpha=1000: 69.3266510866
Ridge loss for alpha=10000: 82.0056669678
Ridge loss for alpha=100000: 88.4479064159
Ridge loss for alpha=1000000: 91.7235727543

Any idea why?

Thanks!

Best Answer

Interesting problem. I can confirm that this is not an issue with the algorithms' implementation, but rather the correct response to your input.

Here's one thought: from your description, I believe you are not normalizing the data. That can lead to instability, since your features presumably have significantly different magnitudes and variances. Lasso is more "all-or-nothing" than ridge (you've probably noticed that it selects many more zero coefficients than ridge does), so that instability gets amplified.
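
To see that "all-or-nothing" behavior directly, you can count how many coefficients each model sets exactly to zero. The sketch below is illustrative only and reuses the X_train/Y_train from the snippet in the question; the fact that your Lasso loss is identical for every alpha >= 5 suggests that by that point all coefficients have been shrunk to zero and the model is just predicting a constant.

# Illustrative sketch (assumes the X_train, Y_train from the question's snippet):
# count how many coefficients Lasso vs. Ridge set exactly to zero.
import numpy as np
from sklearn import linear_model

for alpha in [0.01, 0.1, 1, 5, 10]:
    lasso = linear_model.Lasso(alpha=alpha).fit(X_train, Y_train)
    ridge = linear_model.Ridge(alpha=alpha).fit(X_train, Y_train)
    print("alpha=%g  lasso zeros: %d/%d  ridge zeros: %d/%d"
          % (alpha,
             np.sum(lasso.coef_ == 0), lasso.coef_.size,
             np.sum(ridge.coef_ == 0), ridge.coef_.size))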

Try normalizing your data and see whether you like your results better.
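
As a minimal sketch of what that could look like (one option among several), you could standardize the features with StandardScaler inside a Pipeline, so the scaling is learned on the training data only; the variable names are taken from the question's snippet:

# Minimal sketch: standardize features before fitting, assuming the
# X_train, Y_train, X_test, Y_test from the question's snippet.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn import linear_model

for alpha in [0.01, 0.1, 1, 5]:
    model = make_pipeline(StandardScaler(), linear_model.Lasso(alpha=alpha))
    model.fit(X_train, Y_train)
    loss = np.sum((model.predict(X_test) - Y_test) ** 2)
    print("Scaled Lasso loss for alpha=%g: %s" % (alpha, loss))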

Another thought: this may well be intentional on the part of the Berkeley instructors, to highlight the fundamentally different behavior of ridge and lasso.

Regarding "python - Why is Sklearn Lasso regression orders of magnitude worse than Ridge regression?", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/35714772/
