
python - Using xgboost in a BaggingRegressor


I need to run xgboost inside a BaggingRegressor. I use xgboost like this:

import xgboost

D_train = xgboost.DMatrix(X_train, lab_train)
D_val = xgboost.DMatrix(X_train[test_index], lab_train[test_index])
D_pred = xgboost.DMatrix(X_train[test_index])
D_test = xgboost.DMatrix(X_test)
D_ttest = xgboost.DMatrix(ttest)


xgb_params = dict()
xgb_params["objective"] = "reg:linear"
xgb_params["eta"] = 0.01
xgb_params["min_child_weight"] = 6
xgb_params["subsample"] = 0.7
xgb_params["colsample_bytree"] = 0.6
xgb_params["scale_pos_weight"] = 0.8
xgb_params["silent"] = 1
xgb_params["max_depth"] = 10
xgb_params["max_delta_step"]=2
watchlist = [(D_train, 'train')]
xg = xgboost.Booster()

print('1000')
model = xgboost.train(params=xgb_params, dtrain=D_train, num_boost_round=1000,
evals=watchlist, verbose_eval=1, early_stopping_rounds=20)

y_pred1 = model.predict(D_ttest)

How can I use all of the same parameters in a BaggingRegressor?

If I do this

gdr = BaggingRegressor(base_estimator= xgboost.train( params=xgb_params,
dtrain=D_train,
num_boost_round=3000,
evals=watchlist,
verbose_eval=1,
early_stopping_rounds=20))

then the xgboost training starts right away, and the code that follows

gdr_model = gdr
print(gdr_model)
gdr_model.fit(X_train, lab_train)
train_pred = gdr_model.predict(X_test)

print('mse from log: ', mean_squared_error(lab_train, train_pred))

train_pred = gdr_model.predict(ttest)

makes no sense, or am I wrong? Tell me how to fix this.

Best Answer

Xgboost has a Sklearn wrapper. Try the following template!

import xgboost
from sklearn.datasets import load_boston
from xgboost.sklearn import XGBRegressor
from sklearn.ensemble import BaggingRegressor

X, y = load_boston(return_X_y=True)

reg = BaggingRegressor(base_estimator=XGBRegressor())

reg.fit(X, y)
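
To carry over the same hyperparameters from the question, a minimal sketch like the one below (not part of the accepted answer; make_regression merely stands in for the question's X_train / lab_train) maps each entry of xgb_params onto the corresponding XGBRegressor constructor argument: eta becomes learning_rate and num_boost_round becomes n_estimators. Note that the watchlist / early_stopping_rounds setup cannot be forwarded through BaggingRegressor.fit, so the number of boosting rounds has to be fixed up front.

# A sketch (not from the accepted answer): the question's xgb_params mapped onto
# the sklearn wrapper so that BaggingRegressor can clone and refit it on each bag.
# make_regression is only a stand-in for the question's X_train / lab_train.
import xgboost
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor

X_train, lab_train = make_regression(n_samples=500, n_features=20, random_state=0)

base = xgboost.XGBRegressor(
    objective="reg:linear",   # "reg:squarederror" on recent xgboost versions
    learning_rate=0.01,       # eta
    min_child_weight=6,
    subsample=0.7,
    colsample_bytree=0.6,
    scale_pos_weight=0.8,
    max_depth=10,
    max_delta_step=2,
    n_estimators=1000,        # replaces num_boost_round
)

# On scikit-learn >= 1.2 the keyword is estimator= instead of base_estimator=.
gdr_model = BaggingRegressor(base_estimator=base)
gdr_model.fit(X_train, lab_train)
train_pred = gdr_model.predict(X_train)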

Regarding python - Using xgboost in a BaggingRegressor, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55934293/
