
python - Poisson regression in xgboost fails for low frequencies

Reposted · Author: 太空狗 · Updated: 2023-10-30 02:54:13

I'm trying to fit a boosted Poisson regression model in xgboost, but I'm finding the results are biased at low frequencies. To illustrate, here is some minimal Python code that I believe reproduces the issue:

import numpy as np
import pandas as pd
import xgboost as xgb

def get_preds(mult):
    # generate toy dataset for illustration
    # 4 observations with linearly increasing frequencies
    # the frequencies are scaled by `mult`
    dmat = xgb.DMatrix(data=np.array([[0, 0], [0, 1], [1, 0], [1, 1]]),
                       label=[i*mult for i in [1, 2, 3, 4]],
                       weight=[1000, 1000, 1000, 1000])

    # train a poisson booster on the toy data
    bst = xgb.train(
        params={"objective": "count:poisson"},
        dtrain=dmat,
        num_boost_round=100000,
        early_stopping_rounds=5,
        evals=[(dmat, "train")],
        verbose_eval=False)

    # return fitted frequencies after reversing scaling
    return bst.predict(dmat)/mult

# test multipliers in the range [10**(-8), 10**1]
# display fitted frequencies
mults = [10**i for i in range(-8, 1)]
df = pd.DataFrame(np.round(np.vstack([get_preds(m) for m in mults]), 0))
df.index = mults
df.columns = ["(0, 0)", "(0, 1)", "(1, 0)", "(1, 1)"]
df

# --- result ---
#               (0, 0)   (0, 1)   (1, 0)   (1, 1)
# 1.000000e-08  11598.0  11598.0  11598.0  11598.0
# 1.000000e-07   1161.0   1161.0   1161.0   1161.0
# 1.000000e-06    118.0    118.0    118.0    118.0
# 1.000000e-05     12.0     12.0     12.0     12.0
# 1.000000e-04      2.0      2.0      3.0      3.0
# 1.000000e-03      1.0      2.0      3.0      4.0
# 1.000000e-02      1.0      2.0      3.0      4.0
# 1.000000e-01      1.0      2.0      3.0      4.0
# 1.000000e+00      1.0      2.0      3.0      4.0

Notice that at low frequencies the predictions seem to explode. This may have something to do with the Poisson lambda * weight dropping below 1 (and indeed, increasing the weight above 1000 does shift the "explosion" to lower frequencies), but I would still expect the predictions to approach the mean training frequency (2.5). Also (not shown in the example above), reducing eta seems to increase the amount of bias in the predictions.
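For concreteness, the scale at which the table above starts to degrade lines up with the point where mean label × weight drops below 1. A quick check (my own illustration, not code from the original post):

```python
import numpy as np

# mean label times weight at each tested scale; the fitted values
# in the result table start to degrade once this drops below 1
mults = np.array([10.0**i for i in range(-8, 1)])
mean_label = 2.5 * mults          # mean of labels [1, 2, 3, 4] * mult
print(mean_label * 1000.0)        # weight is 1000 for every row
```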

What could be causing this? Is there a parameter available that would mitigate the effect?

Accepted answer

After some digging, I found a solution. Documenting it here in case anyone else runs into the same issue. It turns out I needed to add an offset term equal to the (natural) log of the mean frequency. If that isn't immediately obvious, it's because the initial prediction starts at a frequency of 0.5, and many boosting iterations are needed just to rescale the predictions down to the mean frequency.

See the code below for an updated version of the toy example. As I suggested in the original question, the predictions now approach the mean frequency (2.5) at the lower scales.

import numpy as np
import pandas as pd
import xgboost as xgb

def get_preds(mult):
    # generate toy dataset for illustration
    # 4 observations with linearly increasing frequencies
    # the frequencies are scaled by `mult`
    dmat = xgb.DMatrix(data=np.array([[0, 0], [0, 1], [1, 0], [1, 1]]),
                       label=[i*mult for i in [1, 2, 3, 4]],
                       weight=[1000, 1000, 1000, 1000])

    ## adding an offset term equal to the log of the mean frequency
    offset = np.log(np.mean([i*mult for i in [1, 2, 3, 4]]))
    dmat.set_base_margin(np.repeat(offset, 4))

    # train a poisson booster on the toy data
    bst = xgb.train(
        params={"objective": "count:poisson"},
        dtrain=dmat,
        num_boost_round=100000,
        early_stopping_rounds=5,
        evals=[(dmat, "train")],
        verbose_eval=False)

    # return fitted frequencies after reversing scaling
    return bst.predict(dmat)/mult

# test multipliers in the range [10**(-8), 10**1]
# display fitted frequencies
mults = [10**i for i in range(-8, 1)]
## round to 1 decimal point to show the result approaches 2.5
df = pd.DataFrame(np.round(np.vstack([get_preds(m) for m in mults]), 1))
df.index = mults
df.columns = ["(0, 0)", "(0, 1)", "(1, 0)", "(1, 1)"]
df

# --- result ---
#               (0, 0)  (0, 1)  (1, 0)  (1, 1)
# 1.000000e-08     2.5     2.5     2.5     2.5
# 1.000000e-07     2.5     2.5     2.5     2.5
# 1.000000e-06     2.5     2.5     2.5     2.5
# 1.000000e-05     2.5     2.5     2.5     2.5
# 1.000000e-04     2.4     2.5     2.5     2.6
# 1.000000e-03     1.0     2.0     3.0     4.0
# 1.000000e-02     1.0     2.0     3.0     4.0
# 1.000000e-01     1.0     2.0     3.0     4.0
# 1.000000e+00     1.0     2.0     3.0     4.0
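To see why the starting point matters, here is a minimal numpy-only sketch (my own simplification, not xgboost's actual internals) of an intercept-only Poisson booster: damped Newton steps on the log-mean margin, with each step clipped the way xgboost caps Poisson leaf updates (`max_delta_step`, default 0.7). Starting from the default `base_score` of 0.5 takes many rounds to crawl down to a tiny mean frequency, while starting from log(mean) converges almost immediately:

```python
import numpy as np

def fit_intercept(y, w, f0, eta=0.3, max_delta_step=0.7,
                  max_rounds=10000, tol=1e-12):
    # fit a single global log-mean margin f by damped, clipped Newton steps
    f = f0
    for i in range(1, max_rounds + 1):
        mu = np.exp(f)
        g = np.sum(w * (mu - y))      # gradient of the Poisson loss
        h = np.sum(w * mu)            # hessian
        step = float(np.clip(-eta * g / h, -max_delta_step, max_delta_step))
        f += step
        if abs(step) < tol:
            return np.exp(f), i       # fitted mean, rounds used
    return np.exp(f), max_rounds

mult = 1e-8
y = np.array([1.0, 2.0, 3.0, 4.0]) * mult
w = np.full(4, 1000.0)
target = np.average(y, weights=w)     # mean frequency, 2.5e-8

# start from xgboost's default base_score of 0.5 vs. from log(mean)
mean_default, iters_default = fit_intercept(y, w, f0=np.log(0.5))
mean_offset, iters_offset = fit_intercept(y, w, f0=np.log(target))
print(iters_default, iters_offset)    # the offset start needs far fewer rounds
```

Under this simplification, the default start has to descend roughly |log(0.5) − log(2.5e-8)| ≈ 17 units of margin in increments bounded by the clipped step, which is exactly the slow rescaling the answer describes; early stopping can then halt training before it finishes.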

Regarding "python - Poisson regression in xgboost fails for low frequencies", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46698872/
