I am using xgboost with a custom evaluation function, and I want to implement early stopping with a limit of 150 rounds.
I get 4 evaluation metrics instead of the expected 2, and I don't know how to interpret them. In addition, I am not sure how to activate early stopping with a round limit (e.g., 150 rounds).
A reproducible example:
import numpy as np

def F1_eval_gen(preds, labels):
    t = np.arange(0, 1, 0.005)
    f = np.repeat(0, 200)
    results = np.vstack([t, f]).T
    # assuming labels only contain 0's and 1's
    n_pos_examples = sum(labels)
    if n_pos_examples == 0:
        n_pos_examples = 1
    for i in range(200):
        pred_indexes = (preds >= results[i, 0])
        TP = sum(labels[pred_indexes])
        FP = len(labels[pred_indexes]) - TP
        precision = 0
        recall = TP / n_pos_examples
        if (FP + TP) > 0:
            precision = TP / (FP + TP)
        if (precision + recall) > 0:
            F1 = 2 * precision * recall / (precision + recall)
        else:
            F1 = 0
        results[i, 1] = F1
    return max(results[:, 1])

def F1_eval(preds, dtrain):
    res = F1_eval_gen(preds, dtrain.get_label())
    return 'f1_err', 1 - res
import xgboost as xgb
from sklearn import datasets
from sklearn.model_selection import train_test_split

skl_data = datasets.load_breast_cancer()
X = skl_data.data
y = skl_data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

scale_pos_weight = sum(y_train == 0) / sum(y_train == 1)
base_score = sum(y_train == 1) / len(y_train)
max_depth = 6
learning_rate = 0.1
gamma = 0
min_child_weight = 1
subsample = 0.8
colsample_bytree = 0.8
colsample_bylevel = 1
reg_alpha = 0
reg_lambda = 1

clf = xgb.XGBClassifier(max_depth=max_depth, learning_rate=learning_rate, silent=False,
                        objective='binary:logistic', booster='gbtree', n_jobs=8, nthread=None,
                        gamma=gamma, min_child_weight=min_child_weight, max_delta_step=0,
                        subsample=subsample, colsample_bytree=colsample_bytree,
                        colsample_bylevel=colsample_bylevel, reg_alpha=reg_alpha,
                        reg_lambda=reg_lambda, scale_pos_weight=scale_pos_weight,
                        base_score=base_score)
clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)], eval_metric=F1_eval, verbose=True)
..................
[94] validation_0-error:0 validation_1-error:0.035088 validation_0-f1_err:0 validation_1-f1_err:0.018634
[95] validation_0-error:0 validation_1-error:0.035088 validation_0-f1_err:0 validation_1-f1_err:0.018634
[96] validation_0-error:0 validation_1-error:0.035088 validation_0-f1_err:0 validation_1-f1_err:0.018634
[97] validation_0-error:0 validation_1-error:0.035088 validation_0-f1_err:0 validation_1-f1_err:0.018634
[98] validation_0-error:0 validation_1-error:0.035088 validation_0-f1_err:0 validation_1-f1_err:0.018634
[99] validation_0-error:0 validation_1-error:0.035088 validation_0-f1_err:0 validation_1-f1_err:0.018634
clf = xgb.XGBClassifier(max_depth=max_depth, niterations=1000, learning_rate=learning_rate,
                        silent=False, objective='binary:logistic', booster='gbtree', n_jobs=8,
                        nthread=None, gamma=gamma, min_child_weight=min_child_weight,
                        max_delta_step=0, subsample=subsample, colsample_bytree=colsample_bytree,
                        colsample_bylevel=colsample_bylevel, reg_alpha=reg_alpha, reg_lambda=1,
                        scale_pos_weight=scale_pos_weight, base_score=base_score)
clf.fit(X_train, y_train, early_stopping_rounds=25,
        eval_set=[(X_train, y_train), (X_test, y_test)], eval_metric=F1_eval, verbose=True)
[0] validation_0-error:0.386813 validation_1-error:0.315789 validation_0-f1_err:0.032609 validation_1-f1_err:0.031847
Multiple eval metrics have been passed: 'validation_1-f1_err' will be used for early stopping.
Will train until validation_1-f1_err hasn't improved in 25 rounds.
[1] validation_0-error:0.131868 validation_1-error:0.078947 validation_0-f1_err:0.016216 validation_1-f1_err:0.031056
[2] validation_0-error:0.048352 validation_1-error:0.052632 validation_0-f1_err:0.012522 validation_1-f1_err:0.037037
[3] validation_0-error:0.032967 validation_1-error:0.04386 validation_0-f1_err:0.008977 validation_1-f1_err:0.031447
[4] validation_0-error:0.01978 validation_1-error:0.04386 validation_0-f1_err:0.010753 validation_1-f1_err:0.031447
[5] validation_0-error:0.015385 validation_1-error:0.035088 validation_0-f1_err:0.008977 validation_1-f1_err:0.025316
[6] validation_0-error:0.013187 validation_1-error:0.04386 validation_0-f1_err:0.010676 validation_1-f1_err:0.025316
[7] validation_0-error:0.017582 validation_1-error:0.04386 validation_0-f1_err:0.010638 validation_1-f1_err:0.018868
[8] validation_0-error:0.013187 validation_1-error:0.04386 validation_0-f1_err:0.008913 validation_1-f1_err:0.025
[9] validation_0-error:0.008791 validation_1-error:0.04386 validation_0-f1_err:0.007143 validation_1-f1_err:0.025
[10] validation_0-error:0.010989 validation_1-error:0.04386 validation_0-f1_err:0.007143 validation_1-f1_err:0.025
[11] validation_0-error:0.008791 validation_1-error:0.04386 validation_0-f1_err:0.007143 validation_1-f1_err:0.025
[12] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.007143 validation_1-f1_err:0.025
[13] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.007117 validation_1-f1_err:0.025
[14] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[15] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[16] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[17] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[18] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[19] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[20] validation_0-error:0.008791 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[21] validation_0-error:0.006593 validation_1-error:0.052632 validation_0-f1_err:0.005348 validation_1-f1_err:0.018868
[22] validation_0-error:0.006593 validation_1-error:0.052632 validation_0-f1_err:0.003571 validation_1-f1_err:0.018868
[23] validation_0-error:0.006593 validation_1-error:0.052632 validation_0-f1_err:0.003571 validation_1-f1_err:0.018868
[24] validation_0-error:0.006593 validation_1-error:0.052632 validation_0-f1_err:0.003571 validation_1-f1_err:0.018868
[25] validation_0-error:0.006593 validation_1-error:0.052632 validation_0-f1_err:0.003571 validation_1-f1_err:0.018868
[26] validation_0-error:0.004396 validation_1-error:0.052632 validation_0-f1_err:0.003571 validation_1-f1_err:0.018868
[27] validation_0-error:0.004396 validation_1-error:0.052632 validation_0-f1_err:0.003584 validation_1-f1_err:0.018868
[28] validation_0-error:0.004396 validation_1-error:0.052632 validation_0-f1_err:0.003584 validation_1-f1_err:0.018868
[29] validation_0-error:0.004396 validation_1-error:0.052632 validation_0-f1_err:0.003571 validation_1-f1_err:0.018868
[30] validation_0-error:0.004396 validation_1-error:0.052632 validation_0-f1_err:0.001789 validation_1-f1_err:0.018868
[31] validation_0-error:0.004396 validation_1-error:0.052632 validation_0-f1_err:0.001789 validation_1-f1_err:0.018868
[32] validation_0-error:0.004396 validation_1-error:0.052632 validation_0-f1_err:0.001789 validation_1-f1_err:0.018868
Stopping. Best iteration:
[7] validation_0-error:0.017582 validation_1-error:0.04386 validation_0-f1_err:0.010638 validation_1-f1_err:0.018868
XGBClassifier(base_score=0.6131868131868132, booster='gbtree',
colsample_bylevel=1, colsample_bytree=0.8, gamma=0,
learning_rate=0.1, max_delta_step=0, max_depth=6,
min_child_weight=1, missing=None, n_estimators=100, n_jobs=8,
niterations=1000, nthread=None, objective='binary:logistic',
random_state=0, reg_alpha=0, reg_lambda=1,
scale_pos_weight=0.6308243727598566, seed=None, silent=False,
subsample=0.8)
Best Answer
You get 4 evaluation metrics because xgboost reports its default metric ('error' for the binary:logistic objective) alongside your custom one, for each entry in your eval_set (2 metrics × 2 evaluation sets = 4 columns). Personally, I use the core xgboost API rather than the scikit-learn wrapper, so if you want more details, read the documentation.
For early stopping, you have to set n_estimators=1000 (or however many iterations you want) as a parameter of xgb.XGBClassifier, and set early_stopping_rounds=50 (or whatever value you want) in clf.fit. Here's the documentation.
Early stopping decides when to stop the boosting algorithm in order to avoid overfitting. It does so by evaluating the (X_test, y_test) tuple you defined in eval_set. If the evaluation error does not decrease for 50 iterations, early stopping halts the boosting.
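The stopping rule itself is simple to state; here is a plain-Python sketch of the idea (an illustration, not xgboost's actual implementation): keep the best validation error seen so far, and stop once `patience` rounds pass without improvement.

```python
def early_stop_round(errors, patience=25):
    """Index of the round at which boosting would stop, given
    per-round validation errors (lower is better)."""
    best_err = float('inf')
    best_round = 0
    for i, err in enumerate(errors):
        if err < best_err:
            best_err, best_round = err, i
        elif i - best_round >= patience:
            return i  # `patience` rounds without improvement: stop
    return len(errors) - 1  # patience never exhausted

# With patience=3: the minimum (0.2) is at round 2, and rounds
# 3, 4, 5 bring no improvement, so the run stops at round 5.
print(early_stop_round([0.5, 0.3, 0.2, 0.25, 0.25, 0.3], patience=3))  # → 5
```

This matches the trace above: the best test f1_err appears at round 7, and with early_stopping_rounds=25 training stops at round 32 (7 + 25).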
Regarding python-3.x - Unexpected behavior of xgboost in Python with a custom evaluation function, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51626360/