
lightgbm - LightGBM: Continuing to train a model

Reposted · Author: 行者123 · Updated: 2023-12-03 09:59:36

I am training a model with cross-validation, as follows:

classifier = lgb.Booster(
params=params,
train_set=lgb_train_set,
)

result = lgb.cv(
init_model=classifier,
params=params,
train_set=lgb_train_set,
num_boost_round=1000,
early_stopping_rounds=20,
verbose_eval=50,
shuffle=True
)
I would like to continue training the model by running the second command multiple times (perhaps with a new training set, or with different parameters), so that it keeps improving the model.

However, when I try this, it is clear that the model is recreated from scratch each time.

Is there another way to do what I intend?

Best answer

This can be solved with the init_model option of lightgbm.train, which accepts one of two objects:

  • the filename of a saved LightGBM model, or
  • a lightgbm Booster object

  Code illustration:

    import numpy as np
    import lightgbm as lgb

    data = np.random.rand(1000, 10)  # 1000 entities, each with 10 features
    label = np.random.randint(2, size=1000)  # binary target
    train_data = lgb.Dataset(data, label=label, free_raw_data=False)
    params = {}

    # Initialize with 10 iterations
    gbm_init = lgb.train(params, train_data, num_boost_round=10)
    print("Initial iter# %d" % gbm_init.current_iteration())

    # Example of option #1 (pass a file):
    gbm_init.save_model('model.txt')
    gbm = lgb.train(params, train_data, num_boost_round=10,
                    init_model='model.txt')
    print("Option 1 current iter# %d" % gbm.current_iteration())

    # Example of option #2 (pass a lightgbm Booster object):
    gbm_2 = lgb.train(params, train_data, num_boost_round=10,
                      init_model=gbm_init)
    print("Option 2 current iter# %d" % gbm_2.current_iteration())

    https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.train.html
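    The asker also wanted to continue training on a new training set. Building on option #2 above, a minimal sketch of that case (the datasets and params here are synthetic and purely illustrative): pass the existing Booster as init_model together with a different lgb.Dataset, and the new boosting rounds are appended to the existing trees.

    ```python
    import numpy as np
    import lightgbm as lgb

    params = {"objective": "binary", "verbosity": -1}

    # First batch of (synthetic) training data
    X1 = np.random.rand(500, 10)
    y1 = np.random.randint(2, size=500)
    train_1 = lgb.Dataset(X1, label=y1, free_raw_data=False)

    # Train 10 rounds on the first dataset
    booster = lgb.train(params, train_1, num_boost_round=10)

    # A second batch of data arrives: continue from the existing booster
    X2 = np.random.rand(500, 10)
    y2 = np.random.randint(2, size=500)
    train_2 = lgb.Dataset(X2, label=y2, free_raw_data=False)

    booster = lgb.train(params, train_2, num_boost_round=10,
                        init_model=booster)

    # The iteration count is cumulative: 10 initial + 10 continued rounds
    print(booster.current_iteration())
    ```

    Note that the continued rounds extend the model rather than retrain it, which is exactly the behavior the cross-validation loop in the question was missing.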

    Regarding "lightgbm - LightGBM: Continuing to train a model", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45654998/
