
python - Is there a way to stop training in the middle of an epoch in TensorFlow?


I'm just wondering whether there is a way to save the highest accuracy and lowest loss reached in the middle of an epoch and use them as the score going into the next epoch. Normally my data tops out at 43.56% accuracy, but I've seen it climb above 46% partway through an epoch. Is there a way to stop the epoch at that point and carry that score forward?

Here is the code I'm currently running:

import pandas as pd
import numpy as np
import pickle
import random
from skopt import BayesSearchCV
from sklearn.neural_network import MLPRegressor
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, Bidirectional, SimpleRNN, GRU
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor
from tensorflow.keras import layers
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import warnings
warnings.filterwarnings("ignore")
warnings.filterwarnings("ignore", category=DeprecationWarning)

# 80/20 train/test split on the two columns of interest
train_df = avenues[["LFA's", "Spend"]].sample(frac=0.8, random_state=0)
test_df = avenues[["LFA's", "Spend"]].drop(train_df.index)
train_df = clean_dataset(train_df)
test_df = clean_dataset(test_df)
train_df = train_df.reset_index(drop=True)
test_df = test_df.reset_index(drop=True)

train_stats = train_df.describe()
train_stats = train_stats.pop("LFA's")
train_stats = train_stats.transpose()

# "LFA's" is the target; the remaining column is the input feature
train_labels = train_df.pop("LFA's").values
test_labels = test_df.pop("LFA's").values

# normalise and reshape to (samples, timesteps, features) for the recurrent layers
normed_train_data = np.array(norm(train_df)).reshape((train_df.shape[0], 1, 1))
normed_test_data = np.array(norm(test_df)).reshape((test_df.shape[0], 1, 1))

model = KerasRegressor(build_fn=build_model, epochs=25,
                       batch_size=1, verbose=0)

# Bayesian hyperparameter search with 3-fold CV; callbacks are passed through fit_params
gs = BayesSearchCV(model, param_grid, cv=3, n_iter=25, n_jobs=1,
                   optimizer_kwargs={'base_estimator': 'RF'},
                   fit_params={"callbacks": [es_acc, es_loss, tfdocs.modeling.EpochDots()]})

try:
    gs.fit(normed_train_data, train_labels)
except Exception as e:
    print(e)
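(The snippet above references es_acc and es_loss without defining them; judging by the EarlyStopping import, they are presumably early-stopping callbacks. Hypothetical definitions, which are an assumption and not part of the original question, might look like this:)

# hypothetical definitions for the undefined callbacks above (an assumption)
es_acc = EarlyStopping(monitor="accuracy", mode="max", patience=5,
                       restore_best_weights=True)
es_loss = EarlyStopping(monitor="loss", mode="min", patience=5,
                        restore_best_weights=True)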

Best answer

Try using train_on_batch instead of fit. That way you control the epoch yourself and can stop after whichever batch you want (although from a machine-learning point of view I doubt this is a good idea; you will end up with a less general model).
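To make the idea concrete, here is a minimal sketch of such a manual loop. The model, the synthetic data, and the 0.90 accuracy target are placeholders and not from the original question; the loop checkpoints the best weights seen after any batch and breaks out of the current epoch once the target batch accuracy is reached.

import numpy as np
import tensorflow as tf

# synthetic stand-in data and model, purely for illustration
x_train = np.random.rand(500, 1, 1).astype("float32")
y_train = (x_train[:, 0, 0] > 0.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, input_shape=(1, 1)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

batch_size = 32
n_epochs = 5
target_acc = 0.90        # stop the current epoch once a batch reaches this accuracy
best_acc = 0.0
best_weights = None

for epoch in range(n_epochs):
    idx = np.random.permutation(len(x_train))            # reshuffle each epoch
    for start in range(0, len(x_train), batch_size):
        batch = idx[start:start + batch_size]
        loss, acc = model.train_on_batch(x_train[batch], y_train[batch])
        if acc > best_acc:                                # keep the best weights, even mid-epoch
            best_acc = acc
            best_weights = model.get_weights()
        if acc >= target_acc:                             # cut the epoch short
            break

if best_weights is not None:
    model.set_weights(best_weights)                       # restore the best mid-epoch state
print("best batch accuracy:", best_acc)

Keep in mind that accuracy measured on a single batch is noisy, which is part of why the answer warns that the resulting model may generalize less well.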

Regarding "python - Is there a way to stop training in the middle of an epoch in TensorFlow?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59725944/
