
python - Organizing runs in Tensorboard


I am developing a probabilistic forecasting model using an RNN and want to log multiple runs with different parameters in Tensorboard to evaluate and compare them. I am new to Tensorboard and can't really come up with a good way to organize my runs. I want to be able to sort them by parameter values in Tensorboard, so currently I am using this rather clumsy approach:

tb = SummaryWriter(log_dir=f'runs/leakyrelu/cuda{cuda_id}/m_epochs{max_epochs}/lr{learning_rate}/'
                           f'bs{batch_size}/h_h{history_horizon}/f_h{forecast_horizon}/'
                           f'core_{core_net}/drop_fc{dropout_fc}/'
                           f'drop_core{dropout_core}')

Is there any clever way or convention to do this without creating mile-long file names or directories that are kilometers deep?

Best Answer

It looks like you are doing hyperparameter tuning with several parameters.

The best way to log such runs in Tensorboard is with its HParams plugin.

Step 1: Imports

import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

After that, you create the HParam objects for the parameters whose values you want to vary, and create a summary writer.

Step 2: Create the HParam objects and the summary writer

HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))

METRIC_ACCURACY = 'accuracy'

with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
    )

The objects you create will look like this:

HP_NUM_UNITS
HParam(name='num_units', domain=Discrete([16, 32]), display_name=None, description=None)
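
As an aside (not from the original answer), the HParams API provides several domain types, and the repr shows whichever one you chose; 'depth' below is just a hypothetical parameter for illustration:

# Domain types available in tensorboard.plugins.hparams.api:
hp.HParam('num_units', hp.Discrete([16, 32]))     # an explicit set of values
hp.HParam('dropout', hp.RealInterval(0.1, 0.2))   # any real number in [0.1, 0.2]
hp.HParam('depth', hp.IntInterval(2, 8))          # any integer in [2, 8]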

Step 3: Create a function for training and testing

def train_test_model(hparams):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
    )

    model.fit(x_train, y_train, epochs=1)  # Run with 1 epoch to speed things up for demo purposes
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy

In this function, hparams is a dictionary of the form:

{
    HParam Object 1: VALUE-FOR-THE-OBJECT,
    HParam Object 2: VALUE-FOR-THE-OBJECT,
    HParam Object 3: VALUE-FOR-THE-OBJECT,
}

The actual dictionary looks like this:

{HParam(name='num_units', domain=Discrete([16, 32]), display_name=None, description=None): 32,
HParam(name='dropout', domain=RealInterval(0.1, 0.2), display_name=None, description=None): 0.2,
HParam(name='optimizer', domain=Discrete(['adam', 'sgd']), display_name=None, description=None): 'sgd'}
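
Note that you index this dictionary with the HParam objects themselves, not with string keys, e.g.:

units = hparams[HP_NUM_UNITS]      # e.g. 32
dropout = hparams[HP_DROPOUT]      # e.g. 0.2
optimizer = hparams[HP_OPTIMIZER]  # e.g. 'sgd'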

Step 4: A function that logs to Tensorboard

def run(run_dir, hparams):
    with tf.summary.create_file_writer(run_dir).as_default():
        hp.hparams(hparams)  # record the values used in this trial
        accuracy = train_test_model(hparams)
        tf.summary.scalar(METRIC_ACCURACY, accuracy, step=1)

Here, run_dir is the path of each individual run.
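
For instance, a single trial could be logged like this (a minimal usage sketch; the run name is arbitrary):

run('logs/hparam_tuning/run-0', {
    HP_NUM_UNITS: 16,
    HP_DROPOUT: 0.1,
    HP_OPTIMIZER: 'adam',
})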

Step 5: Try different parameters:

session_num = 0

for num_units in HP_NUM_UNITS.domain.values:
    for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
        for optimizer in HP_OPTIMIZER.domain.values:
            hparams = {
                HP_NUM_UNITS: num_units,
                HP_DROPOUT: dropout_rate,
                HP_OPTIMIZER: optimizer,
            }
            run_name = "run-%d" % session_num
            print('--- Starting trial: %s' % run_name)
            print({h.name: hparams[h] for h in hparams})
            run('logs/hparam_tuning/' + run_name, hparams)
            session_num += 1

Note: num_units will take only the two values 16 and 32, not every value between 16 and 32, because its domain is hp.Discrete; likewise, the loop above only tries the two endpoints of the dropout interval.
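
If you want to try more than the two endpoints of the RealInterval, you have to sample values from it yourself, for example (an illustrative sketch, not part of the original answer):

import random

# hp.RealInterval only stores the bounds; draw your own dropout samples.
sampled_rates = [
    random.uniform(HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value)
    for _ in range(5)
]
print(sampled_rates)  # five dropout values drawn uniformly from [0.1, 0.2]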

Your Tensorboard will then show the results in a table view and a scatter plot view (screenshots omitted).

You can also combine this with the Tensorboard callback in Keras by pointing the callback's log path at run_dir.

For example:

def train_test_model(hparams, run_dir):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )

    callbacks = [
        tf.keras.callbacks.TensorBoard(run_dir),
    ]

    model.fit(x_train, y_train, epochs=10, callbacks=callbacks)  # Reduce epochs to speed things up for demo purposes

    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy

The steps above are fine if you want to log custom metrics, or several metrics beyond the accuracy or loss defined in the compile method.

But if you don't need custom metrics and don't want to deal with summary writers, you can use Keras callbacks to simplify the process. Here is the complete code with callbacks and no summary writer:

# Creating Hparams
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))

# Creating train test function
def train_test_model(hparams, run_dir):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    callbacks = [
        tf.keras.callbacks.TensorBoard(run_dir),  # log metrics
        hp.KerasCallback(run_dir, hparams),       # log hparams
    ]
    model.fit(x_train, y_train, epochs=10, callbacks=callbacks)  # Reduce epochs to speed things up for demo purposes
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy

# Running different configurations
session_num = 0

for num_units in HP_NUM_UNITS.domain.values:
    for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
        for optimizer in HP_OPTIMIZER.domain.values:
            hparams = {
                HP_NUM_UNITS: num_units,
                HP_DROPOUT: dropout_rate,
                HP_OPTIMIZER: optimizer,
            }
            run_name = "run-%d" % session_num
            print('--- Starting trial: %s' % run_name)
            print({h.name: hparams[h] for h in hparams})
            train_test_model(hparams, 'logs/hparam_tuning/' + run_name)
            session_num += 1
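
Once the runs have finished, start Tensorboard with `tensorboard --logdir logs/hparam_tuning` and open the HPARAMS tab to sort and filter the trials by parameter values.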

Useful links:

  1. Hyperparameter Tuning with the HParams Dashboard
  2. HParams demo using all possible HParam objects - official GitHub repo

Regarding python - Organizing runs in Tensorboard: this is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/63671407/
