
python - DNNRegressor training input with multiple labels


I am trying to implement a TensorFlow DNNRegressor that uses a tensor with multiple labels, but it keeps failing with an error I don't understand. I did 95% of my testing on TensorFlow 1.4.1 and then just switched to 1.5.0/CUDA 9, but it still fails (you know, I was just hoping :))

For reference, I used the Boston example and the pandas input_fn source code: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/input_fn/boston.py and https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/python/estimator/inputs/pandas_io.py

In the gist below you can find the full Python code, the generated output, the training data and the (currently unused) test data. Both the training data and the test data are very small, just enough to get the code working: https://gist.github.com/anonymous/c3e9fbe5f5faf373fa230909347318cd

The error message is the following (the stack trace is in the gist; I am not posting it here to avoid cluttering the post):

tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [labels shape must be [batch_size, 20]] [Condition x == y did not hold element-wise:] [x (dnn/head/labels/assert_equal/x:0) = ] [20] [y (dnn/head/labels/strided_slice:0) = ] [3] [[Node: dnn/head/labels/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](dnn/head/labels/assert_equal/All/_151, dnn/head/labels/assert_equal/Assert/Assert/data_0, dnn/head/labels/assert_equal/Assert/Assert/data_1, dnn/head/labels/assert_equal/Assert/Assert/data_2, dnn/head/logits/assert_equal/x/_153, dnn/head/labels/assert_equal/Assert/Assert/data_4, dnn/head/labels/strided_slice/_155)]]

The input_fn is the following:

import tensorflow as tf
from tensorflow.python.estimator.inputs.queues import feeding_functions


def get_input_fn(dataset,
                 model_labels=None,
                 batch_size=128,
                 num_epochs=1,
                 shuffle=None,
                 queue_capacity=1000,
                 num_threads=1):

    dataset = dataset.copy()

    if queue_capacity is None:
        if shuffle:
            queue_capacity = 4 * len(dataset)
        else:
            queue_capacity = len(dataset)

    min_after_dequeue = max(queue_capacity / 4, 1)

    def input_fn():
        queue = feeding_functions._enqueue_data(
            dataset,
            queue_capacity,
            shuffle=shuffle,
            min_after_dequeue=min_after_dequeue,
            num_threads=num_threads,
            enqueue_size=batch_size,
            num_epochs=num_epochs)

        if num_epochs is None:
            features = queue.dequeue_many(batch_size)
        else:
            features = queue.dequeue_up_to(batch_size)

        assert len(features) == len(dataset.columns) + 1, (
            'Features should have one extra element for the index.')

        features = features[1:]
        features = dict(zip(list(dataset.columns), features))

        if model_labels is not None:
            # labels = tf.stack([features.pop(model_label) for model_label in model_labels], 0)
            labels = [features.pop(model_label) for model_label in model_labels]

            return features, labels

        return features

    return input_fn
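
Looking at the assertion in the error, the estimator's head expects the labels coming out of input_fn to be a single tensor of shape [batch_size, label_dimension] (20 in my run), whereas the list comprehension above returns a Python list of 1-D tensors. A minimal sketch of returning the labels in that shape, assuming the same features dict and model_labels list as above (stacking along axis 1 rather than axis 0; this is an illustration, not code from the gist):

        if model_labels is not None:
            # Each popped column is a 1-D tensor of shape [batch_size];
            # stacking along axis 1 yields shape [batch_size, len(model_labels)],
            # which is what DNNRegressor checks against label_dimension.
            labels = tf.stack(
                [features.pop(model_label) for model_label in model_labels],
                axis=1)
            return features, labels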

I was able to train and predict with the following input_fn, but it does not look suitable for the amount of data I want to use for training later. Also, it gets stuck when I use it with the evaluate method.

def get_input_fn(dataset,
                 model_labels=None):

    def input_fn():
        features = {k: tf.constant(len(dataset), shape=[dataset[k].size, 1]) for k in model_features}

        if model_labels is not None:
            labels_data = []
            for i in range(0, len(dataset)):
                temp = []
                for label in model_labels:
                    temp.append(dataset[label].values[i])
                labels_data.append(temp)
            labels = tf.constant(labels_data, shape=[len(dataset), len(model_labels)])

            return features, labels
        else:
            return features

    return input_fn
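
As an aside, evaluate probably gets stuck with this input_fn because the constant tensors never signal end of input, so evaluation keeps looping unless a step limit is given. A sketch of capping it explicitly (test_set here is a placeholder name for the evaluation DataFrame, not a name from the gist):

regressor.evaluate(
    input_fn=get_input_fn(test_set, model_labels=model_labels),
    steps=1)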

Thanks!

Note: if you check the full code in the gist, you will notice that the number of features and labels depends on the number of categories and is built dynamically from the seed data. Maybe, instead of building that huge matrix, I could use an RNN and map each epoch to a category, but for now I am focused on getting this test working.

Best answer

In the end I changed my generation approach a bit: the test code is now split into prepare.py and train.py. prepare.py writes the data to some CSV files (input data and categories), and in train.py I replaced the input_fn with one that loads those CSVs, builds a dataset and parses the dataset rows with tf.decode_csv (plus a few other things).

csv_field_defaults = [[0]] * (1 + len(model_features) + len(model_labels))

def _parse_line(line):
    fields = tf.decode_csv(line, csv_field_defaults)

    # Remove the user id
    fields.pop(0)

    features = dict(zip(model_features + model_labels, fields))
    labels = tf.stack([features.pop(model_label) for model_label in model_labels])

    return features, labels

def csv_input_fn(csv_path, batch_size):
    dataset = tf.data.TextLineDataset(csv_path).skip(1)
    dataset = dataset.map(_parse_line)
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)
    return dataset.make_one_shot_iterator().get_next()

# Initialize tensor flow
tf.logging.set_verbosity(tf.logging.INFO)

# Initialize the neural network
feature_cols = [tf.feature_column.numeric_column(k) for k in model_features]
regressor = tf.estimator.DNNRegressor(feature_columns=feature_cols,
                                      label_dimension=len(model_labels),
                                      hidden_units=[4096, 2048, 1024, 512],
                                      model_dir="tf_model")

I am currently able to process 10,000 records, but I will need to parse a lot more data, and hopefully this implementation will perform better.

csv_input_fn comes from the TensorFlow examples, while I modified _parse_line to handle the features and labels as needed.
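
For completeness, a rough sketch of wiring csv_input_fn into the estimator; train.csv and eval.csv are placeholder names for the files written by prepare.py, not the actual paths from my code:

# Placeholder paths for the CSVs produced by prepare.py.
train_csv = "train.csv"
eval_csv = "eval.csv"
batch_size = 128

# The lambdas defer building the tf.data pipeline until the estimator
# constructs its graph; since csv_input_fn repeats indefinitely, both
# train and evaluate need an explicit step count.
regressor.train(input_fn=lambda: csv_input_fn(train_csv, batch_size), steps=10000)
metrics = regressor.evaluate(input_fn=lambda: csv_input_fn(eval_csv, batch_size), steps=100)
print(metrics)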

For python - DNNRegressor training input with multiple labels, the original question can be found on Stack Overflow: https://stackoverflow.com/questions/48629347/
