
javascript - TensorflowJS loss is NaN when fitting


I am trying to reproduce, with TensorflowJS, the same example that exists for the Python version of Tensorflow. Unfortunately, when I run the script, the loss values logged during training are NaN and I don't know why.

What I want to achieve is a simple text classification that returns 0 or 1 based on the trained model. This is the Python tutorial I followed: https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub

This is the code I have translated so far:

import * as tf  from '@tensorflow/tfjs'

// Load the binding:
//require('@tensorflow/tfjs-node'); // Use '@tensorflow/tfjs-node-gpu' if running with GPU.

// utils
const tuple = <A, B>(a: A, b: B): [A, B] => [a, b]

// prepare the data, first is result, second is the raw text
const data: [number, string][] = [
  [0, 'aaaaaaaaa'],
  [0, 'aaaa'],
  [1, 'bbbbbbbbb'],
  [1, 'bbbbbb']
]

// normalize the data
const arrayFill = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
const normalizeData = data.map(item => {
  return tuple(item[0], item[1].split('').map(c => c.charCodeAt(0)).concat(arrayFill).slice(0, 10))
})

const xs = tf.tensor(normalizeData.map(i => i[1]))
const ys = tf.tensor(normalizeData.map(i => i[0]))

console.log(xs)

// Configs
const LEARNING_RATE = 1e-4

// Train a simple model:
//const optimizer = tf.train.adam(LEARNING_RATE)
const model = tf.sequential();
model.add(tf.layers.embedding({inputDim: 1000, outputDim: 16}))
model.add(tf.layers.globalAveragePooling1d({}))
model.add(tf.layers.dense({units: 16, activation: 'relu'}))
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}))
model.summary()
model.compile({optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy']});

model.fit(xs, ys, {
  epochs: 10,
  validationData: [xs, ys],
  callbacks: {
    onEpochEnd: async (epoch, log) => {
      console.log(`Epoch ${epoch}: loss = ${log.loss}`);
    }
  }
});

(here pure JS code) This is the output I get:

_________________________________________________________________
Layer (type) Output shape Param #
=================================================================
embedding_Embedding1 (Embedd [null,null,16] 16000
_________________________________________________________________
global_average_pooling1d_Glo [null,16] 0
_________________________________________________________________
dense_Dense1 (Dense) [null,16] 272
_________________________________________________________________
dense_Dense2 (Dense) [null,1] 17
=================================================================
Total params: 16289
Trainable params: 16289
Non-trainable params: 0
_________________________________________________________________
Epoch 0: loss = NaN
Epoch 1: loss = NaN
Epoch 2: loss = NaN
Epoch 3: loss = NaN
Epoch 4: loss = NaN
Epoch 5: loss = NaN
Epoch 6: loss = NaN
Epoch 7: loss = NaN
Epoch 8: loss = NaN
Epoch 9: loss = NaN

Best Answer

The loss or the predictions can become NaN. This is a consequence of the vanishing gradient problem: during training, the gradients (partial derivatives) can become very small, tending towards 0. The binaryCrossentropy loss function uses a logarithm in its computation, and depending on the mathematical operations that involve this logarithm, the result can be NaN.

binary cross entropy: L = -(y · log(ŷ) + (1 - y) · log(1 - ŷ))
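A quick way to see how this happens in plain JavaScript (a minimal sketch; the bceSample helper below is only for illustration and is not part of TensorFlow.js):

// Binary cross entropy for a single sample, computed by hand:
// L = -(y * log(ŷ) + (1 - y) * log(1 - ŷ))
const bceSample = (y: number, yHat: number): number =>
  -(y * Math.log(yHat) + (1 - y) * Math.log(1 - yHat))

console.log(bceSample(1, 0.9)) // ≈ 0.105, a normal finite loss
console.log(bceSample(1, 0))   // Infinity, because log(0) = -Infinity
console.log(bceSample(1, NaN)) // NaN, a NaN prediction poisons the loss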

If the weights of the model become NaN, the prediction ŷ can also become NaN, and therefore so does the loss. Adjusting the number of epochs can avoid the problem. Another way to solve it can be to change the loss function or the optimizer.
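For instance, instead of the 'adam' string you can pass an explicit optimizer and tune its learning rate down (a sketch that reuses the LEARNING_RATE constant already declared in the question; whether this is enough to avoid NaN depends on the data):

// Compile with an explicit Adam optimizer so the learning rate can be
// lowered if the loss diverges to NaN.
const optimizer = tf.train.adam(LEARNING_RATE) // LEARNING_RATE = 1e-4 above

model.compile({
  optimizer,
  loss: 'binaryCrossentropy',
  metrics: ['accuracy']
})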

That being said, the loss of your code is not NaN. Here is the code executed on stackblitz. Also, note the following answer, which adjusts the model so that it does not predict NaN.

About javascript - TensorflowJS loss is NaN when fitting, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52040670/
