
TensorFlow: the difference between tf.nn.softmax_cross_entropy_with_logits and tf.nn.sparse_softmax_cross_entropy_with_logits


I have read the docs of both functions, and as far as I understand, for the function tf.nn.softmax_cross_entropy_with_logits(logits, labels, dim=-1, name=None) the result is the cross-entropy loss, and the dimensions of logits and labels are the same.

However, for the function tf.nn.sparse_softmax_cross_entropy_with_logits, aren't the dimensions of logits and labels different?

Could you give a more detailed example of tf.nn.sparse_softmax_cross_entropy_with_logits?

Best Answer

The difference is that tf.nn.softmax_cross_entropy_with_logits does not assume that the classes are mutually exclusive:

Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.



Compare this with the sparse_* variant:

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.



Therefore, with the sparse function the dimensions of logits and labels are not the same: labels contains a single number per example (the class index), while logits contains one value per class for each example, representing the unnormalized log-probabilities.
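A minimal sketch of the shape difference, assuming TensorFlow 2 eager execution (in TF 1.x you would evaluate the tensors inside a session instead); the logits values here are made up for illustration:

import tensorflow as tf

# 3 examples, 4 classes: logits have shape [batch_size, num_classes]
logits = tf.constant([[ 2.0, 1.0, 0.1, -1.0],
                      [ 0.5, 2.5, 0.3,  0.0],
                      [-1.0, 0.2, 3.0,  0.5]])

# Sparse variant: labels are integer class indices, shape [batch_size]
sparse_labels = tf.constant([0, 1, 2])
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)

# Dense variant: labels are full distributions (here one-hot),
# shape [batch_size, num_classes]
dense_labels = tf.one_hot(sparse_labels, depth=4)
dense_loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=dense_labels, logits=logits)

print(sparse_loss.numpy())  # per-example losses, shape (3,)
print(dense_loss.numpy())   # same values, since the labels are one-hot

When the dense labels are exact one-hot encodings of the sparse indices, the two functions return the same per-example losses; the dense variant additionally accepts soft (non-one-hot) label distributions, which the sparse variant cannot express.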

Regarding the difference between tf.nn.softmax_cross_entropy_with_logits and tf.nn.sparse_softmax_cross_entropy_with_logits in TensorFlow, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41283115/
