
python - Where is `_softmax_cross_entropy_with_logits` defined in TensorFlow?

Reposted · Author: 太空狗 · Updated: 2023-10-30 00:42:44

I'm trying to see how softmax_cross_entropy_with_logits_v2() is implemented. It calls _softmax_cross_entropy_with_logits(), but I can't find a definition for the latter anywhere. Does anyone know how to locate its definition?

$ ack '\b_softmax_cross_entropy_with_logits\b'
tensorflow/compiler/tests/binary_ops_test.py
176: gen_nn_ops._softmax_cross_entropy_with_logits,

tensorflow/python/kernel_tests/xent_op_test.py
52: loss, backprop = gen_nn_ops._softmax_cross_entropy_with_logits(
75: loss, backprop = gen_nn_ops._softmax_cross_entropy_with_logits(
93: gen_nn_ops._softmax_cross_entropy_with_logits,
135: gen_nn_ops._softmax_cross_entropy_with_logits(
141: gen_nn_ops._softmax_cross_entropy_with_logits([0., 1., 2., 3.],

tensorflow/python/ops/nn_ops.py
1803: cost, unused_backprop = gen_nn_ops._softmax_cross_entropy_with_logits(

Best answer

kmario23's answer is correct: basically, whenever you see a reference to a gen_* package, it refers to automatically generated Python code.
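Because these gen_* modules are produced at build time, grepping the TensorFlow source tree (as with the ack command above) won't find them; they only exist in an installed distribution. One way to locate the generated file is to ask Python where the module lives. A minimal sketch using only the standard library (demonstrated here with a stdlib module, since `tensorflow.python.ops.gen_nn_ops` only resolves in an environment where TensorFlow is installed):

```python
import importlib.util

# find_spec locates a module's source file without executing it.
# With TensorFlow installed, you would pass
# "tensorflow.python.ops.gen_nn_ops" instead of "json.decoder".
spec = importlib.util.find_spec("json.decoder")
print(spec.origin)  # absolute path to the module's .py file
```

Opening the printed file then gives you the generated wrapper code directly.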

In this case, it is gen_nn_ops.py:

def _softmax_cross_entropy_with_logits(features, labels, name=None):
  r"""Computes softmax cross entropy cost and gradients to backpropagate.

  Inputs are the logits, not probabilities.

  Args:
    features: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
      batch_size x num_classes matrix
    labels: A `Tensor`. Must have the same type as `features`.
      batch_size x num_classes matrix
      The caller must ensure that each batch of labels represents a valid
      probability distribution.
    name: A name for the operation (optional).

  Returns:
    A tuple of `Tensor` objects (loss, backprop).

    loss: A `Tensor`. Has the same type as `features`. Per example loss (batch_size vector).
    backprop: A `Tensor`. Has the same type as `features`. backpropagated gradients (batch_size x num_classes matrix).
  """
  _ctx = _context.context()
  if _ctx.in_graph_mode():
    _, _, _op = _op_def_lib._apply_op_helper(
        "SoftmaxCrossEntropyWithLogits", features=features, labels=labels,
        name=name)
    _result = _op.outputs[:]
    _inputs_flat = _op.inputs
    _attrs = ("T", _op.get_attr("T"))
  else:
    _attr_T, _inputs_T = _execute.args_to_matching_eager([features, labels], _ctx)
    (features, labels) = _inputs_T
    _attr_T = _attr_T.as_datatype_enum
    _inputs_flat = [features, labels]
    _attrs = ("T", _attr_T)
    _result = _execute.execute(b"SoftmaxCrossEntropyWithLogits", 2,
                               inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
                               name=name)
  _execute.record_gradient(
      "SoftmaxCrossEntropyWithLogits", _inputs_flat, _attrs, _result, name)
  _result = _SoftmaxCrossEntropyWithLogitsOutput._make(_result)
  return _result

But since this function is only a wrapper around a native C++ implementation, you may want to look at the actual C++ code. It lives in tensorflow/core/kernels/xent_op.cc, for both CPU and GPU:

template <typename Device, typename T>
class SoftmaxXentWithLogitsOp : public OpKernel {
 public:
  explicit SoftmaxXentWithLogitsOp(OpKernelConstruction* context)
      : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    const Tensor& logits_in = context->input(0);
    const Tensor& labels_in = context->input(1);
    OP_REQUIRES(context, logits_in.IsSameSize(labels_in),
                errors::InvalidArgument(
                    "logits and labels must be same size: logits_size=",
                    logits_in.shape().DebugString(), " labels_size=",
                    labels_in.shape().DebugString()));
    OP_REQUIRES(context, TensorShapeUtils::IsMatrix(logits_in.shape()),
                errors::InvalidArgument("logits must be 2-dimensional"));
    // As we already tested that both inputs have the same shape no need to
    // check that "labels" is a matrix too.

    // loss is 1-D (one per example), and size is batch_size.

    Tensor scratch;
    OP_REQUIRES_OK(
        context, context->allocate_temp(DataTypeToEnum<T>::value,
                                        TensorShape({logits_in.dim_size(0), 1}),
                                        &scratch));

    Tensor* loss_out = nullptr;
    OP_REQUIRES_OK(context,
                   context->allocate_output(
                       0, TensorShape({logits_in.dim_size(0)}), &loss_out));
    Tensor* back_out = nullptr;
    // Try to reuse the logits_in buffer for the backprop output.
    OP_REQUIRES_OK(context, context->forward_input_or_allocate_output(
                                {0}, 1, logits_in.shape(), &back_out));
    functor::XentFunctor<Device, T> functor;
    functor(context->eigen_device<Device>(), logits_in.matrix<T>(),
            labels_in.matrix<T>(), scratch.matrix<T>(), loss_out->vec<T>(),
            back_out->matrix<T>());
  }
};

If you want to dig deeper, the main call is on the last line: functor(...), where functor is an XentFunctor<Device, T>. The actual logic is dispatched to the third-party Eigen library. See this very similar question, which shows just how deep this all goes.
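For reference, the math that the kernel ultimately computes can be reproduced in a few lines of NumPy: the per-example loss is the cross entropy between the labels and the softmax of the logits, and the backprop output is softmax(logits) - labels. This is only a sketch of the formula, not the actual kernel code (which runs through Eigen and uses a scratch buffer):

```python
import numpy as np

def softmax_xent_with_logits(logits, labels):
    """Return (loss, backprop), mirroring the op's two outputs.

    logits, labels: batch_size x num_classes arrays; each row of
    labels must be a valid probability distribution.
    """
    # Subtract the row max before exponentiating, for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_z = np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    log_softmax = shifted - log_z
    loss = -(labels * log_softmax).sum(axis=1)   # batch_size vector
    backprop = np.exp(log_softmax) - labels      # batch_size x num_classes
    return loss, backprop

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
loss, grad = softmax_xent_with_logits(logits, labels)
```

Note that each row of the gradient sums to zero, since both the softmax output and the labels sum to one per example.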

Regarding "python - Where is `_softmax_cross_entropy_with_logits` defined in TensorFlow?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47987202/
