machine-learning - Caffe loss layers, mean and accuracy

I have a fully convolutional network for depth estimation that looks like this (only the top and bottom layers are shown for brevity):

# input: image and depth_image
layer {
  name: "train-data"
  type: "Data"
  top: "data"
  top: "silence_1"
  include {
    phase: TRAIN
  }
  transform_param {
    #mean_file: "mean_train.binaryproto"
    scale: 0.00390625
  }
  data_param {
    source: "/train_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "train-depth"
  type: "Data"
  top: "depth"
  top: "silence_2"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "train_depth_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "val-data"
  type: "Data"
  top: "data"
  top: "silence_1"
  include {
    phase: TEST
  }
  transform_param {
    #mean_file: "mean_val.binaryproto"
    scale: 0.00390625
  }
  data_param {
    source: "val_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
layer {
  name: "val-depth"
  type: "Data"
  top: "depth"
  top: "silence_2"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "val_depth_lmdb"
    batch_size: 4
    backend: LMDB
  }
}
################## Silence unused labels ##################
layer {
  name: "silence_layer_1"
  type: "Silence"
  bottom: "silence_1"
}

layer {
  name: "silence_layer_2"
  type: "Silence"
  bottom: "silence_2"
}
....
layer {
  name: "conv"
  type: "Convolution"
  bottom: "concat"
  top: "conv"
  convolution_param {
    num_output: 1
    kernel_size: 5
    pad: 2
    stride: 1
    engine: CUDNN
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "relu"
  type: "ReLU"
  bottom: "conv"
  top: "result"
  relu_param {
    negative_slope: 0.01
    engine: CUDNN
  }
}

# Error
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "result"
  bottom: "depth"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "result"
  bottom: "depth"
  top: "loss"
}

Now I have three questions:

When I train the network, the accuracy layer always reports 1. I don't understand why.

Is the "EuclideanLoss" layer the right layer for this purpose?

Is subtracting a mean needed in this case, or can I ignore the mean?

# Define image transformer (net is assumed to be an already loaded caffe.Net
# in deploy mode; mean_array, IMAGE_WIDTH and IMAGE_HEIGHT are defined elsewhere)
import caffe

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_mean('data', mean_array)      # subtract the training-set mean
transformer.set_transpose('data', (2, 0, 1))  # H x W x C -> C x H x W

image = "test.png"
img = caffe.io.load_image(image, False)       # load as single-channel (grayscale) image
img = caffe.io.resize_image(img, (IMAGE_WIDTH, IMAGE_HEIGHT))

net.blobs['data'].data[...] = transformer.preprocess('data', img)
pred = net.forward()
output_blob = pred['result']
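
Note that mean_array is not defined in the snippet above. A minimal sketch of how it could be built from the mean_train.binaryproto file referenced in the prototxt (assuming that file was produced by Caffe's compute_image_mean tool):

# Sketch (assumption): load the binaryproto mean and reduce it to one value
# per channel, a form that caffe.io.Transformer.set_mean accepts.
import caffe
import numpy as np
from caffe.proto import caffe_pb2

blob = caffe_pb2.BlobProto()
with open('mean_train.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())

# blobproto_to_array returns an array shaped (1, C, H, W); average over the
# spatial dimensions to get a per-channel mean.
mean_array = caffe.io.blobproto_to_array(blob)[0].mean(axis=(1, 2))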

Best Answer

  1. Accuracy is always 1 - see this answer. "Accuracy" is a classification metric: it takes the arg-max over the prediction channels and compares it to an integer class label, so its output is meaningless for a single-channel regression output like yours (a pycaffe evaluation sketch follows below).
  2. The "EuclideanLoss" layer is a good fit for regression.
  3. Subtracting the mean should help the net converge better. Keep using it. You can read more about the importance of data normalization and what can be done in that respect here.
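
Since "Accuracy" does not apply here, a regression error can instead be computed outside the network with pycaffe. A minimal sketch, assuming the TEST phase of the net definition above is used; the file names 'train_val.prototxt' and 'snapshot.caffemodel' and the batch count are placeholders:

# Sketch: report a mean per-image Euclidean (L2) error instead of "Accuracy".
# Blob names "result" and "depth" come from the prototxt above.
import caffe
import numpy as np

net = caffe.Net('train_val.prototxt', 'snapshot.caffemodel', caffe.TEST)

errors = []
for _ in range(100):                       # iterate over 100 validation batches
    net.forward()                          # loads the next LMDB batch
    pred = net.blobs['result'].data        # predicted depth, shape (N, 1, H, W)
    gt = net.blobs['depth'].data           # ground-truth depth, same shape
    # per-image L2 distance between prediction and ground truth
    errors.append(np.sqrt(((pred - gt) ** 2).sum(axis=(1, 2, 3))))

print('mean per-image L2 error:', np.concatenate(errors).mean())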

Regarding machine-learning - Caffe loss layers, mean and accuracy, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/40462524/
