
python - How to programmatically generate deploy.prototxt for caffe in python


I wrote Python code to programmatically generate the .prototxt files of a convolutional neural network (CNN) for training and validation in caffe. Here is my function:

import caffe
from caffe import layers as L, params as P

def custom_net(lmdb, batch_size):
    # define your own net!
    n = caffe.NetSpec()

    # keep this data layer for all networks
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB,
                             source=lmdb, ntop=2,
                             transform_param=dict(scale=1. / 255))

    n.conv1 = L.Convolution(n.data, kernel_size=6, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.conv3 = L.Convolution(n.pool2, kernel_size=4, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool3 = L.Pooling(n.conv3, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.conv4 = L.Convolution(n.pool3, kernel_size=2, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool4 = L.Pooling(n.conv4, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.fc1 = L.InnerProduct(n.pool4, num_output=50,
                           weight_filler=dict(type='xavier'))

    n.drop1 = L.Dropout(n.fc1, dropout_param=dict(dropout_ratio=0.5))

    n.score = L.InnerProduct(n.drop1, num_output=2,
                             weight_filler=dict(type='xavier'))

    # keep this loss layer for all networks
    n.loss = L.SoftmaxWithLoss(n.score, n.label)

    return n.to_proto()

with open('net_train.prototxt', 'w') as f:
    f.write(str(custom_net(train_lmdb_path, train_batch_size)))

with open('net_test.prototxt', 'w') as f:
    f.write(str(custom_net(test_lmdb_path, test_batch_size)))

Is there a way to similarly generate a deploy.prototxt for testing on unseen data that is not stored in an lmdb file? If so, I would greatly appreciate it if someone could point me to a reference.

Best Answer

It's quite simple:

import caffe
from caffe import layers as L, params as P

def custom_net(lmdb, batch_size):
    # define your own net!
    n = caffe.NetSpec()

    if lmdb is None:  # "deploy" flavor
        # assuming your data is of shape 3x224x224
        n.data = L.Input(input_param={'shape': {'dim': [1, 3, 224, 224]}})
    else:
        # keep this data layer for all networks
        n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB,
                                 source=lmdb, ntop=2,
                                 transform_param=dict(scale=1. / 255))

    # the other layers are common to all flavors: train/val/deploy...
    n.conv1 = L.Convolution(n.data, kernel_size=6, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.conv3 = L.Convolution(n.pool2, kernel_size=4, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool3 = L.Pooling(n.conv3, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.conv4 = L.Convolution(n.pool3, kernel_size=2, num_output=48,
                            weight_filler=dict(type='xavier'))
    n.pool4 = L.Pooling(n.conv4, kernel_size=2, stride=2, pool=P.Pooling.MAX)

    n.fc1 = L.InnerProduct(n.pool4, num_output=50,
                           weight_filler=dict(type='xavier'))
    # do you "drop" in deploy as well? up to you to decide...
    n.drop1 = L.Dropout(n.fc1, dropout_param=dict(dropout_ratio=0.5))
    n.score = L.InnerProduct(n.drop1, num_output=2,
                             weight_filler=dict(type='xavier'))

    if lmdb is None:
        n.prob = L.Softmax(n.score)
    else:
        # keep this loss layer for all networks apart from "deploy"
        n.loss = L.SoftmaxWithLoss(n.score, n.label)

    return n.to_proto()

Now call the function:

with open('net_deploy.prototxt', 'w') as f:
    f.write(str(custom_net(None, None)))
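For reference, the head of the generated net_deploy.prototxt should contain an "Input" layer along these lines (the exact field formatting may differ slightly between caffe versions; the 1x3x224x224 shape matches the assumption made in the code above):

```
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
```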

As you can see, there are two modifications to the prototxt, both conditioned on lmdb being None:
The first: instead of a "Data" layer, there is a declarative "Input" layer that declares only "data" and no "label".
The second change is the output layer: instead of a loss layer, you have a prediction layer (see, e.g., this answer).
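Once net_deploy.prototxt is written, it can be used for inference on raw data with pycaffe. A minimal sketch, assuming a trained weights file (the name my_model.caffemodel is hypothetical) and an input already scaled by 1/255 to match the transform_param used during training:

```python
import numpy as np
import caffe

# load the deploy net together with trained weights
# (the weights file name here is hypothetical)
net = caffe.Net('net_deploy.prototxt', 'my_model.caffemodel', caffe.TEST)

# a single 3x224x224 input; replace this random array with a real image,
# preprocessed and scaled by 1/255 to match the training transform
img = np.random.rand(1, 3, 224, 224).astype(np.float32)

net.blobs['data'].data[...] = img
out = net.forward()

# 'prob' is the Softmax output declared in the deploy flavor
print(out['prob'][0])  # class probabilities for the single input
```

Note that because the deploy net has no "Data" layer, you are responsible for applying the same preprocessing (scaling, mean subtraction, channel order) that the training data layer applied.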

Regarding "python - How to programmatically generate deploy.prototxt for caffe in python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40986009/
