
python - Visualizing CNN or pooling layer outputs in tflearn

Reposted · author: 行者123 · updated 2023-11-28 17:08:23

Is there any way to visualize the output of a CNN (convolutional) or pooling layer in tflearn, during training or even at test time? I have looked at TensorFlow visualization code, but because sessions and feed dicts are involved I keep getting errors like "unhashable type: 'numpy.ndarray'", even though my images are all the same size. So I decided to ask whether there is a way to visualize the output of any layer. Below is my tflearn layer code:

import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from sklearn import cross_validation

X_train, X_test, y_train, y_test = cross_validation.train_test_split(data, labels, test_size=0.1)

tf.reset_default_graph()
convnet = input_data(shape=[None, 50, 50, 3], name='input')
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)

convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)

convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)

convnet = fully_connected(convnet, 128, activation='relu')
convnet = dropout(convnet, 0.4)
convnet = fully_connected(convnet, 6, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=0.005, loss='categorical_crossentropy', name='MyClassifier')
model = tflearn.DNN(convnet, tensorboard_dir='log', tensorboard_verbose=0)
model.fit(X_train, y_train, n_epoch=20, validation_set=(X_test, y_test), snapshot_step=20, show_metric=True, run_id='MyClassifier')
print("Saving the model")
model.save('model.tflearn')

How can I visualize the output of any layer in a way that still works during training or testing? By "output" I mean the transformed images showing detected edges or other low-level features. Thanks.

Best Answer

As mentioned here, you can inspect the output produced by an intermediate layer simply by defining a new model whose output is the layer you want to observe. First, declare your original model (but keep references to the intermediate layers you want to observe):

convnet = input_data(shape=[None, 50, 50, 3], name='input')
convnet = conv_2d(convnet, 32, 5, activation='relu')
max_0 = max_pool_2d(convnet, 5)
convnet = conv_2d(max_0, 64, 5, activation='relu')
max_1 = max_pool_2d(convnet, 5)
...
convnet = regression(...)
model = tflearn.DNN(...)
model.fit(...)

Now simply create one model per observed layer and predict on the input data:

observed = [max_0, max_1, max_2]
observers = [tflearn.DNN(v, session=model.session) for v in observed]
outputs = [m.predict(X_test) for m in observers]
print([d.shape for d in outputs])

For your model, this prints the following shapes for the evaluated tensors:

[(2, 10, 10, 32), (2, 2, 2, 64), (2, 1, 1, 32)]
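These shapes can be checked by hand: conv_2d uses 'same' padding by default, so it preserves the spatial size, and only the three max_pool_2d(…, 5) calls (whose stride defaults to the kernel size, with 'same' padding, i.e. a ceiling division) shrink the 50×50 input. A minimal sketch of that arithmetic in plain Python (not part of the original answer):

```python
import math

def pooled_size(size, pool=5):
    # max_pool_2d defaults strides to the kernel size and uses
    # 'same' padding, so each pooling step is a ceiling division.
    return math.ceil(size / pool)

# 50x50 input; each conv ('same' padding) keeps the size,
# so only the three pooling layers shrink it:
sizes = [50]
for _ in range(3):
    sizes.append(pooled_size(sizes[-1]))
print(sizes)  # [50, 10, 2, 1]
```

This matches the spatial dimensions 10, 2, and 1 in the shapes above.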

With this, you can inspect the outputs at test time. As for training, perhaps you could use a callback?

import matplotlib.pyplot as plt

class PlottingCallback(tflearn.callbacks.Callback):
    def __init__(self, model, x,
                 layers_to_observe=(),
                 kernels=10,
                 inputs=1):
        self.model = model
        self.x = x
        self.kernels = kernels
        self.inputs = inputs
        self.observers = [tflearn.DNN(l) for l in layers_to_observe]

    def on_epoch_end(self, training_state):
        outputs = [o.predict(self.x) for o in self.observers]

        for i in range(self.inputs):
            plt.figure(frameon=False)
            plt.subplots_adjust(wspace=0.1, hspace=0.1)
            ix = 1
            for o in outputs:
                for kernel in range(self.kernels):
                    plt.subplot(len(outputs), self.kernels, ix)
                    plt.imshow(o[i, :, :, kernel])
                    plt.axis('off')
                    ix += 1
            plt.savefig('outputs-for-image:%i-at-epoch:%i.png'
                        % (i, training_state.epoch))

model.fit(X_train, y_train,
          ...
          callbacks=[PlottingCallback(model, X_test, (max_0, max_1, max_2))])

At each epoch, this will save images similar to the following to your disk:

Outputs for the first image in x_test, first epoch.
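If you would rather save a single image per input instead of a grid of subplots, the feature maps can also be tiled into one mosaic array first and passed to a single imshow/imsave call. A hypothetical NumPy helper (tile_feature_maps and its cols parameter are names I made up, not tflearn API), sketched under the assumption that one image's activations have shape (H, W, C):

```python
import numpy as np

def tile_feature_maps(fmap, cols):
    # fmap: one image's activations, shape (H, W, C).
    # Zero-pads the channel axis up to a multiple of `cols`, then
    # rearranges the maps into a (rows*H, cols*W) mosaic.
    h, w, c = fmap.shape
    rows = -(-c // cols)  # ceiling division
    padded = np.zeros((h, w, rows * cols), dtype=fmap.dtype)
    padded[:, :, :c] = fmap
    grid = padded.transpose(2, 0, 1).reshape(rows, cols, h, w)
    return grid.transpose(0, 2, 1, 3).reshape(rows * h, cols * w)

# e.g. the first max-pool output for one image: (10, 10, 32) -> (40, 80)
mosaic = tile_feature_maps(np.random.rand(10, 10, 32), cols=8)
print(mosaic.shape)  # (40, 80)
```

The resulting 2-D array can then be saved with a single plt.imsave call inside the callback.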

Regarding "python - Visualizing CNN or pooling layer outputs in tflearn", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49516612/
