
python - How to get the values of the penultimate layer of a convolutional neural network (CNN)?


I am trying to implement a CNN for a classification task. I want to see how the weights are being optimized at each epoch. For that, I need the values of the penultimate layer. Also, I will hard-code the last layer and its backpropagation myself. Suggestions for a suitable API would be very helpful.

Edit: I have added the code from the Keras example and am looking for edits to it. This link provides some hints. I have marked the layer whose output I need.

from __future__ import print_function

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.datasets import imdb

# set parameters:
max_features = 5000
maxlen = 400
batch_size = 100
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 100

print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

print('Build model...')
model = Sequential()

# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
                    embedding_dims,
                    input_length=maxlen))
model.add(Dropout(0.2))

# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
                 kernel_size,
                 padding='valid',
                 activation='relu',
                 strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())

# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))

# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid')) #<======== I need output after this.



model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test))

Best Answer

You can access the individual layers of the model like this:

num_layer = 7 # Dense(1) layer
layer = model.layers[num_layer]
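
If you are not sure which index corresponds to which layer, a quick sketch like the one below (simply enumerating model.layers) prints each layer's index, class name and output shape, so you can confirm that index 7 is the Dense(1) layer in this model:

# List index, class name and output shape of every layer,
# so the hard-coded index above can be verified.
for i, l in enumerate(model.layers):
    print(i, l.__class__.__name__, l.output_shape)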

I want to see how the weights are being optimized at each epoch.

To get the weights of a layer, use layer.get_weights(), like this:

w, b = layer.get_weights() # weights and bias of Dense(1)
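
Since the goal is to see how these weights change at each epoch, one option (a sketch, not the only way; the class name WeightLogger is just an illustrative choice) is a small Keras callback that calls get_weights() on the layer of interest at the end of every epoch and stores the result:

from keras.callbacks import Callback

class WeightLogger(Callback):
    """Store the weights of one layer after every epoch (illustrative helper)."""
    def __init__(self, layer_index):
        super(WeightLogger, self).__init__()
        self.layer_index = layer_index
        self.history = []

    def on_epoch_end(self, epoch, logs=None):
        w, b = self.model.layers[self.layer_index].get_weights()
        self.history.append((w.copy(), b.copy()))

# pass it to fit(), e.g.
# model.fit(x_train, y_train, ..., callbacks=[WeightLogger(7)])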

I need the values of the penultimate layer.

To get the evaluated output of the last layer, use model.predict():

prediction = model.predict(x_test)

To get the evaluated output of any other layer, use TensorFlow directly, like this:

import tensorflow as tf

input = tf.placeholder(tf.float32)  # create input placeholder
layer_output = layer(input)         # create layer output operation

init_op = tf.global_variables_initializer()  # initialize variables

with tf.Session() as sess:
    sess.run(init_op)

    # evaluate layer output
    output = sess.run(layer_output, feed_dict={input: x_test})
    print(output)
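
Alternatively, and this answers the penultimate-layer question more directly, you can build a second Keras Model that reuses the trained layers but stops at the layer you care about, and simply call predict() on it. This is a sketch that assumes the final Activation('sigmoid') is the last layer, so model.layers[-2] is the Dense(1) layer right before it:

from keras.models import Model

# Truncated model whose output is the penultimate layer
# (the pre-sigmoid Dense(1) output, i.e. model.layers[-2]).
penultimate_model = Model(inputs=model.input,
                          outputs=model.layers[-2].output)
penultimate_output = penultimate_model.predict(x_test)
print(penultimate_output.shape)  # (num_samples, 1)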

Regarding "python - How to get the values of the penultimate layer of a convolutional neural network (CNN)?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46578504/
