
python - keras: ValueError: Error when checking model target: expected activation_1 to have shape (None, 60) but got array with shape (10, 100)


I am trying to port RocAlphaGo to play the Game of the Amazons, and I ran into a problem while implementing the supervised policy trainer.

from keras.models import Sequential, Model
from keras.layers.core import Activation, Flatten
from keras.layers import convolutional

defaults = {
    "board": 10,
    "filters_per_layer": 128,
    "layers": 12,
    "filter_width_1": 5
}
# copy defaults, but override with anything in kwargs
params = defaults
network = Sequential()
# create first layer
network.add(convolutional.Convolution2D(
    input_shape=(6, 10, 10),
    nb_filter=128,
    nb_row=5,
    nb_col=5,
    init='uniform',
    activation='relu',
    border_mode='same'))

# create all other layers
for i in range(2, 13):
    # use filter_width_K if it is there, otherwise use 3
    filter_key = "filter_width_%d" % i
    filter_width = params.get(filter_key, 3)

    # use filters_per_layer_K if it is there, otherwise use default value
    filter_count_key = "filters_per_layer_%d" % i
    filter_nb = params.get(filter_count_key, 128)

    network.add(convolutional.Convolution2D(
        nb_filter=filter_nb,
        nb_row=filter_width,
        nb_col=filter_width,
        init='uniform',
        activation='relu',
        border_mode='same'))

# the last layer maps each <filters_per_layer> feature to a number
network.add(convolutional.Convolution2D(
    nb_filter=1,
    nb_row=1,
    nb_col=1,
    init='uniform',
    border_mode='same'))
# reshape output to be board x board
network.add(Flatten())
# softmax makes it into a probability distribution
network.add(Activation('softmax'))
  • keras 1.2.0
  • python 2.7

  • It raises the following exception:

    ValueError: Error when checking model target: expected activation_1 to have shape (None, 60) but got array with shape (10, 100)



The training dataset is a (10, 6, 10, 10) array: 10 samples of 6 feature planes each, where each plane is a 10x10 array (the board). Why does the model expect (None, 60)?
If I change input_shape=(6, 10, 10) to input_shape=(10, 10, 10), I get:

    ValueError: Error when checking model input: expected convolution2d_input_1 to have shape (None, 10, 10, 10) but got array with shape (10, 6, 10, 10)



All of the code is here.

Best Answer

As Matias said in the comments, if you add

    network.summary() 

you may notice that your convolutions are applied to the first 2 dimensions of the input data (i.e. (6,10,10)), while your features are in the first dimension. By default, when you use the TensorFlow backend, Keras assumes that the dimension holding your features is the third one, not the first. So when you apply 128 filters to a (6,10,10) array, the output is (6,10,128), which, if I understand correctly, is not what you want.

For this reason, at the output of the last convolutional layer you get a (6,10,1) array, which is flattened to (, 60) instead of the (, 100) you expect.
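A quick way to see this concretely (a sketch, assuming the network built in the question and the TensorFlow backend's default channels_last ordering):

# With channels_last, the (6, 10, 10) input is read as a 6x10 grid with 10 channels,
# so the final 1x1 convolution yields (6, 10, 1) and Flatten() produces 60 values.
print(network.output_shape)  # expected: (None, 60)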

There are two ways to fix your network: either change your input data to the (10,10,6) format, or use the data_format="channels_first" argument of the Convolution2D() layers.
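For the first option, a minimal sketch (train_x is a hypothetical name for the (10, 6, 10, 10) training array from the question):

import numpy as np

# Move the 6 feature planes from axis 1 to the last axis so each sample becomes
# a (10, 10, 6) stack, matching Keras' channels_last default.
train_x = np.transpose(train_x, (0, 2, 3, 1))
print(train_x.shape)  # (10, 10, 10, 6)
# The first layer's input_shape would then be (10, 10, 6).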

Here is the code for the second option:
from keras.models import Sequential, Model
from keras.layers.core import Activation, Flatten
from keras.layers import convolutional

defaults = {
    "board": 10,
    "filters_per_layer": 128,
    "layers": 12,
    "filter_width_1": 5
}
# copy defaults, but override with anything in kwargs
params = defaults
network = Sequential()
# create first layer
network.add(convolutional.Convolution2D(
    input_shape=(6, 10, 10),
    nb_filter=128,
    nb_row=5,
    nb_col=5,
    init='uniform',
    activation='relu',
    border_mode='same',
    data_format='channels_first'))

# create all other layers
for i in range(2, 13):
    # use filter_width_K if it is there, otherwise use 3
    filter_key = "filter_width_%d" % i
    filter_width = params.get(filter_key, 3)

    # use filters_per_layer_K if it is there, otherwise use default value
    filter_count_key = "filters_per_layer_%d" % i
    filter_nb = params.get(filter_count_key, 128)

    network.add(convolutional.Convolution2D(
        nb_filter=filter_nb,
        nb_row=filter_width,
        nb_col=filter_width,
        init='uniform',
        activation='relu',
        border_mode='same',
        data_format='channels_first'))

# the last layer maps each <filters_per_layer> feature to a number
network.add(convolutional.Convolution2D(
    nb_filter=1,
    nb_row=1,
    nb_col=1,
    init='uniform',
    border_mode='same',
    data_format='channels_first'))
# reshape output to be board x board
network.add(Flatten())
# softmax makes it into a probability distribution
network.add(Activation('softmax'))
# display your network summary
network.summary()

EDIT

Given your Keras version, you should instead use the 'dim_ordering' argument and set it to 'th'.

I found this information in the keras documentation.
from keras.models import Sequential, Model
from keras.layers.core import Activation, Flatten
from keras.layers import convolutional

defaults = {
    "board": 10,
    "filters_per_layer": 128,
    "layers": 12,
    "filter_width_1": 5
}
# copy defaults, but override with anything in kwargs
params = defaults
network = Sequential()
# create first layer
network.add(convolutional.Convolution2D(
    input_shape=(6, 10, 10),
    nb_filter=128,
    nb_row=5,
    nb_col=5,
    init='uniform',
    activation='relu',
    border_mode='same',
    dim_ordering='th'))

# create all other layers
for i in range(2, 13):
    # use filter_width_K if it is there, otherwise use 3
    filter_key = "filter_width_%d" % i
    filter_width = params.get(filter_key, 3)

    # use filters_per_layer_K if it is there, otherwise use default value
    filter_count_key = "filters_per_layer_%d" % i
    filter_nb = params.get(filter_count_key, 128)

    network.add(convolutional.Convolution2D(
        nb_filter=filter_nb,
        nb_row=filter_width,
        nb_col=filter_width,
        init='uniform',
        activation='relu',
        border_mode='same',
        dim_ordering='th'))

# the last layer maps each <filters_per_layer> feature to a number
network.add(convolutional.Convolution2D(
    nb_filter=1,
    nb_row=1,
    nb_col=1,
    init='uniform',
    border_mode='same',
    dim_ordering='th'))
# reshape output to be board x board
network.add(Flatten())
# softmax makes it into a probability distribution
network.add(Activation('softmax'))
# display your network summary
network.summary()
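With dim_ordering='th', the network output becomes (None, 100), matching a target of shape (10, 100). A minimal training sketch under that assumption (train_x and train_y are hypothetical names for the (10, 6, 10, 10) inputs and (10, 100) one-hot move targets):

# Compile and fit with the Keras 1.x API (nb_epoch rather than epochs).
network.compile(optimizer='sgd', loss='categorical_crossentropy')
network.fit(train_x, train_y, batch_size=10, nb_epoch=10)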

Regarding "python - keras: ValueError: Error when checking model target: expected activation_1 to have shape (None, 60) but got array with shape (10, 100)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48860388/
