
python - ResourceExhaustedError when allocating a tensor of shape [] and type float in Keras


My input shape is (299, 299, 3).

My graphics card is a GTX 1070 (8 GB of memory).

Other specs: Python 3.6, Keras 2.x, TensorFlow backend (1.4), Windows 7.

It doesn't work even with a batch size of 1.

I feel my card should be able to handle a batch of size one --

Here is my code:

import keras
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model


def full_model():
    # Model layers: two parallel convolution towers on a 299x299x3 input
    input_img = Input(shape=(299, 299, 3))

    tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
    tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)

    tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
    tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)

    # Concatenate the towers along the channel axis
    concatenated_layer = keras.layers.concatenate([tower_1, tower_2], axis=3)

    bottleneck = MaxPooling2D((2, 2), strides=(2, 2), padding='same')(concatenated_layer)
    flatten = Flatten()(bottleneck)
    dense_1 = Dense(500, activation='relu')(flatten)
    predictions = Dense(12, activation='softmax')(dense_1)

    model = Model(inputs=input_img, outputs=predictions)
    sgd = keras.optimizers.SGD(lr=0.1, momentum=0.0, decay=0.0, nesterov=False)
    model.compile(optimizer=sgd,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    return model




hdf5_path = r'C:\Users\Moondra\Desktop\Keras Applications\training.hdf5'
model = full_model()


def run_model(hdf5_path,
              epochs=10,
              steps_per_epoch=8,
              classes=12,
              batch_size=1, model=model):

    for epoch in range(epochs):
        # loading_hdf5_files is my own helper module; load_batches yields (x, y) batches
        batches = loading_hdf5_files.load_batches(batch_size=1,
                                                  hdf5_path=hdf5_path,
                                                  classes=classes)
        for step in range(steps_per_epoch):
            x, y = next(batches)
            # plt.imshow(x[0])
            # plt.show()
            x = (x / 255).astype('float32')
            print(x.shape)
            data = model.train_on_batch(x, y)
            print('loss : {:.5}, accuracy : {:.2%}'.format(*data))

    return model

I can't seem to process even a batch of size one.

Here is the last part of the error:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor of shape [] and type float
[[Node: conv2d_4/random_uniform/sub = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [] values: 0.0866025388>, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

Best Answer

It turns out I had too many parameters.

After running print(model.summary()), I saw I had over a billion parameters.
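
For context, here is a rough back-of-the-envelope estimate (my own sketch based on the layer shapes above, not the actual model.summary() output) of where those parameters come from; nearly all of them sit in the Dense(500) layer that follows Flatten:

# Rough parameter count for the Flatten -> Dense(500) connection
# (assumes the shapes produced by the model above; check model.summary() for the real numbers)
h = w = 150                      # ceil(299 / 2) after the 2x2 MaxPooling with 'same' padding
channels = 64 + 64               # two concatenated 64-filter towers
flat_units = h * w * channels    # 150 * 150 * 128 = 2,880,000
dense_params = flat_units * 500 + 500
print(dense_params)              # 1,440,000,500 -- about 1.4 billion weights in one layer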

I increased the size of the MaxPooling and didn't run into any more problems.
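
For anyone hitting the same wall, a minimal sketch of that kind of fix (illustrative only; the 8x8 pool size here is an assumption to tune, not my exact final code) is to pool more aggressively before Flatten so the Dense layer sees far fewer inputs:

# Larger pooling window before Flatten (illustrative; adjust to your accuracy/memory trade-off)
bottleneck = MaxPooling2D((8, 8), strides=(8, 8), padding='same')(concatenated_layer)
flatten = Flatten()(bottleneck)                     # ceil(299/8) = 38 -> 38 * 38 * 128 ~= 185k units
dense_1 = Dense(500, activation='relu')(flatten)    # ~92M parameters instead of ~1.4B
predictions = Dense(12, activation='softmax')(dense_1)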

Regarding python - ResourceExhaustedError when allocating a tensor of shape [] and type float in Keras, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47405091/
