
Keras model ValueError: Error when checking model target


I haven't coded in many years, so forgive me. I am trying to do something that may be impossible. I have 38 videos of people performing the same basic action. I want to train a model to identify the people doing it correctly versus incorrectly. I am using color right now because grayscale wasn't working either, and I wanted to test just like the example I was using. I am using the model defined in an example (link).

Keras, Python 3.5 in Anaconda 64, TensorFlow backend, on Windows 10 (64-bit).

I hope to try different models on this problem, and to use grayscale to reduce memory, but I can't get past the first step!

Thanks!!!

Here is my code:

import time
import numpy as np
import sys
import os
import cv2
import keras
import tensorflow as tf

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization
from keras.layers import Conv3D, Conv2D, MaxPooling2D, GRU, ConvLSTM2D, TimeDistributed


y_cat = np.zeros(40,np.float)
good = "Good"
bad = "Bad"


batch_size = 32
num_classes = 1
epochs = 1
nvideos = 38
nframes = 130
nrows = 240
ncols = 320
nchan = 3

x_learn = np.zeros((nvideos,nframes,nrows,ncols,nchan),np.int32)
x_learn = np.load(".\\train\\datasetcolor.npy")

with open(".\\train\\tags.txt") as ft:
    y_learn = ft.readlines()
y_learn = [x.strip() for x in y_learn]
ft.close()

# transform string tags to numeric.
for i in range (0,len(y_learn)):
    if (y_learn[i] == good): y_cat[i] = 1
    elif (y_learn[i] == bad): y_cat[i] = 0


#build model
# duplicating from https://github.com/fchollet/keras/blob/master/examples/conv_lstm.py
model = Sequential()
model.image_dim_ordering = 'tf'
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(nframes,nrows,ncols,nchan),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
                 activation='sigmoid',
                 padding='same', data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')


print(model.summary())

# fit with first 3 videos because I don't have the horsepower yet
history = model.fit(x_learn[:3], y_learn[:3],
                    batch_size=batch_size,
                    epochs=epochs)

print (history)

Results:

Layer (type)                 Output Shape              Param #   
=================================================================
conv_lst_m2d_5 (ConvLSTM2D) (None, 130, 240, 320, 40) 62080
_________________________________________________________________
batch_normalization_5 (Batch (None, 130, 240, 320, 40) 160
_________________________________________________________________
conv_lst_m2d_6 (ConvLSTM2D) (None, 130, 240, 320, 40) 115360
_________________________________________________________________
batch_normalization_6 (Batch (None, 130, 240, 320, 40) 160
_________________________________________________________________
conv_lst_m2d_7 (ConvLSTM2D) (None, 130, 240, 320, 40) 115360
_________________________________________________________________
batch_normalization_7 (Batch (None, 130, 240, 320, 40) 160
_________________________________________________________________
conv_lst_m2d_8 (ConvLSTM2D) (None, 130, 240, 320, 40) 115360
_________________________________________________________________
batch_normalization_8 (Batch (None, 130, 240, 320, 40) 160
_________________________________________________________________
conv3d_1 (Conv3D) (None, 130, 240, 320, 1) 1081
=================================================================
Total params: 409,881.0
Trainable params: 409,561
Non-trainable params: 320.0
_________________________________________________________________
None
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-d909d285f474> in <module>()
82 history = model.fit(x_learn[:3], y_learn[:3],
83 batch_size=batch_size,
---> 84 epochs=epochs)
85
86 print (history)

ValueError: Error when checking model target: expected conv3d_1 to have 5 dimensions, but got array with shape (3, 1)

Best Answer

The "target" means the problem is in the output of your model versus the format of y_learn.

The array y_learn should be exactly the same shape as the model's output, because the model outputs a "guess", while y_learn is the "correct answer". The system can only compare the guess with the correct answer if they have the same dimensions.

See the difference:

  • Model output (see the summary): (None, 130, 240, 320, 1)
  • y_learn: (None, 1)

Where "None" is the batch size. You gave y_learn[:3], so the batch size for this training run is 3.
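
For illustration, here is a minimal check (my own sketch, not part of the original answer; it assumes the model and the y_cat labels built in the question above) that makes the mismatch visible before fit() is ever called:

import numpy as np

# Shape of the model's "guess", as shown in the summary above.
print(model.output_shape)                 # (None, 130, 240, 320, 1)

# Shape of the "correct answer" passed as the target: one label per video.
targets = np.asarray(y_cat[:3]).reshape(-1, 1)
print(targets.shape)                      # (3, 1)

# fit() raises the ValueError because, beyond the batch axis,
# these two shapes do not agree.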

To correct it properly, we need to understand what y_learn is.
If I understood correctly, you have just one number, 0 or 1, per video. If that's so, your y_learn is totally fine, and what you need is for your model to output something shaped like (None, 1).

A very simple way to do that (perhaps not the best, and I can't be of more help here...) is to add a final Dense layer with just one neuron:

model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

Now, when you do model.summary(), you will see the final output as (None, 1).
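
As a follow-up (again my own sketch, not part of the original answer): with those two layers added, recompile and train against the numeric y_cat labels built in the question rather than the raw strings in y_learn, for example:

# Assumes the Flatten() and Dense(1) layers above were added to the model,
# and that y_cat holds the 0/1 labels read from tags.txt in the question.
model.compile(loss='binary_crossentropy', optimizer='adadelta')

history = model.fit(x_learn[:3], y_cat[:3].reshape(-1, 1),
                    batch_size=batch_size,
                    epochs=epochs)

With num_classes equal to 1, a sigmoid output paired with binary_crossentropy is the natural setup for this good/bad classification.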

Regarding "Keras model ValueError: Error when checking model target", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43899248/
