
python - TensorFlow ValueError: too many values to unpack (expected 2)


I have already looked for this problem on Reddit, Stack Overflow, tech forums, the documentation, GitHub issues, etc., and I still cannot solve it.

For reference, I am using TensorFlow with Python 3 on Windows 10 64-bit.

I am trying to use my own dataset in TensorFlow (300 pictures of cats, 512x512, .png format) to train it to know what a cat looks like. If this works, I will train it on other animals and eventually objects.

I can't seem to figure out why I am getting the error ValueError: too many values to unpack (expected 2). The error occurs on the line images,labal = create_batches(10), which points to my function create_batches (see below). I don't know what is causing it, since I am new to TensorFlow. I am trying to build my own neural network based on the MNIST dataset. Code below:

import tensorflow as tf
import numpy as np
import os
import sys
import cv2


content = []
labels_list = []
with open("data/cats/files.txt") as ff:
    for line in ff:
        line = line.rstrip()
        content.append(line)

with open("data/cats/labels.txt") as fff:
    for linee in fff:
        linee = linee.rstrip()
        labels_list.append(linee)

def create_batches(batch_size):
    images = []
    for img in content:
        #f = open(img,'rb')
        #thedata = f.read().decode('utf8')
        thedata = cv2.imread(img)
        thedata = tf.contrib.layers.flatten(thedata)
        images.append(thedata)
    images = np.asarray(images)

    labels = tf.convert_to_tensor(labels_list, dtype=tf.string)

    print(content)
    #print(labels_list)

    while(True):
        for i in range(0,298,10):
            yield images[i:i+batch_size], labels_list[i:i+batch_size]


imgs = tf.placeholder(dtype=tf.float32, shape=[None,262144])
lbls = tf.placeholder(dtype=tf.float32, shape=[None,10])

W = tf.Variable(tf.zeros([262144,10]))
b = tf.Variable(tf.zeros([10]))

y_ = tf.nn.softmax(tf.matmul(imgs,W) + b)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(lbls * tf.log(y_), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for i in range(10000):
    images,labal = create_batches(10)
    sess.run(train_step, feed_dict={imgs:images, lbls: labal})

correct_prediction = tf.equal(tf.argmax(y_,1), tf.argmax(lbls,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

print(sess.run(accuracy, feed_dict={imgs:content, lbls:labels_list}))

The error:

Traceback (most recent call last):
  File "B:\Josh\Programming\Python\imgpredict\predict.py", line 54, in <module>
    images,labal = create_batches(2)
ValueError: too many values to unpack (expected 2)
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
(A few hundred lines of this)
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile

My GitHub link if anyone needs it. The project folder is "imgpredict".

Best Answer

You are yielding the result the wrong way:

yield(images[i:i+batch_size]) #,labels_list[i:i+batch_size])

That gives you one yielded value, but when you call your method you expect two yielded values:

images,labal = create_batches(10)

Either yield two values, for example:

yield (images[i:i+batch_size] , labels_list[i:i+batch_size])

(uncomment it) or expect only one.
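As a side note, here is a minimal sketch of the difference, independent of the question's data. make_batches below is a hypothetical stand-in for create_batches; the key point is that a tuple is unpacked per yielded item inside a loop, not from the generator object itself:

# Hypothetical stand-in for create_batches: yields (batch, labels) pairs.
def make_batches(data, labels, batch_size):
    for i in range(0, len(data), batch_size):
        # Yielding a tuple lets the caller unpack two values per iteration.
        yield data[i:i+batch_size], labels[i:i+batch_size]

data = list(range(6))
labels = ['a', 'b', 'c', 'd', 'e', 'f']

# Unpacking happens per yielded pair, inside the loop:
for batch, batch_labels in make_batches(data, labels, 2):
    print(batch, batch_labels)

# By contrast, "batch, batch_labels = make_batches(data, labels, 2)" would try to
# unpack the generator object itself rather than a yielded pair.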

EDIT: You should use parentheses both when yielding and when receiving the result, like this:

#when yielding, remember that yield returns a Generator, therefore the ()
yield (images[i:i+batch_size] , labels_list[i:i+batch_size])

#When receiving also, even though this is not correct
(images,labal) = create_batches(10)

However, that is not how I would use the yield option; normally you iterate over the method that returns the generator. In your case it should look something like this:

#do the training several times as you have
for i in range(10000):
    #now here you should iterate over your generator, in order to gain its benefits
    #that is you dont load the entire result set into memory
    #remember to receive with () as mentioned
    for (images, labal) in create_batches(10):
        #do whatever you want with that data
        sess.run(train_step, feed_dict={imgs:images, lbls: labal})
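One practical caveat: since the question's create_batches loops forever with while(True), iterating directly over it never terminates. A minimal sketch, assuming the infinite generator is kept, that caps each outer training step with itertools.islice from the standard library (the 30 is illustrative, roughly 300 images divided by a batch size of 10):

import itertools

batches = create_batches(10)  # infinite generator from the question
for i in range(10000):
    # Take a bounded number of batches per outer step so the loop can finish.
    for images, labal in itertools.islice(batches, 30):
        sess.run(train_step, feed_dict={imgs: images, lbls: labal})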

You can also check this question about yield and generators.

About python - TensorFlow ValueError: too many values to unpack (expected 2), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45022315/
