
python - How to use TensorFlow Datasets with OpenCV preprocessing?


I am building a pipeline for text recognition and I want to use TensorFlow Datasets to load the data, with some preprocessing done in OpenCV.

I am following this tutorial: https://www.tensorflow.org/guide/datasets#applying_arbitrary_python_logic_with_tfpy_func

I have this preprocessing function:

import cv2
import numpy as np

def preprocess(path, imgSize=(1024, 64), dataAugmentation=False):

    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    kernel = np.ones((3, 3), np.uint8)
    th, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    img = cv2.dilate(img, kernel, iterations=1)

    # create target image and copy sample image into it
    (wt, ht) = imgSize
    (h, w) = img.shape
    fx = w / wt
    fy = h / ht
    f = max(fx, fy)
    newSize = (max(min(wt, int(w / f)), 1),
               max(min(ht, int(h / f)), 1))  # scale according to f (result at least 1 and at most wt or ht)
    img = cv2.resize(img, newSize)

    # add random padding to fit the target size if data augmentation is true
    # otherwise add padding to the right
    if newSize[1] == ht:
        if dataAugmentation:
            padding_width_left = np.random.random_integers(0, wt - newSize[0])
            img = cv2.copyMakeBorder(img, 0, 0, padding_width_left, wt - newSize[0] - padding_width_left, cv2.BORDER_CONSTANT, None, (0, 0))
        else:
            img = cv2.copyMakeBorder(img, 0, 0, 0, wt - newSize[0], cv2.BORDER_CONSTANT, None, (0, 0))
    else:
        img = cv2.copyMakeBorder(img, int(np.floor((ht - newSize[1]) / 2)), int(np.ceil((ht - newSize[1]) / 2)), 0, 0, cv2.BORDER_CONSTANT, None, (0, 0))

    # transpose for TF
    img = cv2.transpose(img)

    return img

But if I use this:

import os
import tensorflow as tf

list_images = os.listdir(images_path)
image_paths = []
for i in range(len(list_images)):
    image_paths.append("iam-database/images/" + list_images[i])

dataset = tf.data.Dataset.from_tensor_slices(image_paths)
dataset = dataset.map(lambda filename: tuple(tf.py_function(preprocess, [filename], [tf.uint8])))
print(dataset)

the printed shape is unknown and the preprocessing function does not seem to be applied. What should I do?

Best Answer

To run this preprocessing function inside a Dataset API pipeline, you need to wrap it with tf.py_function, the successor to the deprecated py_func. The main difference is that it can be placed on a GPU and works with eager tensors. You can read more in the documentation.

def preprocess(path, imgSize=(1024, 64), dataAugmentation=False):
    path = path.numpy().decode("utf-8")  # .numpy() retrieves data from eager tensor
    img = cv2.imread(path)
    ...
    return img

At this point img is a numpy array. The rest of the function is up to you.

This parse function is the wrapper that goes into the dataset pipeline. It receives the filename as a tensor containing a byte string.

def parse_func(filename):
    out = tf.py_function(preprocess, [filename], tf.uint8)
    return out


dataset = tf.data.Dataset.from_tensor_slices(image_paths)
dataset = dataset.map(parse_func).batch(1)
iterator = dataset.make_one_shot_iterator()
sess = tf.Session()
print(sess.run(iterator.get_next()))
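
As for the unknown shape mentioned in the question: tf.py_function cannot infer the output shape of the wrapped Python code, so the resulting tensor has no static shape. If a static shape is needed downstream (for example to feed a model), one option is to declare it explicitly inside the parse function. This is a minimal sketch, not part of the original answer, assuming the (1024, 64) target size from the question's preprocess function (after cv2.transpose the array is 1024 x 64):

def parse_func(filename):
    out = tf.py_function(preprocess, [filename], tf.uint8)
    # py_function outputs have unknown shape; set it explicitly so that
    # batching and model inputs see a static (1024, 64) shape
    out.set_shape([1024, 64])
    return out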

Regarding python - How to use TensorFlow Datasets with OpenCV preprocessing?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55606909/
