python-3.x - Keras image preprocessing

Reposted. Author: 行者123. Updated: 2023-11-30 08:33:28

My training images are downscaled versions of their corresponding HR images, so the input and output images have different dimensions. For now I'm using a hand-crafted sample of 13 images, but eventually I'd like to be able to use my dataset of around 500 HR (high-resolution) images. That dataset, however, doesn't have images of uniform size, so I'm guessing I'll have to crop them to get a consistent shape.
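Since the dataset images vary in size, one way to get a uniform 512x512 shape is a random crop. A minimal NumPy sketch (the function name and sizes are illustrative, not from the original code):

```python
import numpy as np

def random_crop(img, crop_h=512, crop_w=512, rng=None):
    """Randomly crop an H x W x C image down to crop_h x crop_w."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    assert h >= crop_h and w >= crop_w, "image smaller than crop size"
    top = rng.integers(0, h - crop_h + 1)   # random top-left corner
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]
```

A random crop (rather than a fixed center crop) also acts as extra data augmentation, since each epoch can see a different patch of the same source image.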

I currently have this code set up: it takes a bunch of 512x512x3 images and applies some transformations to augment the data (flips). I thereby obtain a basic set of 39 images in their HR form, and then I downscale them by a factor of 4, which yields a training set of 39 images of dimension 128x128x3.

import numpy as np

from keras.preprocessing.image import ImageDataGenerator

import matplotlib.image as mpimg
import skimage
from skimage import transform

from constants import data_path
from constants import img_width
from constants import img_height

from model import setUpModel


def setUpImages():

    train = []
    finalTest = []

    sample_amnt = 11
    max_amnt = 13

    # Extracting images (512x512)
    for i in range(sample_amnt):
        train.append(mpimg.imread(data_path + str(i) + '.jpg'))

    for i in range(max_amnt - sample_amnt):
        finalTest.append(mpimg.imread(data_path + str(i + sample_amnt) + '.jpg'))

    # TODO: https://keras.io/preprocessing/image/
    # ImageDataGenerator(featurewise_center=False, samplewise_center=False, featurewise_std_normalization=False,
    #                    samplewise_std_normalization=False, zca_whitening=False, zca_epsilon=1e-06, rotation_range=0,
    #                    width_shift_range=0.0, height_shift_range=0.0, brightness_range=None, shear_range=0.0,
    #                    zoom_range=0.0, channel_shift_range=0.0, fill_mode='nearest', cval=0.0, horizontal_flip=False,
    #                    vertical_flip=False, rescale=None, preprocessing_function=None, data_format=None,
    #                    validation_split=0.0, dtype=None)

    # Augmenting data
    trainData = dataAugmentation(train)
    testData = dataAugmentation(finalTest)

    setUpData(trainData, testData)


def setUpData(trainData, testData):

    # print(type(trainData))                       # <class 'numpy.ndarray'>
    # print(len(trainData))                        # 64
    # print(type(trainData[0]))                    # <class 'numpy.ndarray'>
    # print(trainData[0].shape)                    # (1400, 1400, 3)
    # print(trainData[len(trainData)//2-1].shape)  # (1400, 1400, 3)
    # print(trainData[len(trainData)//2].shape)    # (350, 350, 3)
    # print(trainData[len(trainData)-1].shape)     # (350, 350, 3)

    # TODO: subtract mean of all images from all images

    # Separating the training data
    Y_train = trainData[:len(trainData)//2]  # First half is the unaltered data
    X_train = trainData[len(trainData)//2:]  # Second half is the deteriorated data

    # Separating the testing data
    Y_test = testData[:len(testData)//2]  # First half is the unaltered data
    X_test = testData[len(testData)//2:]  # Second half is the deteriorated data

    # Adjusting shapes for Keras input  # TODO: make into a function?
    X_train = np.array([x for x in X_train])
    Y_train = np.array([x for x in Y_train])
    Y_test = np.array([x for x in Y_test])
    X_test = np.array([x for x in X_test])

    # # Sanity check: display four images (2x HR/LR)
    # plt.figure(figsize=(10, 10))
    # for i in range(2):
    #     plt.subplot(2, 2, i + 1)
    #     plt.imshow(Y_train[i], cmap=plt.cm.binary)
    # for i in range(2):
    #     plt.subplot(2, 2, i + 1 + 2)
    #     plt.imshow(X_train[i], cmap=plt.cm.binary)
    # plt.show()

    setUpModel(X_train, Y_train, X_test, Y_test)


# TODO: possibly remove once Keras Preprocessing is integrated?
def dataAugmentation(dataToAugment):
    print("Starting to augment data")
    arrayToFill = []

    # faster computation with values between 0 and 1?
    dataToAugment = np.divide(dataToAugment, 255.)

    # TODO: switch from RGB channels to CbCrY
    # TODO: try grayscale
    # trainingData = np.array(
    #     [(cv2.cvtColor(np.uint8(x * 255), cv2.COLOR_BGR2GRAY) / 255).reshape(350, 350, 1) for x in trainingData])
    # validateData = np.array(
    #     [(cv2.cvtColor(np.uint8(x * 255), cv2.COLOR_BGR2GRAY) / 255).reshape(1400, 1400, 1) for x in validateData])

    # adding the normal images (8)
    for i in range(len(dataToAugment)):
        arrayToFill.append(dataToAugment[i])
    # vertical axis flip (-> 16)
    for i in range(len(arrayToFill)):
        arrayToFill.append(np.fliplr(arrayToFill[i]))
    # horizontal axis flip (-> 32)
    for i in range(len(arrayToFill)):
        arrayToFill.append(np.flipud(arrayToFill[i]))

    # downsizing by scale of 4 (-> 64 images of 128x128x3)
    for i in range(len(arrayToFill)):
        arrayToFill.append(skimage.transform.resize(
            arrayToFill[i],
            (img_width / 4, img_height / 4),
            mode='reflect',
            anti_aliasing=True))

    # # Sanity check: display the images
    # plt.figure(figsize=(10, 10))
    # for i in range(64):
    #     plt.subplot(8, 8, i + 1)
    #     plt.imshow(arrayToFill[i], cmap=plt.cm.binary)
    # plt.show()

    return np.array(arrayToFill)

My question is: in my case, can I use the preprocessing tools that Keras provides? Ideally, I'd like to be able to input high-quality images of varying sizes, crop them (rather than downscale them) to 512x512x3, and augment them with flips and the like. Subtracting the mean is also part of what I'd like to achieve. That set would serve as my validation set.
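For the mean subtraction mentioned here, a minimal sketch (assuming images are already scaled to [0, 1] and stacked into rank-4 arrays; the mean is computed per channel over the training set only, then applied to both sets to avoid leaking test statistics):

```python
import numpy as np

def subtract_mean(train, test):
    """Subtract the per-channel training-set mean from both sets.

    train, test: arrays of shape (N, H, W, C).
    Returns the centered arrays and the mean (shape (1, 1, 1, C)).
    """
    mean = train.mean(axis=(0, 1, 2), keepdims=True)
    return train - mean, test - mean, mean
```

Keras can also do this via `ImageDataGenerator(featurewise_center=True)` followed by `datagen.fit(X_train)`, which computes the same statistic internally.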

Reusing that validation set, I'd then downscale all its images by a factor of 4, which would generate my training set.

Those two sets could then be split appropriately to finally obtain the famous X_train, Y_train, X_test, Y_test.

I'm just hesitant to throw away all the work I've done so far to preprocess my mini-sample, but I'm thinking that if it can all be done with a single built-in function, maybe that's the way to go.

This is my first ML project, so I don't know Keras very well, and the documentation isn't always the clearest. Given that I'm working with X and Y of different sizes, I'm wondering whether this function even applies to my project.

Thanks! :)

Best Answer

Yes, you can use Keras's preprocessing functionality. Some snippets below to help you get started...

def cropping_function(x):
    ...
    return cropped_image

X_image_gen = ImageDataGenerator(preprocessing_function=cropping_function,
                                 horizontal_flip=True,
                                 vertical_flip=True)
X_train_flow = X_image_gen.flow(X_train, batch_size=16, seed=1)

Y_image_gen = ImageDataGenerator(horizontal_flip=True,
                                 vertical_flip=True)
Y_train_flow = Y_image_gen.flow(y_train, batch_size=16, seed=1)

train_flow = zip(X_train_flow, Y_train_flow)
model.fit_generator(train_flow)
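The body of cropping_function is elided in the answer. As one hedged possibility, a fixed center crop it might contain (the 512x512 target comes from the question; note that `preprocessing_function` is applied to one image at a time, and `.flow` already expects a uniform-shape array, so a shape-changing crop like this fits more naturally with `flow_from_directory`):

```python
import numpy as np

def cropping_function(x):
    """Center-crop a single H x W x C image to 512x512.

    preprocessing_function receives one image tensor at a time
    and must return an image tensor.
    """
    h, w = x.shape[0], x.shape[1]
    top = (h - 512) // 2
    left = (w - 512) // 2
    return x[top:top + 512, left:left + 512]
```

Note that matching seeds on the two `.flow()` calls is what keeps the X and Y augmentations in sync across the zipped generators, which only works reliably when both generators apply the same set of random transforms.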

Regarding python-3.x - Keras image preprocessing, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52462058/
