I used the template of the custom image generator in Keras so that I can use hdf5 files as input. Initially, the code gave a "shape" error, so I only included from tensorflow.python.keras.utils.data_utils import Sequence following this post. Now I am using it in this form, as you can also see in my colab notebook:
from numpy.random import uniform, randint
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import numpy as np
from tensorflow.python.keras.utils.data_utils import Sequence

class CustomImagesGenerator(Sequence):
    def __init__(self, x, zoom_range, shear_range, rescale, horizontal_flip, batch_size):
        self.x = x
        self.zoom_range = zoom_range
        self.shear_range = shear_range
        self.rescale = rescale
        self.horizontal_flip = horizontal_flip
        self.batch_size = batch_size
        self.__img_gen = ImageDataGenerator()
        self.__batch_index = 0

    def __len__(self):
        # steps_per_epoch, if unspecified, will use the len(generator) as a number of steps.
        # hence this
        return np.floor(self.x.shape[0]/self.batch_size)

    # @property
    # def shape(self):
    #     return self.x.shape

    def next(self):
        return self.__next__()

    def __next__(self):
        start = self.__batch_index*self.batch_size
        stop = start + self.batch_size
        self.__batch_index += 1
        if stop > len(self.x):
            raise StopIteration
        transformed = np.array(self.x[start:stop])  # loads from hdf5
        for i in range(len(transformed)):
            zoom = uniform(self.zoom_range[0], self.zoom_range[1])
            transformations = {
                'zx': zoom,
                'zy': zoom,
                'shear': uniform(-self.shear_range, self.shear_range),
                'flip_horizontal': self.horizontal_flip and bool(randint(0,2))
            }
            transformed[i] = self.__img_gen.apply_transform(transformed[i], transformations)
        import pdb;pdb.set_trace()
        return transformed * self.rescale
I call the generator like this:
import h5py
import tables

in_hdf5_file = tables.open_file("gdrive/My Drive/Colab Notebooks/dataset.hdf5", mode='r')
images = in_hdf5_file.root.train_img

my_gen = CustomImagesGenerator(
    images,
    zoom_range=[0.8, 1],
    batch_size=32,
    shear_range=6,
    rescale=1./255,
    horizontal_flip=False
)

classifier.fit_generator(my_gen, steps_per_epoch=100, epochs=1, verbose=1)
Importing Sequence resolved the "shape" error, but now I get this error:
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/data_utils.py", line 742, in _run
    sequence = list(range(len(self.sequence)))
TypeError: 'numpy.float64' object cannot be interpreted as an integer
How can I fix this? I suspect it is again some conflict within the keras packages, and I don't know how to resolve it.
Best Answer
An example of using model.fit() in your case:
from tensorflow.keras.utils import to_categorical
import tensorflow as tf
import tables
#define your model
...
#load your data from an hdf5 file
in_hdf5_file = tables.open_file("path/to/your/dataset.hdf5", mode='r')
x = in_hdf5_file.root.train_img[:]
y = in_hdf5_file.root.train_labels[:]
yourModel.fit(x, to_categorical(y, 3), epochs=2, batch_size=5)
For more information, read my comment on your original post, or feel free to ask.
Edit: I fixed your generator; now it only needs the path to the hdf5 file:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import *
from tensorflow.keras.utils import to_categorical
import numpy as np
from tensorflow.python.keras.utils.data_utils import Sequence
import tensorflow as tf
import tables

#define your model
...

#training
def h5data_generator(path, batch_size=1):
    batch_index = 0
    while 1:
        with tables.open_file(path, mode='r') as f:
            # wrap around once the whole dataset has been consumed
            if batch_index >= f.root.train_img.shape[0]:
                batch_index = 0
            x = f.root.train_img[batch_index:batch_index + batch_size]
            y = f.root.train_labels[batch_index:batch_index + batch_size]
            batch_index += batch_size
            yield (x, to_categorical(y, 3))
            del x
            del y

my_gen = h5data_generator("path/to/your/dataset.hdf5")
yourModel.fit_generator(my_gen, steps_per_epoch=100, epochs=20, verbose=1)
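As for the TypeError itself (a side note the rest of this answer does not cover): the traceback ends in list(range(len(self.sequence))), and your __len__ returns np.floor(...), which is a numpy.float64 rather than a plain int. If you did want to keep the Sequence subclass, a minimal sketch of that one fix, assuming the error comes from __len__, would be:

import numpy as np

class CustomImagesGenerator(Sequence):
    ...
    def __len__(self):
        # Keras calls len(generator) and builds range(len(...)), so this must
        # return a plain int; np.floor() returns numpy.float64, hence the TypeError
        return int(np.floor(self.x.shape[0] / self.batch_size))

Note that keras.utils.Sequence is also expected to implement __getitem__(index) rather than __next__, so the plain-generator route above is the simpler change overall.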
The problem with your generator is the data it outputs on each step: it is not yielding (x, y), and it cannot; it is yielding only x (the images, in your case). Also, because it subclasses Sequence, Keras tries to interpret it as an object that follows that API, which your generator does not. It does not need to be a class at all; it can be a plain Python generator, as shown in the example in Keras itself (the docstring of fit_generator()):

fit_generator.__doc__:
Fits the model on data yielded batch-by-batch by a Python generator.
The generator is run in parallel to the model, for efficiency.
For instance, this allows you to do real-time data augmentation
on images on CPU in parallel to training your model on GPU.
The use of `keras.utils.Sequence` guarantees the ordering
and guarantees the single use of every input per epoch when
using `use_multiprocessing=True`.
Arguments:
generator: A generator or an instance of `Sequence`
(`keras.utils.Sequence`)
object in order to avoid duplicate data
when using multiprocessing.
The output of the generator must be either
- a tuple `(inputs, targets)`
- a tuple `(inputs, targets, sample_weights)`.
This tuple (a single output of the generator) makes a single batch.
Therefore, all arrays in this tuple must have the same length (equal
to the size of this batch). Different batches may have different
sizes.
For example, the last batch of the epoch is commonly smaller than the
others, if the size of the dataset is not divisible by the batch size.
The generator is expected to loop over its data
indefinitely. An epoch finishes when `steps_per_epoch`
batches have been seen by the model.
steps_per_epoch: Total number of steps (batches of samples)
to yield from `generator` before declaring one epoch
finished and starting the next epoch. It should typically
be equal to the number of samples of your dataset
divided by the batch size.
Optional for `Sequence`: if unspecified, will use
the `len(generator)` as a number of steps.
epochs: Integer, total number of iterations on the data.
verbose: Verbosity mode, 0, 1, or 2.
callbacks: List of callbacks to be called during training.
validation_data: This can be either
- a generator for the validation data
- a tuple (inputs, targets)
- a tuple (inputs, targets, sample_weights).
validation_steps: Only relevant if `validation_data`
is a generator. Total number of steps (batches of samples)
to yield from `generator` before stopping.
Optional for `Sequence`: if unspecified, will use
the `len(validation_data)` as a number of steps.
validation_freq: Only relevant if validation data is provided. Integer
or `collections.Container` instance (e.g. list, tuple, etc.). If an
integer, specifies how many training epochs to run before a new
validation run is performed, e.g. `validation_freq=2` runs
validation every 2 epochs. If a Container, specifies the epochs on
which to run validation, e.g. `validation_freq=[1, 2, 10]` runs
validation at the end of the 1st, 2nd, and 10th epochs.
class_weight: Dictionary mapping class indices to a weight
for the class.
max_queue_size: Integer. Maximum size for the generator queue.
If unspecified, `max_queue_size` will default to 10.
workers: Integer. Maximum number of processes to spin up
when using process-based threading.
If unspecified, `workers` will default to 1. If 0, will
execute the generator on the main thread.
use_multiprocessing: Boolean.
If `True`, use process-based threading.
If unspecified, `use_multiprocessing` will default to `False`.
Note that because this implementation relies on multiprocessing,
you should not pass non-picklable arguments to the generator
as they can't be passed easily to children processes.
shuffle: Boolean. Whether to shuffle the order of the batches at
the beginning of each epoch. Only used with instances
of `Sequence` (`keras.utils.Sequence`).
Has no effect when `steps_per_epoch` is not `None`.
initial_epoch: Epoch at which to start training
(useful for resuming a previous training run)
Returns:
A `History` object.
Example:
```python
def generate_arrays_from_file(path):
while 1:
f = open(path)
for line in f:
# create numpy arrays of input data
# and labels, from each line in the file
x1, x2, y = process_line(line)
yield ({'input_1': x1, 'input_2': x2}, {'output': y})
f.close()
model.fit_generator(generate_arrays_from_file('/my_file.txt'),
steps_per_epoch=10000, epochs=10)
```
Raises:
ValueError: In case the generator yields data in an invalid format.
For more information, check the Keras GitHub page (fit_generator(), to be exact), or again, feel free to ask.
Edit 2: You can also pass batch_size to h5data_generator(); it sets the size of the batch pulled from the dataset on each step.
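For example, a hypothetical usage sketch reusing the names above: pass batch_size and size steps_per_epoch to match, as the docstring suggests.

import tables

BATCH_SIZE = 32

# read the dataset size once so steps_per_epoch matches samples / batch_size
# (assumes the same hdf5 layout as above)
with tables.open_file("path/to/your/dataset.hdf5", mode='r') as f:
    n_samples = f.root.train_img.shape[0]

my_gen = h5data_generator("path/to/your/dataset.hdf5", batch_size=BATCH_SIZE)
yourModel.fit_generator(my_gen,
                        steps_per_epoch=n_samples // BATCH_SIZE,
                        epochs=20,
                        verbose=1)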
About python - Why does a custom image generator in Keras raise the error "object cannot be interpreted as an integer"? We found a similar question on Stack Overflow: https://stackoverflow.com/questions/57714233/