I want to do k-fold cross-validation with tflearn, so I need to reset the network k times. I think I have to reset the graph (e.g. with tf.reset_default_graph()), but I am not sure, and I don't know how to do this with tflearn.
For the following you need hasy_tools.py.
#!/usr/bin/env python
"""
Trains a simple convnet on the HASY dataset.
Gets to 76.78% test accuracy after 1 epoch.
573 seconds per epoch on a GeForce 940MX GPU.
# WARNING: THIS IS NOT WORKING RIGHT NOW
"""
import os
import hasy_tools as ht
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
batch_size = 128
nb_epoch = 1
# input image dimensions
img_rows, img_cols = 32, 32
accuracies = []
for fold in range(1, 4):
    tf.reset_default_graph()
    # Load data
    dataset_path = os.path.join(os.path.expanduser("~"), 'hasy')
    hasy_data = ht.load_data(fold=fold,
                             normalize=True,
                             one_hot=True,
                             dataset_path=dataset_path,
                             flatten=False)
    train_x = hasy_data['train']['X'][:1000]
    train_y = hasy_data['train']['y'][:1000]
    test_x = hasy_data['test']['X']
    test_y = hasy_data['test']['y']
    # Define model
    network = input_data(shape=[None, img_rows, img_cols, 1], name='input')
    network = conv_2d(network, 32, 3, activation='prelu')
    network = conv_2d(network, 64, 3, activation='prelu')
    network = max_pool_2d(network, 2)
    network = dropout(network, keep_prob=0.25)
    network = fully_connected(network, 1024, activation='tanh')
    network = dropout(network, keep_prob=0.5)
    network = fully_connected(network, 369, activation='softmax')
    # Train model
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    model = tflearn.DNN(network, tensorboard_verbose=0)
    model.fit({'input': train_x}, {'target': train_y}, n_epoch=nb_epoch,
              validation_set=({'input': test_x}, {'target': test_y}),
              snapshot_step=100, show_metric=True, run_id='convnet_mnist',
              batch_size=batch_size)
    # Serialize model
    model.save('cv-model-fold-%i.tflearn' % fold)
    # Evaluate model
    score = model.evaluate(test_x, test_y)
    print('Test accuracy: %0.4f%%' % (score[0] * 100))
    accuracies.append(score[0])
accuracies = np.array(accuracies)
print(("CV Accuracy. mean={mean:0.2f}%\t ({min:0.2f}% - {max:0.2f}%)"
       ).format(mean=accuracies.mean() * 100,
                min=accuracies.min() * 100,
                max=accuracies.max() * 100))
Running the code with just one fold works fine, but with multiple folds I get:
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x5556c9100 of size 33554432
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x5576c9100 of size 66481920
I tensorflow/core/common_runtime/bfc_allocator.cc:687] Free at 0x506d7cc00 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:687] Free at 0x506d7d100 of size 62720
I tensorflow/core/common_runtime/bfc_allocator.cc:687] Free at 0x507573700 of size 798208
I tensorflow/core/common_runtime/bfc_allocator.cc:693] Summary of in-use Chunks by size:
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 345 Chunks of size 256 totalling 86.2KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 1024 totalling 1.0KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 23 Chunks of size 1280 totalling 28.8KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 21 Chunks of size 1536 totalling 31.5KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 20 Chunks of size 4096 totalling 80.0KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 16 Chunks of size 73728 totalling 1.12MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 20 Chunks of size 131072 totalling 2.50MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 188928 totalling 184.5KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 17 Chunks of size 262144 totalling 4.25MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 2 Chunks of size 446464 totalling 872.0KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 515584 totalling 503.5KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 524288 totalling 512.0KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 16 Chunks of size 1511424 totalling 23.06MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 6 Chunks of size 16777216 totalling 96.00MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 23026176 totalling 21.96MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 6 Chunks of size 33554432 totalling 192.00MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 66481920 totalling 63.40MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 15 Chunks of size 67108864 totalling 960.00MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 67183872 totalling 64.07MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:700] Sum Total of in-use chunks: 1.40GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats:
Limit: 1500971008
InUse: 1500109824
MaxInUse: 1500109824
NumAllocs: 43767
MaxAllocSize: 844062464
W tensorflow/core/common_runtime/bfc_allocator.cc:274] **************************************************************************************************xx
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 8.00MiB. See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:975] Resource exhausted: OOM when allocating tensor with shape[128,16,16,64]
Traceback (most recent call last):
File "tflearn_hasy_cv.py", line 60, in <module>
batch_size=batch_size)
File "/home/moose/GitHub/tflearn/tflearn/models/dnn.py", line 188, in fit
run_id=run_id)
File "/home/moose/GitHub/tflearn/tflearn/helpers/trainer.py", line 277, in fit
show_metric)
File "/home/moose/GitHub/tflearn/tflearn/helpers/trainer.py", line 684, in _train
feed_batch)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 766, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[128,16,16,64]
[[Node: MaxPool2D/MaxPool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 2, 2, 1], padding="SAME", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](Conv2D_1/PReLU/add)]]
[[Node: Crossentropy/Mean/_19 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_920_Crossentropy/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'MaxPool2D/MaxPool', defined at:
File "tflearn_hasy_cv.py", line 47, in <module>
network = max_pool_2d(network, 2)
File "/home/moose/GitHub/tflearn/tflearn/layers/conv.py", line 363, in max_pool_2d
inference = tf.nn.max_pool(incoming, kernel, strides, padding)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1617, in max_pool
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 1598, in _max_pool
data_format=data_format, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1128, in __init__
self._traceback = _extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[128,16,16,64]
[[Node: MaxPool2D/MaxPool = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 2, 2, 1], padding="SAME", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/gpu:0"](Conv2D_1/PReLU/add)]]
[[Node: Crossentropy/Mean/_19 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_920_Crossentropy/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Edit: I ran into the same problem with Keras.
Best Answer
I put the call tf.reset_default_graph() right before building the neural network, i.e. before the for loop. You should also define the model outside the for loop, just before it; only the training should happen inside the k-fold loop.
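Below is a minimal sketch of that restructuring, assuming the same hasy_tools helpers (ht.load_data) and hyperparameters as in the question. Re-running the variable initializer so that each fold starts from fresh weights is an added assumption; the answer itself does not spell that step out.

# Sketch only: graph reset and model definition happen once, before the fold
# loop; only data loading, training and evaluation happen inside it.
import os

import numpy as np
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

import hasy_tools as ht  # helper module from the question

img_rows, img_cols = 32, 32
batch_size = 128
nb_epoch = 1
dataset_path = os.path.join(os.path.expanduser("~"), 'hasy')

# Reset and build the graph exactly once, before the cross-validation loop.
tf.reset_default_graph()
network = input_data(shape=[None, img_rows, img_cols, 1], name='input')
network = conv_2d(network, 32, 3, activation='prelu')
network = conv_2d(network, 64, 3, activation='prelu')
network = max_pool_2d(network, 2)
network = dropout(network, keep_prob=0.25)
network = fully_connected(network, 1024, activation='tanh')
network = dropout(network, keep_prob=0.5)
network = fully_connected(network, 369, activation='softmax')
network = regression(network, optimizer='adam', learning_rate=0.001,
                     loss='categorical_crossentropy', name='target')
model = tflearn.DNN(network, tensorboard_verbose=0)

# Assumption: re-running the initializer gives every fold fresh weights.
init_op = tf.global_variables_initializer()

accuracies = []
for fold in range(1, 4):
    model.session.run(init_op)
    data = ht.load_data(fold=fold, normalize=True, one_hot=True,
                        dataset_path=dataset_path, flatten=False)
    model.fit({'input': data['train']['X']}, {'target': data['train']['y']},
              n_epoch=nb_epoch, batch_size=batch_size, show_metric=True,
              validation_set=({'input': data['test']['X']},
                              {'target': data['test']['y']}),
              run_id='convnet_hasy_fold_%i' % fold)
    score = model.evaluate(data['test']['X'], data['test']['y'])
    accuracies.append(score[0])

accuracies = np.array(accuracies)
print("CV accuracy: mean=%0.2f%% (%0.2f%% - %0.2f%%)"
      % (accuracies.mean() * 100, accuracies.min() * 100,
         accuracies.max() * 100))

With the graph built only once, each fold reuses the same variables instead of allocating a fresh copy of every tensor on the GPU, which is what exhausted the roughly 1.5 GiB of device memory shown in the log above.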
Regarding "machine-learning - How to do cross-validation with tflearn?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42173704/