The model I'm training is a modified version of U-Net; the goal is to apply a filter to 512x1808 images.
The problem I'm running into is that I always run out of memory when I call model.predict, but not when I call model.fit to train on the data.
Here is the model:
import tensorflow as tf

## create model
IMG_WIDTH = width_padded #2048
IMG_HEIGHT = height #512
IMG_CHANNELS = 3 #RGB
inputs = tf.keras.layers.Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
#encoder path
c1a = tf.keras.layers.Conv2D(16, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(inputs)
c1a = tf.keras.layers.Dropout(0.1)(c1a)
c1a = tf.keras.layers.Conv2D(16, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(c1a)
p1a = tf.keras.layers.MaxPooling2D((2,2))(c1a)
c1b = tf.keras.layers.Conv2D(16, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(c1a)
c1b = tf.keras.layers.Dropout(0.1)(c1b)
c1b = tf.keras.layers.Conv2D(16, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(c1b)
p1b = tf.keras.layers.MaxPooling2D((2,2))(c1b)
c1 = tf.keras.layers.Conv2D(16, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(c1b)
c1 = tf.keras.layers.Dropout(0.1)(c1)
c1 = tf.keras.layers.Conv2D(16, (3,3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
p1 = tf.keras.layers.MaxPooling2D((2,2))(c1)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
c2 = tf.keras.layers.Dropout(0.1)(c2)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)
p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p3)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)
p4 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(c4)
c5 = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p4)
c5 = tf.keras.layers.Dropout(0.3)(c5)
c5 = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)
#decoder path
u6 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u6)
c6 = tf.keras.layers.Dropout(0.2)(c6)
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c6)
u7 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u7)
c7 = tf.keras.layers.Dropout(0.2)(c7)
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c7)
u8 = tf.keras.layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u8)
c8 = tf.keras.layers.Dropout(0.1)(c8)
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c8)
u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1a], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(u9)
c9 = tf.keras.layers.Dropout(0.1)(c9)
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c9)
outputs = tf.keras.layers.Conv2D(3, (1, 1), activation='sigmoid')(c9)
model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
The training call:
train_in, train_out, test_in, test_out = dataset['train_in'], dataset['train_out'], dataset['test_in'], dataset['test_out']
results = model.fit(train_in, train_out, validation_split=0.1, batch_size=1, epochs=5, callbacks=callbacks)
The prediction call:
preds_train = new_model.predict(dataset["test_in"])
plt.imshow(preds_train)
And the error message:
2020-06-18 17:07:30.212907: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-06-18 17:07:31.394251: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows
Relying on driver to perform ptx compilation. This message will be only logged once.
2020-06-18 17:07:31.457663: W tensorflow/core/common_runtime/bfc_allocator.cc:239] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.39GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-06-18 17:07:31.469003: W tensorflow/core/common_runtime/bfc_allocator.cc:239] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.39GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-06-18 17:07:31.744585: W tensorflow/core/common_runtime/bfc_allocator.cc:239] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.02GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-06-18 17:07:31.753330: W tensorflow/core/kernels/conv_ops.cc:1014] Failed to allocate memory for convolution redzone checking; skipping this check. This is benign and only means that we won't check cudnn for out-of-bounds reads and writes. This message will only be printed once.
2020-06-18 17:07:42.400811: W tensorflow/core/common_runtime/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.00GiB (rounded to 2147483648). Current allocation summary follows.
2020-06-18 17:07:42.406709: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (256): Total Chunks: 74, Chunks in use: 73. 18.5KiB allocated for chunks. 18.3KiB in use in bin. 8.0KiB client-requested in use in bin.
2020-06-18 17:07:42.413456: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (512): Total Chunks: 15, Chunks in use: 15. 7.5KiB allocated for chunks. 7.5KiB in use in bin. 7.5KiB client-requested in use in bin.
2020-06-18 17:07:42.430835: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (1024): Total Chunks: 10, Chunks in use: 10. 12.5KiB allocated for chunks. 12.5KiB in use in bin. 12.1KiB client-requested in use in bin.
2020-06-18 17:07:42.445773: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (2048): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-06-18 17:07:42.457267: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (4096): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-06-18 17:07:42.466993: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (8192): Total Chunks: 21, Chunks in use: 21. 189.5KiB allocated for chunks. 189.5KiB in use in bin. 186.0KiB client-requested in use in bin.
2020-06-18 17:07:42.473657: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (16384): Total Chunks: 6, Chunks in use: 6. 108.0KiB allocated for chunks. 108.0KiB in use in bin. 108.0KiB client-requested in use in bin.
2020-06-18 17:07:42.479695: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (32768): Total Chunks: 9, Chunks in use: 9. 312.0KiB allocated for chunks. 312.0KiB in use in bin. 312.0KiB client-requested in use in bin.
2020-06-18 17:07:42.493219: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (65536): Total Chunks: 6, Chunks in use: 6. 432.0KiB allocated for chunks. 432.0KiB in use in bin. 432.0KiB client-requested in use in bin.
2020-06-18 17:07:42.500604: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (131072): Total Chunks: 10, Chunks in use: 9. 1.45MiB allocated for chunks. 1.22MiB in use in bin. 1.22MiB client-requested in use in bin.
2020-06-18 17:07:42.509323: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (262144): Total Chunks: 6, Chunks in use: 6. 1.69MiB allocated for chunks. 1.69MiB in use in bin. 1.69MiB client-requested in use in bin.
2020-06-18 17:07:42.516243: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (524288): Total Chunks: 9, Chunks in use: 9. 5.20MiB allocated for chunks. 5.20MiB in use in bin. 4.88MiB client-requested in use in bin.
2020-06-18 17:07:42.531832: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (1048576): Total Chunks: 6, Chunks in use: 5. 7.16MiB allocated for chunks. 5.63MiB in use in bin. 5.63MiB client-requested in use in bin.
2020-06-18 17:07:42.542340: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (2097152): Total Chunks: 4, Chunks in use: 4. 10.46MiB allocated for chunks. 10.46MiB in use in bin. 7.88MiB client-requested in use in bin.
2020-06-18 17:07:42.548178: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (4194304): Total Chunks: 1, Chunks in use: 0. 4.00MiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-06-18 17:07:42.565222: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (8388608): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-06-18 17:07:42.571730: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (16777216): Total Chunks: 1, Chunks in use: 1. 24.00MiB allocated for chunks. 24.00MiB in use in bin. 24.00MiB client-requested in use in bin.
2020-06-18 17:07:42.585429: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (33554432): Total Chunks: 2, Chunks in use: 0. 72.00MiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-06-18 17:07:42.591711: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (67108864): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-06-18 17:07:42.599001: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-06-18 17:07:42.608031: I tensorflow/core/common_runtime/bfc_allocator.cc:869] Bin (268435456): Total Chunks: 5, Chunks in use: 2. 6.00GiB allocated for chunks. 4.00GiB in use in bin. 4.00GiB client-requested in use in bin.
2020-06-18 17:07:42.614548: I tensorflow/core/common_runtime/bfc_allocator.cc:885] Bin for 2.00GiB was 256.00MiB, Chunk State:
2020-06-18 17:07:42.625221: I tensorflow/core/common_runtime/bfc_allocator.cc:891] Size: 512.00MiB | Requested Size: 384.00MiB | in_use: 0 | bin_num: 20
2020-06-18 17:07:42.633311: I tensorflow/core/common_runtime/bfc_allocator.cc:891] Size: 512.00MiB | Requested Size: 384.00MiB | in_use: 0 | bin_num: 20
2020-06-18 17:07:42.638322: I tensorflow/core/common_runtime/bfc_allocator.cc:891] Size: 1.00GiB | Requested Size: 0B | in_use: 0 | bin_num: 20, prev: Size: 2.00GiB | Requested Size: 2.00GiB | in_use: 1 | bin_num: -1
2020-06-18 17:07:42.649208: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 1048576
2020-06-18 17:07:42.660619: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710E00000 next 1 of size 1280
2020-06-18 17:07:42.663664: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710E00500 next 2 of size 256
2020-06-18 17:07:42.675899: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710E00600 next 173 of size 256
2020-06-18 17:07:42.679636: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710E00700 next 4 of size 256
...
0000000710EBE900 next 48 of size 256
2020-06-18 17:07:42.895535: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EBEA00 next 49 of size 18432
2020-06-18 17:07:42.898600: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EC3200 next 50 of size 256
2020-06-18 17:07:42.907778: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EC3300 next 51 of size 9216
2020-06-18 17:07:42.911681: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EC5700 next 52 of size 256
2020-06-18 17:07:42.915017: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EC5800 next 53 of size 256
2020-06-18 17:07:42.925573: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EC5900 next 54 of size 256
2020-06-18 17:07:42.928572: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EC5A00 next 55 of size 256
2020-06-18 17:07:42.931868: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 0000000710EC5B00 next 175 of size 256
2020-06-18 17:07:42.940794: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000710EC5C00 next 179 of size 256
2020-06-18 17:07:42.944828: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 0000000710EC5D00 next 18446744073709551615 of size 238336
2020-06-18 17:07:42.948296: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 2097152
2020-06-18 17:07:42.959328: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711000000 next 18446744073709551615 of size 2097152
2020-06-18 17:07:42.963454: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 4194304
2020-06-18 17:07:42.966035: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 0000000711200000 next 18446744073709551615 of size 4194304
2020-06-18 17:07:42.974175: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 8388608
2020-06-18 17:07:42.977823: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711600000 next 28 of size 512
2020-06-18 17:07:42.981141: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711600200 next 32 of size 589824
2020-06-18 17:07:42.991636: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711690200 next 34 of size 131072
2020-06-18 17:07:42.995357: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007116B0200 next 38 of size 294912
2020-06-18 17:07:42.998502: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007116F8200 next 39 of size 147456
2020-06-18 17:07:43.006865: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 000000071171C200 next 56 of size 1606400
2020-06-18 17:07:43.011424: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118A4500 next 112 of size 256
2020-06-18 17:07:43.015342: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118A4600 next 58 of size 256
2020-06-18 17:07:43.027436: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118A4700 next 60 of size 11008
2020-06-18 17:07:43.031001: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118A7200 next 59 of size 256
2020-06-18 17:07:43.040683: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118A7300 next 62 of size 9216
2020-06-18 17:07:43.044864: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118A9700 next 61 of size 256
2020-06-18 17:07:43.047870: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118A9800 next 64 of size 9216
2020-06-18 17:07:43.058766: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118ABC00 next 63 of size 256
2020-06-18 17:07:43.107214: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118AE600 next 76 of size 512
2020-06-18 17:07:43.111990: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118AE800 next 78 of size 512
2020-06-18 17:07:43.115083: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118AEA00 next 80 of size 1024
2020-06-18 17:07:43.126468: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118AEE00 next 82 of size 1024
2020-06-18 17:07:43.129575: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007118AF200 next 113 of size 524288
2020-06-18 17:07:43.140727: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 000000071192F200 next 85 of size 1179648
2020-06-18 17:07:43.144652: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711A4F200 next 124 of size 1179648
2020-06-18 17:07:43.148012: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711B6F200 next 18446744073709551615 of size 2690560
2020-06-18 17:07:43.159518: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 16777216
2020-06-18 17:07:43.163025: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711E00000 next 84 of size 512
2020-06-18 17:07:43.166416: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711E00200 next 86 of size 512
2020-06-18 17:07:43.174307: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711E00400 next 87 of size 589824
2020-06-18 17:07:43.177822: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711E90400 next 88 of size 512
2020-06-18 17:07:43.181091: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711E90600 next 89 of size 131072
2020-06-18 17:07:43.192275: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711EB0600 next 90 of size 256
2020-06-18 17:07:43.195648: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000711EB0700 next 91 of size 294912
...
2020-06-18 17:07:43.775961: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000712824700 next 15 of size 36864
2020-06-18 17:07:43.779806: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 000000071282D700 next 17 of size 73728
2020-06-18 17:07:43.791130: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 000000071283F700 next 19 of size 147456
...
2020-06-18 17:07:43.815590: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000712A5B700 next 18446744073709551615 of size 3819776
2020-06-18 17:07:43.828008: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 33554432
2020-06-18 17:07:43.830675: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 0000000712E00000 next 18446744073709551615 of size 33554432
2020-06-18 17:07:43.844974: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 67108864
2020-06-18 17:07:43.848356: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000714E00000 next 178 of size 25165824
2020-06-18 17:07:43.860066: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 0000000716600000 next 18446744073709551615 of size 41943040
2020-06-18 17:07:43.863692: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 536870912
2020-06-18 17:07:43.866341: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 0000000718E00000 next 18446744073709551615 of size 536870912
2020-06-18 17:07:43.875482: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 536870912
2020-06-18 17:07:43.879021: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 0000000738E00000 next 18446744073709551615 of size 536870912
2020-06-18 17:07:43.882678: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 2147483648
2020-06-18 17:07:43.893653: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 0000000758E00000 next 18446744073709551615 of size 2147483648
2020-06-18 17:07:43.897406: I tensorflow/core/common_runtime/bfc_allocator.cc:898] Next region of size 3223725568
2020-06-18 17:07:43.910485: I tensorflow/core/common_runtime/bfc_allocator.cc:905] InUse at 00000007EC400000 next 183 of size 2147483648
2020-06-18 17:07:43.914438: I tensorflow/core/common_runtime/bfc_allocator.cc:905] Free at 000000086C400000 next 18446744073709551615 of size 1076241920
2020-06-18 17:07:43.925183: I tensorflow/core/common_runtime/bfc_allocator.cc:914] Summary of in-use Chunks by size:
2020-06-18 17:07:43.928750: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 73 Chunks of size 256 totalling 18.3KiB
2020-06-18 17:07:43.931900: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 15 Chunks of size 512 totalling 7.5KiB
2020-06-18 17:07:43.943729: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 6 Chunks of size 1024 totalling 6.0KiB
2020-06-18 17:07:43.946789: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 1280 totalling 1.3KiB
2020-06-18 17:07:43.949717: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 3 Chunks of size 1792 totalling 5.3KiB
2020-06-18 17:07:43.961057: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 3 Chunks of size 8192 totalling 24.0KiB
2020-06-18 17:07:43.964209: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 16 Chunks of size 9216 totalling 144.0KiB
2020-06-18 17:07:43.977017: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 2 Chunks of size 11008 totalling 21.5KiB
2020-06-18 17:07:43.980707: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 6 Chunks of size 18432 totalling 108.0KiB
2020-06-18 17:07:43.993934: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 3 Chunks of size 32768 totalling 96.0KiB
2020-06-18 17:07:43.997585: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 6 Chunks of size 36864 totalling 216.0KiB
2020-06-18 17:07:44.009845: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 6 Chunks of size 73728 totalling 432.0KiB
2020-06-18 17:07:44.013957: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 3 Chunks of size 131072 totalling 384.0KiB
2020-06-18 17:07:44.024972: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 6 Chunks of size 147456 totalling 864.0KiB
2020-06-18 17:07:44.029296: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 6 Chunks of size 294912 totalling 1.69MiB
2020-06-18 17:07:44.033041: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 524288 totalling 512.0KiB
2020-06-18 17:07:44.042342: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 580608 totalling 567.0KiB
2020-06-18 17:07:44.045601: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 6 Chunks of size 589824 totalling 3.38MiB
2020-06-18 17:07:44.048650: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 804608 totalling 785.8KiB
2020-06-18 17:07:44.060413: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 5 Chunks of size 1179648 totalling 5.63MiB
2020-06-18 17:07:44.063842: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 2097152 totalling 2.00MiB
2020-06-18 17:07:44.066961: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 2359296 totalling 2.25MiB
2020-06-18 17:07:44.077720: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 2690560 totalling 2.57MiB
2020-06-18 17:07:44.081227: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 3819776 totalling 3.64MiB
2020-06-18 17:07:44.091829: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 1 Chunks of size 25165824 totalling 24.00MiB
2020-06-18 17:07:44.095323: I tensorflow/core/common_runtime/bfc_allocator.cc:917] 2 Chunks of size 2147483648 totalling 4.00GiB
2020-06-18 17:07:44.098446: I tensorflow/core/common_runtime/bfc_allocator.cc:921] Sum Total of in-use chunks: 4.05GiB
2020-06-18 17:07:44.109398: I tensorflow/core/common_runtime/bfc_allocator.cc:923] total_region_allocated_bytes_: 6578120192 memory_limit_: 6578120295 available bytes: 103 curr_region_allocation_bytes_: 4294967296
2020-06-18 17:07:44.114692: I tensorflow/core/common_runtime/bfc_allocator.cc:929] Stats:
Limit: 6578120295
InUse: 4346599680
MaxInUse: 4799586304
NumAllocs: 1210
MaxAllocSize: 2164260864
2020-06-18 17:07:44.130208: W tensorflow/core/common_runtime/bfc_allocator.cc:424] **________________******************************************************************________________
2020-06-18 17:07:44.140209: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at conv_ops.cc:501 : Resource exhausted: OOM when allocating tensor with shape[32,16,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2020-06-18 17:07:44.149440: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Resource exhausted: OOM when allocating tensor with shape[32,16,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node model/StatefulPartitionedCall/StatefulPartitionedCall/conv2d_3/StatefulPartitionedCall/Conv2D}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Traceback (most recent call last):
File "c:/Users/shini/Documents/GitHub/skinner/load_test_sample.py", line 126, in <module>
preds_train = new_model.predict(dataset["test_in"])
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 909, in predict
use_multiprocessing=use_multiprocessing)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 462, in predict
steps=steps, callbacks=callbacks, **kwargs)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 444, in _model_iteration
total_epochs=1)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 123, in run_one_epoch
batch_outs = execution_function(iterator)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py", line 86, in execution_function
distributed_function(input_fn))
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\eager\def_function.py", line 457, in __call__
result = self._call(*args, **kwds)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\eager\def_function.py", line 526, in _call
return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\eager\function.py", line 1141, in _filtered_call
self.captured_inputs)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\eager\function.py", line 1224, in _call_flat
ctx, args, cancellation_manager=cancellation_manager)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\eager\function.py", line 511, in call
ctx=ctx)
File "C:\Users\shini\anaconda3\envs\hope2\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,16,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node model/StatefulPartitionedCall/StatefulPartitionedCall/conv2d_3/StatefulPartitionedCall/Conv2D}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[Op:__inference_distributed_function_8302]
Function call stack:
distributed_function
I have already tried working around it with tf.config.experimental.set_memory_growth(gpu[0], True), but to no avail. I would say the model is too large or something along those lines, but if that were the case I would expect training to fail as well. In case it matters, dataset["test_in"] is of type np.memmap. I'm really stumped.
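For reference, a minimal sketch of how memory growth is usually enabled, assuming a standard TensorFlow 2.x setup; it has to run before anything is placed on the GPU, and it only delays allocation rather than shrinking what a single batch needs:

import tensorflow as tf

# Enable on-demand GPU memory allocation for every visible GPU.
# This must run before the first tensor or model touches the GPU.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)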
Best answer
If you need a batch size of 1 for training, you should also use that batch size for testing/evaluation, and I don't think you are doing that. You should set the batch_size parameter in your model.predict call:
preds_train = new_model.predict(dataset["test_in"], batch_size=1)
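As a sanity check on this answer, the failing tensor in the log has shape [32, 16, 512, 2048]: a batch of 32 (Keras's default predict batch size), 16 channels from one of the first convolution layers, and the 512x2048 padded image, which at float32 is exactly the 2.00GiB allocation that fails, so forcing batch_size=1 keeps prediction within roughly the same memory budget as training. Note also that predict returns a whole batch of images, so plt.imshow needs a single image; a minimal sketch reusing the question's variables:

import matplotlib.pyplot as plt

preds_train = new_model.predict(dataset["test_in"], batch_size=1)

# predict returns an array of shape (num_images, 512, 2048, 3);
# imshow expects one image at a time, e.g. the first prediction.
plt.imshow(preds_train[0])
plt.show()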
Regarding "python - model.predict causes an OOM issue but model.fit does not", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/62454676/