I've noticed a problem where, during evaluate(), I don't see results consistent with what fit() reported. I've found a lot of discussion online where people describe similar problems. For example, this open issue discusses dropout layers and batch normalization as possible causes, but others note that there may be problems separate from dropout and batch normalization. For a beginner it's hard to even know what the actual problem is.
The network architecture I'm using does include batch normalization, but I'm not sure whether that is the issue.
The data for this demo can be downloaded here.
This script demonstrates the problem I'm running into:
import random
import os
import matplotlib.image as mpimg
import cv2
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

HEIGHT_WIDTH = 299
BATCH_SIZE = 10
VERBOSE = 2
SANITY_SWITCH = False

print('starting script')

net = tf.keras.applications.InceptionResNetV2(
    include_top=True,
    weights=None,  # 'imagenet',
    input_tensor=None,
    input_shape=None,
    pooling=None,
    classes=2,  # 1000,
    classifier_activation='softmax'
)

print_output = True

def utility_metric(y_true, y_pred):
    # Dummy metric used only to print the labels/predictions of the first batch
    # whenever print_output has been re-enabled.
    global print_output
    if print_output:
        print(f'y_true:{y_true.numpy()}')
        print(f'y_pred:{y_pred.numpy()}')
        print_output = False
    return 0

net.compile(
    optimizer='ADAM',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy', utility_metric]
)
net.run_eagerly = True

class_map = {'dog': 0, 'cat': 1}

def preprocess(file):
    # Load an image, resize it to the network input size, scale pixels to [-1, 1],
    # and take the label from the parent directory name.
    imdata = mpimg.imread(file)
    imdata = cv2.resize(imdata, dsize=(HEIGHT_WIDTH, HEIGHT_WIDTH), interpolation=cv2.INTER_LINEAR)
    imdata.shape = (HEIGHT_WIDTH, HEIGHT_WIDTH, 3)
    imdata /= 127.5
    imdata -= 1.
    return imdata, class_map[os.path.basename(os.path.dirname(file))]

train_data = [f'data/Training/cat/{x}' for x in os.listdir('data/Training/cat')] + [f'data/Training/dog/{x}' for x in os.listdir('data/Training/dog')]
test_data = [f'data/Testing/cat/{x}' for x in os.listdir('data/Testing/cat')] + [f'data/Testing/dog/{x}' for x in os.listdir('data/Testing/dog')]

random.shuffle(train_data)
random.shuffle(test_data)

if SANITY_SWITCH:
    # Swap the train and test sets as a sanity check.
    tmp_data = train_data
    train_data = test_data
    test_data = tmp_data

def get_gen(data):
    # Generator factory that yields (images, labels) batches of size BATCH_SIZE.
    def gen():
        pairs = []
        i = 0
        for im_file in data:
            i += 1
            if i <= BATCH_SIZE:
                pairs += [preprocess(im_file)]
            if i == BATCH_SIZE:
                yield (
                    [pair[0] for pair in pairs],
                    [pair[1] for pair in pairs]
                )
                pairs.clear()
                i = 0
    return gen

def get_ds(data):
    return tf.data.Dataset.from_generator(
        get_gen(data),
        (tf.float32, tf.int64),
        output_shapes=(
            tf.TensorShape((BATCH_SIZE, HEIGHT_WIDTH, HEIGHT_WIDTH, 3)),
            tf.TensorShape(([BATCH_SIZE]))
        )
    )

print('starting training')
net.fit(
    get_ds(train_data),
    epochs=5,
    verbose=VERBOSE,
    use_multiprocessing=True,
    workers=16,
    batch_size=BATCH_SIZE,
    shuffle=False
)

print('starting testing')
print_output = True
net.evaluate(
    get_ds(test_data),
    verbose=VERBOSE,
    batch_size=BATCH_SIZE,
    use_multiprocessing=True,
    workers=16,
)

print('script complete')
The full output is here:
starting script
2020-12-22 15:29:33.896474: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-12-22 15:29:34.184215: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:04:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.186083: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 1 with properties:
pciBusID: 0000:05:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.188086: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 2 with properties:
pciBusID: 0000:08:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.190088: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 3 with properties:
pciBusID: 0000:09:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.192124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 4 with properties:
pciBusID: 0000:84:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.194144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 5 with properties:
pciBusID: 0000:85:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.196095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 6 with properties:
pciBusID: 0000:88:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.197451: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 7 with properties:
pciBusID: 0000:89:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:34.208178: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-12-22 15:29:34.301110: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-12-22 15:29:34.348641: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-12-22 15:29:34.370185: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-12-22 15:29:34.459524: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-12-22 15:29:34.471473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-12-22 15:29:34.599447: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-12-22 15:29:34.634806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0, 1, 2, 3, 4, 5, 6, 7
2020-12-22 15:29:34.635371: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-12-22 15:29:34.680254: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2000105000 Hz
2020-12-22 15:29:34.687348: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561e331d4820 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-12-22 15:29:34.687415: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-12-22 15:29:35.617673: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:04:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.619368: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 1 with properties:
pciBusID: 0000:05:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.621161: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 2 with properties:
pciBusID: 0000:08:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.622953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 3 with properties:
pciBusID: 0000:09:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.624745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 4 with properties:
pciBusID: 0000:84:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.626508: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 5 with properties:
pciBusID: 0000:85:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.628264: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 6 with properties:
pciBusID: 0000:88:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.629460: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 7 with properties:
pciBusID: 0000:89:00.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2020-12-22 15:29:35.629581: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-12-22 15:29:35.629633: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-12-22 15:29:35.629685: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-12-22 15:29:35.629733: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-12-22 15:29:35.629788: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-12-22 15:29:35.629837: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-12-22 15:29:35.629886: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-12-22 15:29:35.657298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0, 1, 2, 3, 4, 5, 6, 7
2020-12-22 15:29:35.659638: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-12-22 15:29:35.678371: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-12-22 15:29:35.678447: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 1 2 3 4 5 6 7
2020-12-22 15:29:35.678500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N Y Y Y N N N N
2020-12-22 15:29:35.678538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 1: Y N Y Y N N N N
2020-12-22 15:29:35.678569: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 2: Y Y N Y N N N N
2020-12-22 15:29:35.678597: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 3: Y Y Y N N N N N
2020-12-22 15:29:35.678624: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 4: N N N N N Y Y Y
2020-12-22 15:29:35.678652: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 5: N N N N Y N Y Y
2020-12-22 15:29:35.678678: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 6: N N N N Y Y N Y
2020-12-22 15:29:35.678705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 7: N N N N Y Y Y N
2020-12-22 15:29:35.703703: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10689 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:04:00.0, compute capability: 3.7)
2020-12-22 15:29:35.711407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 8534 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 0000:05:00.0, compute capability: 3.7)
2020-12-22 15:29:35.716593: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10689 MB memory) -> physical GPU (device: 2, name: Tesla K80, pci bus id: 0000:08:00.0, compute capability: 3.7)
2020-12-22 15:29:35.721879: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 10689 MB memory) -> physical GPU (device: 3, name: Tesla K80, pci bus id: 0000:09:00.0, compute capability: 3.7)
2020-12-22 15:29:35.726952: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:4 with 10689 MB memory) -> physical GPU (device: 4, name: Tesla K80, pci bus id: 0000:84:00.0, compute capability: 3.7)
2020-12-22 15:29:35.732126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:5 with 10689 MB memory) -> physical GPU (device: 5, name: Tesla K80, pci bus id: 0000:85:00.0, compute capability: 3.7)
2020-12-22 15:29:35.736838: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:6 with 10689 MB memory) -> physical GPU (device: 6, name: Tesla K80, pci bus id: 0000:88:00.0, compute capability: 3.7)
2020-12-22 15:29:35.740357: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:7 with 108 MB memory) -> physical GPU (device: 7, name: Tesla K80, pci bus id: 0000:89:00.0, compute capability: 3.7)
2020-12-22 15:29:35.746472: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561e387dea00 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-12-22 15:29:35.746517: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla K80, Compute Capability 3.7
2020-12-22 15:29:35.746537: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): Tesla K80, Compute Capability 3.7
2020-12-22 15:29:35.746577: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (2): Tesla K80, Compute Capability 3.7
2020-12-22 15:29:35.746594: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (3): Tesla K80, Compute Capability 3.7
2020-12-22 15:29:35.746614: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (4): Tesla K80, Compute Capability 3.7
2020-12-22 15:29:35.746645: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (5): Tesla K80, Compute Capability 3.7
2020-12-22 15:29:35.746664: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (6): Tesla K80, Compute Capability 3.7
2020-12-22 15:29:35.746694: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (7): Tesla K80, Compute Capability 3.7
starting training
Epoch 1/5
2020-12-22 15:29:48.307104: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-12-22 15:29:51.694232: W tensorflow/stream_executor/gpu/asm_compiler.cc:81] Running ptxas --version returned 256
2020-12-22 15:29:51.796020: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 256, output:
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
2020-12-22 15:29:52.577156: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
y_true:[[1.]
[1.]
[0.]
[1.]
[1.]
[1.]
[1.]
[0.]
[1.]
[1.]]
y_pred:[[0.58956003 0.41043994]
[0.63762885 0.36237112]
[0.53731585 0.46268415]
[0.5393683 0.4606317 ]
[0.90735996 0.09264001]
[0.552977 0.44702297]
[0.7115651 0.28843486]
[0.4068687 0.59313136]
[0.5482196 0.4517804 ]
[0.4330527 0.56694734]]
72/72 - 81s - loss: 0.9134 - accuracy: 0.5417 - utility_metric: 0.0000e+00
Epoch 2/5
72/72 - 81s - loss: 0.7027 - accuracy: 0.5847 - utility_metric: 0.0000e+00
Epoch 3/5
72/72 - 83s - loss: 0.6851 - accuracy: 0.5819 - utility_metric: 0.0000e+00
Epoch 4/5
72/72 - 83s - loss: 0.6810 - accuracy: 0.5944 - utility_metric: 0.0000e+00
Epoch 5/5
72/72 - 83s - loss: 0.6895 - accuracy: 0.5625 - utility_metric: 0.0000e+00
starting testing
y_true:[[1.]
[1.]
[0.]
[0.]
[0.]
[1.]
[1.]
[0.]
[0.]
[1.]]
y_pred:[[0.39538118 0.6046188 ]
[0.39505056 0.6049495 ]
[0.39406297 0.605937 ]
[0.3947329 0.60526717]
[0.3935887 0.60641134]
[0.39452523 0.60547477]
[0.39451653 0.6054835 ]
[0.39475334 0.60524666]
[0.39559898 0.604401 ]
[0.3951175 0.60488254]]
90/90 - 37s - loss: 0.7157 - accuracy: 0.5000 - utility_metric: 0.0000e+00
script complete
The part of the output to focus on is the accuracy:
Training epoch 1: 0.5417
Training epoch 2: 0.5847
Training epoch 3: 0.5819
Training epoch 4: 0.5944
Training epoch 5: 0.5625
Evaluation: 0.5000
I've also included the raw output of the network in both situations. One during training:
y_true:[[1.]
[1.]
[0.]
[1.]
[1.]
[1.]
[1.]
[0.]
[1.]
[1.]]
y_pred:[[0.58956003 0.41043994]
[0.63762885 0.36237112]
[0.53731585 0.46268415]
[0.5393683 0.4606317 ]
[0.90735996 0.09264001]
[0.552977 0.44702297]
[0.7115651 0.28843486]
[0.4068687 0.59313136]
[0.5482196 0.4517804 ]
[0.4330527 0.56694734]]
And one during testing:
y_true:[[1.]
[1.]
[0.]
[0.]
[0.]
[1.]
[1.]
[0.]
[0.]
[1.]]
y_pred:[[0.39538118 0.6046188 ]
[0.39505056 0.6049495 ]
[0.39406297 0.605937 ]
[0.3947329 0.60526717]
[0.3935887 0.60641134]
[0.39452523 0.60547477]
[0.39451653 0.6054835 ]
[0.39475334 0.60524666]
[0.39559898 0.604401 ]
[0.3951175 0.60488254]]
What confuses me is why, during testing, there seems to be so little variation in the outputs across images. That seems related to the root of the problem, but I can't tell what is causing it.
I have run this script several times now, and a few things are consistent. Accuracy during evaluation is always exactly at chance. y_pred always shows very little variation during evaluation, and every output appears to be the same label (for example, during evaluation the model might report every input image as "dog").
Sometimes during training the accuracy climbs above 60%. That doesn't change the problem. I could keep increasing the dataset size and the number of epochs to try to improve the training results, but I'm afraid to move forward without first understanding why the evaluation results are so strange.
Best Answer
I recently ran into a very similar problem with a MobileNetV3Large model.
The problem is that when weights=None is set, all parameters are reset, including the BatchNormalization statistics that are used during evaluation.
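To check whether that is what is happening, one option is to inspect the moving statistics directly. The sketch below is my own illustration (not part of the original answer) and assumes the net model from the script above; freshly initialized BatchNormalization layers have moving_mean ≈ 0 and moving_variance ≈ 1.

import numpy as np

def report_batchnorm_stats(model, prefix=''):
    # Recursively walk the model and summarize each BatchNormalization layer's
    # moving statistics (the values used in inference mode).
    for layer in model.layers:
        if hasattr(layer, 'layers'):
            report_batchnorm_stats(layer, prefix=f'{prefix}{layer.name}/')
        elif isinstance(layer, tf.keras.layers.BatchNormalization):
            mean = layer.moving_mean.numpy()
            var = layer.moving_variance.numpy()
            print(f'{prefix}{layer.name}: mean(|moving_mean|)={np.abs(mean).mean():.4f} '
                  f'mean(moving_variance)={var.mean():.4f}')

report_batchnorm_stats(net)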
Not only that: as a friend pointed out to me, the default BatchNormalization momentum is 0.999, which means the BatchNormalization statistics used only during evaluation (during training the batch mean/variance is used) move very, very slowly.
That would be fine if you were training for millions of steps over a few epochs. On a small dataset those statistics barely change, and evaluation is completely broken.
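To put rough numbers on that (my own back-of-the-envelope sketch, not part of the original answer): Keras updates each moving statistic as moving_stat = momentum * moving_stat + (1 - momentum) * batch_stat, so after N batches it has covered only a fraction 1 - momentum**N of the distance from its initial value. The script above runs roughly 72 batches x 5 epochs ≈ 360 updates:

steps = 72 * 5  # roughly the number of training batches in the script above
for momentum in (0.999, 0.99, 0.9):
    covered = 1 - momentum ** steps
    print(f'momentum={momentum}: moving stats cover ~{covered:.0%} of the gap after {steps} steps')
# momentum=0.999 -> ~30%, momentum=0.99 -> ~97%, momentum=0.9 -> ~100%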
If your problem is the same as mine, the quick fix is to set the momentum of all BatchNormalization layers to 0.9. That can be done with this simple recursive function:
def SetBatchNormalizationMomentum(model, new_value, prefix='', verbose=False):
    for ii, layer in enumerate(model.layers):
        if hasattr(layer, 'layers'):
            SetBatchNormalizationMomentum(layer, new_value, f'{prefix}Layer {ii}/', verbose)
            continue
        elif isinstance(layer, tf.keras.layers.BatchNormalization):
            if verbose:
                print(f'{prefix}Layer {ii}: name={layer.name} momentum={layer.momentum} --> set momentum={new_value}')
            layer.momentum = new_value
I hope this helps you too - it worked here.
(Edited): the code that sets the BatchNorm momentum in MobileNet is here.
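For completeness, a minimal usage sketch (my own, assuming the net, get_ds, train_data, test_data and VERBOSE from the question's script; the answer itself does not show the call site). Whether an explicit re-compile is needed after changing the layer attribute may depend on the TF/Keras version, so it is included here as a precaution:

SetBatchNormalizationMomentum(net, 0.9, verbose=True)

# Recompile so the updated momentum is definitely picked up, then retrain and re-evaluate.
net.compile(
    optimizer='ADAM',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
net.fit(get_ds(train_data), epochs=5, verbose=VERBOSE)
net.evaluate(get_ds(test_data), verbose=VERBOSE)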
Regarding "tensorflow - fit() works as expected, but the model then performs at random during evaluate()", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65415799/