Following the tensorflow documentation, I am trying to use automatic mixed precision (AMP) in Keras style with TensorFlow 2.0. Here is my code:
#!/usr/bin/env python
# coding: utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_hub as hub
import tensorflow.keras.mixed_precision.experimental as mixed_precision
import tensorflow.keras.layers as layers
import numpy as np
import tensorflow as tf
# we can use mixed precision with the following line
policy = mixed_precision.Policy('mixed_float16')
# policy = mixed_precision.Policy('float32')
mixed_precision.set_policy(policy)
print('Compute dtype: %s' % policy.compute_dtype)
print('Variable dtype: %s' % policy.variable_dtype)
num_samples = 1024
batch_size = 16
max_seq_len = 128
num_class = 16
epochs = 3
vocab_size = 30522
# BERT_PATH = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/1'
BERT_PATH = '../input/bert-base-from-tfhub/bert_en_uncased_L-12_H-768_A-12'
def bert_model():
    input_ids = tf.keras.Input((max_seq_len,), dtype=tf.int32, name='input_ids')
    input_masks = tf.keras.Input((max_seq_len,), dtype=tf.int32, name='input_masks')
    input_segments = tf.keras.Input((max_seq_len,), dtype=tf.int32, name='input_segments')
    bert_layer = hub.KerasLayer(BERT_PATH, trainable=True)
    print('bert_layer._dtype_policy:', bert_layer._dtype_policy)
    print('bert_layer._compute_dtype:', bert_layer._compute_dtype)
    print('bert_layer._dtype:', bert_layer._dtype)
    _, bert_sequence_output = bert_layer([input_ids, input_masks, input_segments])
    print("bert_sequence_output.dtype:", bert_sequence_output.dtype)
    x = layers.GlobalAveragePooling1D()(bert_sequence_output)
    logits = layers.Dense(num_class, name="logits")(x)
    print("logits.dtype:", logits.dtype)
    # when using mixed precision, regardless of what your model ends in, make sure the output is float32.
    output = layers.Activation('sigmoid', dtype='float32', name='output')(logits)
    print('output.dtype:', output.dtype)
    model = tf.keras.models.Model(inputs=[input_ids, input_masks, input_segments], outputs=output)
    return model
# make dummy inputs
train_X = []
train_X.append(np.random.randint(0, vocab_size, size=(num_samples, max_seq_len))) # ids
train_X.append(np.zeros(shape=(num_samples, max_seq_len))) # masks
train_X.append(np.zeros(shape=(num_samples, max_seq_len))) # segments
train_Y = np.random.randn(num_samples, num_class) # labels
model = bert_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(loss="binary_crossentropy", optimizer=optimizer)
model.fit(train_X, train_Y, epochs=epochs, verbose=1, batch_size=batch_size)
I think bert_sequence_output.dtype should be float16, because it is the output of bert_layer, whose layer policy is mixed_float16. However, bert_sequence_output.dtype turns out to be float32. Here is the full log:
ssh://xiepengyu@192.168.0.200:22/home/xiepengyu/miniconda3/envs/tf2/bin/python -u /home/xiepengyu/google_quest/scripts/multi_bert_aug_mixed_precision_test.py
2020-01-05 11:30:50.951010: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-01-05 11:30:51.380306: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/xiepengyu/cuda/cuda-10.1/lib64:$LD_LIBRARY_PATH
2020-01-05 11:30:51.380387: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/xiepengyu/cuda/cuda-10.1/lib64:$LD_LIBRARY_PATH
2020-01-05 11:30:51.380399: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2020-01-05 11:30:52.292392: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-05 11:30:52.635553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:03:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-01-05 11:30:52.635599: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-01-05 11:30:52.637236: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-01-05 11:30:52.638264: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-01-05 11:30:52.638493: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-01-05 11:30:52.640188: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-01-05 11:30:52.641278: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-01-05 11:30:52.644628: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-05 11:30:52.650678: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-01-05 11:30:52.650998: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-05 11:30:52.658229: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3499720000 Hz
2020-01-05 11:30:52.658878: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562d05824cc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-05 11:30:52.658896: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-01-05 11:30:52.871435: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562d058cb200 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-01-05 11:30:52.871481: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2020-01-05 11:30:52.875039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:03:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-01-05 11:30:52.875109: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-01-05 11:30:52.875137: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-01-05 11:30:52.875149: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-01-05 11:30:52.875161: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-01-05 11:30:52.875172: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-01-05 11:30:52.875183: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-01-05 11:30:52.875195: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-05 11:30:52.876635: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-01-05 11:30:53.444364: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-05 11:30:53.444427: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-01-05 11:30:53.444436: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-01-05 11:30:53.450671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10392 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
Compute dtype: float16
Variable dtype: float32
bert_layer._dtype_policy: <Policy "mixed_float16", loss_scale=DynamicLossScale(current_loss_scale=32768.0, num_good_steps=0, initial_loss_scale=32768.0, increment_period=2000, multiplier=2.0)>
bert_layer._compute_dtype: float16
bert_layer._dtype: float32
bert_sequence_output.dtype: <dtype: 'float32'>
logits.dtype: <dtype: 'float16'>
output.dtype: <dtype: 'float32'>
Train on 1024 samples
Epoch 1/3
2020-01-05 11:31:06.079381: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 1161 in the outer inference context.
/home/xiepengyu/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:433: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2020-01-05 11:31:08.348584: W tensorflow/core/common_runtime/shape_refiner.cc:88] Function instantiation has undefined input shape at index: 1161 in the outer inference context.
2020-01-05 11:31:18.719649: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
1024/1024 [==============================] - 34s 33ms/sample - loss: 0.0720
Epoch 2/3
1024/1024 [==============================] - 15s 15ms/sample - loss: 0.0185
Epoch 3/3
1024/1024 [==============================] - 15s 15ms/sample - loss: 0.0042
Process finished with exit code 0
When I use the float32 policy instead, the same print statements give me the following (the rest of the log is identical to the mixed_float16 case):
Compute dtype: float32
Variable dtype: float32
bert_layer._dtype_policy: <Policy "float32", loss_scale=None>
bert_layer._compute_dtype: float32
bert_layer._dtype: float32
bert_sequence_output.dtype: <dtype: 'float32'>
logits.dtype: <dtype: 'float32'>
output.dtype: <dtype: 'float32'>
The mixed_float16 policy does work in other custom layers, e.g. the Dense layer named "logits", since its output dtype is float16. But it somehow seems not to work for bert_layer, since the dtype of bert_sequence_output is float32. Another piece of evidence is that GPU memory usage (dominated by the variables in the BERT layer) is almost the same in both cases. So I guess the layer loaded by hub.KerasLayer is fixed to float32, and the mixed_float16 policy cannot change its behavior.
Is that right? What else could cause the problem, and how can I fix it?
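For context on what the printed dtypes mean: under Keras's mixed_float16 policy, variables are kept in float32 while computation is cast to float16, so a policy-aware layer's output should carry the compute dtype. A toy stand-in for that contract (plain Python for illustration; these are not the real Keras classes):

```python
class ToyPolicy:
    """Toy stand-in for a Keras dtype policy (illustration only)."""
    def __init__(self, name):
        self.name = name
        if name == "mixed_float16":
            # Mixed precision: compute in half precision, store variables in full.
            self.compute_dtype, self.variable_dtype = "float16", "float32"
        else:
            self.compute_dtype = self.variable_dtype = name

class ToyLayer:
    """A layer that honors the policy: variables use variable_dtype,
    but outputs are emitted in compute_dtype."""
    def __init__(self, policy):
        self.policy = policy
    def output_dtype(self, input_dtype):
        # A policy-aware layer casts inputs and emits compute_dtype,
        # which is why the Dense "logits" layer in the log prints float16.
        return self.policy.compute_dtype

mixed = ToyPolicy("mixed_float16")
print(ToyLayer(mixed).output_dtype("float32"))  # → float16
```

This is why a float32 output from bert_layer is surprising: a layer that followed the policy would emit float16, as the Dense layer does.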
Best answer
The GTX 1080 Ti does not support mixed-precision training (the log shows compute capability 6.1). You need an NVIDIA RTX card: the 2000 series has Tensor Cores and therefore supports mixed precision.
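NVIDIA Tensor Cores, which mixed precision relies on for speedups, first appeared in GPUs with compute capability 7.0 (Volta). A minimal helper for sanity-checking the compute capability printed in the TF startup log (this function is a hypothetical illustration, not part of TensorFlow):

```python
def supports_tensor_cores(compute_capability: str) -> bool:
    """Return True if a GPU's compute capability string (e.g. '6.1')
    is high enough for Tensor Core mixed-precision speedups.
    Tensor Cores require compute capability 7.0 (Volta) or higher."""
    major, minor = (int(part) for part in compute_capability.split("."))
    return (major, minor) >= (7, 0)

# The GTX 1080 Ti in the log reports computeCapability: 6.1
print(supports_tensor_cores("6.1"))  # → False
print(supports_tensor_cores("7.5"))  # → True (e.g. RTX 2080 Ti, Turing)
```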
Regarding "python - How to use automatic mixed precision in tensorflow 2.0 with hub.KerasLayer", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59596940/