python - TensorFlow error: tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at conv_ops.cc:539 : Resource exhausted

I am trying to train an object detection algorithm on samples that I labeled with Label-img. My images are 1100 x 1100 pixels. The algorithm I am using is Faster R-CNN Inception ResNet V2 1024x1024, found in the TensorFlow 2 Detection Model Zoo. My operating specs are as follows:

  • TensorFlow 2.3.1
  • Python 3.8.6
  • GPU: NVIDIA GeForce RTX 2060 (laptop with 16 GB RAM and a 6-core processor)
  • CUDA: 10.1
  • cuDNN: 7.6
  • Anaconda 3 command prompt

  • The .config file is as follows:
    # Faster R-CNN with Inception Resnet v2 (no atrous)
    # Sync-trained on COCO (with 8 GPUs) with batch size 16 (800x1333 resolution)
    # Initialized from Imagenet classification checkpoint
    # TF2-Compatible, *Not* TPU-Compatible
    #
    # Achieves 39.6 mAP on COCO

    model {
      faster_rcnn {
        num_classes: 1
        image_resizer {
          keep_aspect_ratio_resizer {
            min_dimension: 800
            max_dimension: 1333
            pad_to_max_dimension: true
          }
        }
        feature_extractor {
          type: 'faster_rcnn_inception_resnet_v2_keras'
        }
        first_stage_anchor_generator {
          grid_anchor_generator {
            scales: [0.25, 0.5, 1.0, 2.0]
            aspect_ratios: [0.5, 1.0, 2.0]
            height_stride: 16
            width_stride: 16
          }
        }
        first_stage_box_predictor_conv_hyperparams {
          op: CONV
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.01
            }
          }
        }
        first_stage_nms_score_threshold: 0.0
        first_stage_nms_iou_threshold: 0.7
        first_stage_max_proposals: 300
        first_stage_localization_loss_weight: 2.0
        first_stage_objectness_loss_weight: 1.0
        initial_crop_size: 17
        maxpool_kernel_size: 1
        maxpool_stride: 1
        second_stage_box_predictor {
          mask_rcnn_box_predictor {
            use_dropout: false
            dropout_keep_probability: 1.0
            fc_hyperparams {
              op: FC
              regularizer {
                l2_regularizer {
                  weight: 0.0
                }
              }
              initializer {
                variance_scaling_initializer {
                  factor: 1.0
                  uniform: true
                  mode: FAN_AVG
                }
              }
            }
          }
        }
        second_stage_post_processing {
          batch_non_max_suppression {
            score_threshold: 0.0
            iou_threshold: 0.6
            max_detections_per_class: 100
            max_total_detections: 100
          }
          score_converter: SOFTMAX
        }
        second_stage_localization_loss_weight: 2.0
        second_stage_classification_loss_weight: 1.0
      }
    }

    train_config: {
      batch_size: 1
      num_steps: 200000
      optimizer {
        momentum_optimizer: {
          learning_rate: {
            cosine_decay_learning_rate {
              learning_rate_base: 0.008
              total_steps: 200000
              warmup_learning_rate: 0.0
              warmup_steps: 5000
            }
          }
          momentum_optimizer_value: 0.9
        }
        use_moving_average: false
      }
      gradient_clipping_by_norm: 10.0
      fine_tune_checkpoint_version: V2
      fine_tune_checkpoint: "pre-trained-models/faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/checkpoint/ckpt-0"
      fine_tune_checkpoint_type: "detection"
      data_augmentation_options {
        random_horizontal_flip {
        }
      }
      data_augmentation_options {
        random_adjust_hue {
        }
      }
      data_augmentation_options {
        random_adjust_contrast {
        }
      }
      data_augmentation_options {
        random_adjust_saturation {
        }
      }
      data_augmentation_options {
        random_square_crop_by_scale {
          scale_min: 0.6
          scale_max: 1.3
        }
      }
    }

    train_input_reader: {
      label_map_path: "annotations/label_map.pbtxt"
      tf_record_input_reader {
        input_path: "annotations/train.record"
      }
    }

    eval_config: {
      metrics_set: "coco_detection_metrics"
      use_moving_averages: false
      batch_size: 1
    }

    eval_input_reader: {
      label_map_path: "annotations/label_map.pbtxt"
      shuffle: false
      num_epochs: 1
      tf_record_input_reader {
        input_path: "annotations/test.record"
      }
    }
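For reference, the settings above that dominate GPU memory are the keep_aspect_ratio_resizer dimensions and first_stage_max_proposals. A minimal sketch of lowering them programmatically, assuming the TF Object Detection API's config_util module (the file name, output directory, and smaller values are illustrative assumptions, not a verified fix):

    from object_detection.utils import config_util

    # Load the pipeline file shown above into its proto representation.
    configs = config_util.get_configs_from_pipeline_file('pipeline.config')

    # Illustrative memory-oriented tweaks; the specific values are assumptions.
    resizer = configs['model'].faster_rcnn.image_resizer.keep_aspect_ratio_resizer
    resizer.min_dimension = 512
    resizer.max_dimension = 512
    configs['model'].faster_rcnn.first_stage_max_proposals = 100

    # Write the modified config back out for model_main_tf2.py to consume.
    pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
    config_util.save_pipeline_config(pipeline_proto, 'exported_config_dir')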
After running for about five minutes, training throws the following error:
    2020-11-16 16:52:14.415133: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at conv_ops.cc:539 : Resource exhausted: OOM when allocating tensor with shape[64,288,9,9] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    Traceback (most recent call last):
      File "model_main_tf2.py", line 113, in <module>
        tf.compat.v1.app.run()
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
        _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\absl\app.py", line 303, in run
        _run_main(main, args)
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\absl\app.py", line 251, in _run_main
        sys.exit(main(argv))
      File "model_main_tf2.py", line 104, in main
        model_lib_v2.train_loop(
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\object_detection\model_lib_v2.py", line 639, in train_loop
        loss = _dist_train_step(train_input_iter)
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\def_function.py", line 780, in __call__
        result = self._call(*args, **kwds)
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\def_function.py", line 840, in _call
        return self._stateless_fn(*args, **kwds)
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\function.py", line 2829, in __call__
        return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\function.py", line 1843, in _filtered_call
        return self._call_flat(
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\function.py", line 1923, in _call_flat
        return self._build_call_outputs(self._inference_function.call(
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\function.py", line 545, in call
        outputs = execute.execute(
      File "C:\Users\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
        tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
    tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
    (0) Resource exhausted: OOM when allocating tensor with shape[64,256,17,17] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    [[node functional_3/conv2d_160/Conv2D (defined at \site-packages\object_detection\meta_architectures\faster_rcnn_meta_arch.py:1149) ]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    [[Identity_1/_432]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    (1) Resource exhausted: OOM when allocating tensor with shape[64,256,17,17] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    [[node functional_3/conv2d_160/Conv2D (defined at \site-packages\object_detection\meta_architectures\faster_rcnn_meta_arch.py:1149) ]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    0 successful operations.
    0 derived errors ignored. [Op:__inference__dist_train_step_79248]

    Errors may have originated from an input operation.
    Input Source operations connected to node functional_3/conv2d_160/Conv2D:
    MaxPool2D/MaxPool (defined at \site-packages\object_detection\meta_architectures\faster_rcnn_meta_arch.py:1973)

    Input Source operations connected to node functional_3/conv2d_160/Conv2D:
    MaxPool2D/MaxPool (defined at \site-packages\object_detection\meta_architectures\faster_rcnn_meta_arch.py:1973)

    Function call stack:
    _dist_train_step -> _dist_train_step
A common solution to this problem is to reduce your batch size, but I have already reduced it to 1. Is this simply a case of running out of memory, or is there another way to solve the problem?
Note: this is the output printed just before the exception is thrown (the ~4.8 GB limit is essentially the whole 6 GB card minus driver and display overhead, and it is almost entirely in use):
    2020-11-16 16:52:14.409101: I tensorflow/core/common_runtime/bfc_allocator.cc:1046] Stats:
    Limit: 4817616896
    InUse: 4809875456
    MaxInUse: 4817131776
    NumAllocs: 11104
    MaxAllocSize: 4129325056
    Reserved: 0
    PeakReserved: 0
    LargestFreeBlock: 0

    2020-11-16 16:52:14.413310: W tensorflow/core/common_runtime/bfc_allocator.cc:439] ****************************************************************************************************

Best Answer

Take a look at this thread (which, judging from your post, I think you have already read):
Resource exhausted: OOM when allocating tensor only on gpu
One possible solution is to change config.gpu_options.per_process_gpu_memory_fraction to a larger number, as sketched below.
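Note that per_process_gpu_memory_fraction is a TF1 session option; under the TF2 eager training loop that model_main_tf2.py uses, the closest equivalents are enabling memory growth or setting an explicit memory limit. A minimal sketch, assuming TF 2.3 and that it runs before any GPU work starts (e.g. at the top of model_main_tf2.py):

    import tensorflow as tf

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        # Option 1: allocate GPU memory on demand instead of grabbing it all upfront.
        tf.config.experimental.set_memory_growth(gpus[0], True)
        # Option 2 (alternative; do not combine with memory growth on the same GPU):
        # cap TensorFlow at a fixed amount of memory, here 5 GB of the 6 GB card.
        # tf.config.experimental.set_virtual_device_configuration(
        #     gpus[0],
        #     [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5120)])

The same effect as Option 1 is also available without code changes by setting the environment variable TF_FORCE_GPU_ALLOW_GROWTH=true.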
Another solution is to reinstall CUDA. You can use nvidia-docker, which lets you switch between CUDA versions quickly:
https://github.com/NVIDIA/nvidia-docker
Change the CUDA version and see whether the error persists. Before reinstalling anything, it is worth checking which versions your TensorFlow build actually expects; see the sketch below.
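A minimal sketch for that check, assuming TF 2.3+ where tf.sysconfig.get_build_info() is available (the exact dictionary keys are an assumption about GPU builds):

    import tensorflow as tf

    # Report the CUDA/cuDNN versions this TensorFlow binary was built against,
    # so they can be compared with what is installed on the machine.
    print(tf.version.VERSION)
    build = tf.sysconfig.get_build_info()
    print(build.get('cuda_version'), build.get('cudnn_version'))

    # Confirm the GPU is visible to TensorFlow at all.
    print(tf.config.experimental.list_physical_devices('GPU'))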

Regarding python - TensorFlow error: tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at conv_ops.cc:539 : Resource exhausted, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64867031/
