
python-3.x - Why do I get "NotImplementedError()" when building a custom optimizer in Tensorflow


I am working on image classification and I am trying to implement a custom optimizer in Tensorflow (based on a paper published on ELSEVIER).

I tried to modify the code as follows. I also have some other functions, but they are all related to preprocessing, model architecture, and so on. My optimizer code is below:

import os
os.environ['TF_KERAS'] = '1'
from tensorflow import keras
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import cv2
import imutils
import matplotlib.pyplot as plt
from os import listdir
from sklearn.metrics import confusion_matrix,classification_report
import logging, warnings
import numpy as np
from tensorflow.python.training import optimizer
from tensorflow.python.ops import math_ops, state_ops, control_flow_ops, variable_scope
from tensorflow.python.framework import ops

class BPVAM(optimizer.Optimizer):
    """Back-propagation algorithm with variable adaptive momentum.

    Variables are updated in two steps:
        1) v(t + 1) = alpha * v(t) - lr * g(t)
        2) w(t + 1) = w(t) + v(t + 1)

    where
        - v(t + 1): delta for the update at step t + 1
        - w(t + 1): weights at step t + 1 (after the update)
        - g(t): gradients at step t
        - lr: learning rate
        - alpha: momentum parameter

    In the algorithm alpha is not fixed. It is variable and it is parametrized by:
        alpha(t) = lambda / (1 - beta ^ t)
    """

    def __init__(
            self,
            lr: float = 0.001,
            lam: float = 0.02,
            beta: float = 0.998,
            use_locking: bool = False,
            name: str = 'BPVAM'
    ):
        """
        Args:
            lr: learning rate
            lam: momentum parameter
            beta: momentum parameter
            use_locking:
            name:
        """
        super(BPVAM, self).__init__(use_locking, name)

        self._lr = lr
        self._lambda = lam
        self._beta = beta

        self._lr_tensor = None
        self._lambda_tensor = None
        self._beta_tensor = None

    def _create_slots(self, var_list):
        for v in var_list:
            self._zeros_slot(v, 'v', self._name)
            self._get_or_make_slot(v,
                                   ops.convert_to_tensor(self._beta),
                                   'beta',
                                   self._name)

    def _prepare(self):
        self._lr_tensor = ops.convert_to_tensor(self._lr, name='lr')
        self._lambda_tensor = ops.convert_to_tensor(self._lambda, name='lambda')

    def _apply_dense(self, grad, var):
        lr_t = math_ops.cast(self._lr_tensor, var.dtype.base_dtype)
        lambda_t = math_ops.cast(self._lambda_tensor, var.dtype.base_dtype)

        v = self.get_slot(var, 'v')
        betas = self.get_slot(var, 'beta')

        beta_t = state_ops.assign(betas, betas * betas)

        alpha = lambda_t / (1 - beta_t)

        v_t = state_ops.assign(v, alpha * v - lr_t * grad)

        var_update = state_ops.assign_add(var, v_t, use_locking=self._use_locking)

        return control_flow_ops.group(*[beta_t, v_t, var_update])

After I create the optimizer and run:

myopt = BPVAM()
model.compile(optimizer= myopt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])

I get this error message:

Traceback (most recent call last):
  File "/Users/classification.py", line 264, in <module>
    model.fit(x=X_train, y=y_train, batch_size=32, epochs=50, validation_data=(X_val, y_val))
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 780, in fit
    steps_name='steps_per_epoch')
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 157, in model_iteration
    f = _make_execution_function(model, mode)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 532, in _make_execution_function
    return model._make_execution_function(mode)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 2276, in _make_execution_function
    self._make_train_function()
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 2219, in _make_train_function
    params=self._collected_trainable_weights, loss=self.total_loss)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/keras/optimizers.py", line 753, in get_updates
    grads, global_step=self.iterations)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/training/optimizer.py", line 614, in apply_gradients
    update_ops.append(processor.update_op(self, grad))
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/training/optimizer.py", line 171, in update_op
    update_op = optimizer._resource_apply_dense(g, self._v)
  File "/Users/venv/lib/python3.7/site-packages/tensorflow/python/training/optimizer.py", line 954, in _resource_apply_dense
    raise NotImplementedError()
NotImplementedError

I don't understand where the problem is. I am using Tensorflow 1.14.0 and Python 3.7. I created a virtual environment and tried other Tensorflow and Python versions, but it still does not work.

Best Answer

To use a class that inherits from tensorflow.python.training.optimizer.Optimizer, you must implement at least the following methods:

  • _apply_dense
  • _resource_apply_dense
  • _apply_sparse

See the source code of the Optimizer for more information.
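
For example, here is a minimal sketch of a possible fix (my own assumption, not code from the question or the paper): Keras models in TF 1.x typically create resource variables, so training ends up in _resource_apply_dense, which the base Optimizer leaves unimplemented (that is the frame raising NotImplementedError in the traceback). Delegating it to the existing dense update, and failing loudly for sparse gradients, could look like this inside the BPVAM class:

    def _resource_apply_dense(self, grad, var):
        # In TF 1.14, state_ops.assign/assign_add also accept resource variables,
        # so the dense update defined above can simply be reused here.
        return self._apply_dense(grad, var)

    def _apply_sparse(self, grad, var):
        # Sparse gradients (e.g. from embedding lookups) are not handled in this sketch.
        raise NotImplementedError('BPVAM does not support sparse gradients.')

With _resource_apply_dense in place, model.compile and model.fit should get past the error shown above; _apply_sparse only matters if the model produces sparse (IndexedSlices) gradients.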

Since you are trying to implement a custom momentum method, you may want to subclass MomentumOptimizer directly.
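
As a rough illustration of that suggestion (again my own sketch, not part of the answer): tf.train.MomentumOptimizer already implements _apply_dense, _resource_apply_dense and _apply_sparse, and it accepts the momentum as a tensor. In a plain TF 1.x setup, the adaptive alpha(t) = lambda / (1 - beta ^ t) could therefore be fed in directly, assuming a step counter that the optimizer increments; loss is a placeholder here:

import tensorflow as tf

lam, beta = 0.02, 0.998

# Step counter that minimize() increments after every update.
global_step = tf.train.get_or_create_global_step()
t = tf.cast(global_step, tf.float32) + 1.0

# Variable adaptive momentum from the docstring: alpha(t) = lambda / (1 - beta ^ t).
alpha = lam / (1.0 - tf.pow(beta, t))

opt = tf.train.MomentumOptimizer(learning_rate=0.001, momentum=alpha)
# train_op = opt.minimize(loss, global_step=global_step)

Note that MomentumOptimizer's built-in rule (accumulation = momentum * accumulation + gradient; variable -= lr * accumulation) matches the paper's two-step update only while the learning rate stays constant, and the Keras wrapper increments its own iteration counter rather than this global step, so treat this as a starting point rather than a drop-in replacement for the class above.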

Regarding python-3.x - Why do I get "NotImplementedError()" when building a custom optimizer in Tensorflow, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60247826/
