
machine-learning - Is there an optimizer in Keras based on precision or recall instead of loss?

Reposted. Author: 行者123. Updated: 2023-11-30 08:24:41

I am building a segmentation neural network with only two classes, 0 and 1 (0 is the background, 1 is the object I want to find in the image). In each image, about 80% of the pixels are 1 and 20% are 0. As you can see, the dataset is imbalanced, which skews the results: my accuracy is 85% and the loss is low, but only because the model is good at finding the background!

I would like the optimizer to be based on another metric, such as precision or recall, which would be more useful in this situation.

Does anyone know how to implement this?

Best Answer

You don't optimize with precision or recall. You just track them as validation scores in order to select the best weights. Don't mix up losses, optimizers, and metrics: they serve different purposes.

import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout

THRESHOLD = 0.5

def precision(y_true, y_pred, threshold_shift=0.5 - THRESHOLD):
    # just in case
    y_pred = K.clip(y_pred, 0, 1)

    # shifting the prediction threshold from .5 if needed
    y_pred_bin = K.round(y_pred + threshold_shift)

    tp = K.sum(K.round(y_true * y_pred_bin)) + K.epsilon()
    fp = K.sum(K.round(K.clip(y_pred_bin - y_true, 0, 1)))

    precision = tp / (tp + fp)
    return precision


def recall(y_true, y_pred, threshold_shift=0.5 - THRESHOLD):
    # just in case
    y_pred = K.clip(y_pred, 0, 1)

    # shifting the prediction threshold from .5 if needed
    y_pred_bin = K.round(y_pred + threshold_shift)

    tp = K.sum(K.round(y_true * y_pred_bin)) + K.epsilon()
    fn = K.sum(K.round(K.clip(y_true - y_pred_bin, 0, 1)))

    recall = tp / (tp + fn)
    return recall


def fbeta(y_true, y_pred, beta=2, threshold_shift=0.5 - THRESHOLD):
    # just in case
    y_pred = K.clip(y_pred, 0, 1)

    # shifting the prediction threshold from .5 if needed
    y_pred_bin = K.round(y_pred + threshold_shift)

    tp = K.sum(K.round(y_true * y_pred_bin)) + K.epsilon()
    fp = K.sum(K.round(K.clip(y_pred_bin - y_true, 0, 1)))
    # note: the false-negative count must use the binarized predictions
    fn = K.sum(K.round(K.clip(y_true - y_pred_bin, 0, 1)))

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)

    beta_squared = beta ** 2
    return (beta_squared + 1) * (precision * recall) / (beta_squared * precision + recall)


def model_fit(X, y, X_test, y_test):
    # weight the positive class by its inverse frequency to counter the imbalance
    class_weight = {
        1: 1 / (np.sum(y) / len(y)),
        0: 1}
    np.random.seed(47)
    model = Sequential()
    model.add(Dense(1000, input_shape=(X.shape[1],)))
    model.add(Activation('relu'))
    model.add(Dropout(0.35))
    model.add(Dense(500))
    model.add(Activation('relu'))
    model.add(Dropout(0.35))
    model.add(Dense(250))
    model.add(Activation('relu'))
    model.add(Dropout(0.35))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))

    model.compile(loss='binary_crossentropy', optimizer='adamax',
                  metrics=[fbeta, precision, recall])
    model.fit(X, y, validation_data=(X_test, y_test), epochs=200,
              batch_size=50, verbose=2, class_weight=class_weight)
    return model
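To see what these metrics compute, here is a plain-NumPy counterpart of the Keras functions above (a sketch for sanity-checking only; the function names `precision_np`, `recall_np`, and `fbeta_np` are mine, not from the answer). It binarizes predictions at a threshold and counts true/false positives and negatives exactly as the Keras versions do:

```python
import numpy as np

def precision_np(y_true, y_pred, threshold=0.5):
    # fraction of positive predictions that are actually positive
    y_bin = (y_pred >= threshold).astype(int)
    tp = np.sum((y_true == 1) & (y_bin == 1))
    fp = np.sum((y_true == 0) & (y_bin == 1))
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

def recall_np(y_true, y_pred, threshold=0.5):
    # fraction of actual positives that are predicted positive
    y_bin = (y_pred >= threshold).astype(int)
    tp = np.sum((y_true == 1) & (y_bin == 1))
    fn = np.sum((y_true == 1) & (y_bin == 0))
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

def fbeta_np(y_true, y_pred, beta=2, threshold=0.5):
    # weighted harmonic mean of precision and recall
    p = precision_np(y_true, y_pred, threshold)
    r = recall_np(y_true, y_pred, threshold)
    b2 = beta ** 2
    return (b2 + 1) * p * r / (b2 * p + r) if (p + r) > 0 else 0.0

y_true = np.array([1, 1, 1, 1, 0])
y_pred = np.array([0.9, 0.8, 0.3, 0.7, 0.6])
# 3 of the 4 positive predictions are correct -> precision 0.75
# 3 of the 4 actual positives are found       -> recall 0.75
print(precision_np(y_true, y_pred), recall_np(y_true, y_pred))
```

Tracking these per epoch (as `metrics=[...]` does) tells you when the model stops trading recall for background accuracy, which is exactly the signal the imbalanced accuracy hides.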

Regarding "machine-learning - Is there an optimizer in Keras based on precision or recall instead of loss?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52041931/
