
python - SGD without using sklearn (LogLoss increases with every epoch)

Reposted · Author: 行者123 · Updated: 2023-12-05 07:04:08

def train(X_train, y_train, X_test, y_test, epochs, alpha, eta0):
    w, b = initialize_weights(X_train[0])
    loss_test = []
    N = len(X_train)
    for i in range(0, epochs):
        print(i)
        for j in range(N - 1):
            grad_dw = gradient_dw(X_train[j], y_train[j], w, b, alpha, N)
            grad_db = gradient_db(X_train[j], y_train[j], w, b)
            w = np.array(w) + (alpha * (np.array(grad_dw)))
            b = b + (alpha * (grad_db))
        predict2 = []
        for m in range(len(y_test)):
            z = np.dot(w[0], X_test[m]) + b
            if sigmoid(z) == 0:  # sigmoid(w,x,b) returns 1/(1+exp(-(dot(x,w)+b)))
                predict2.append(0.000001)
            elif sigmoid(z) == 1:
                predict2.append(0.99999)
            else:
                predict2.append(sigmoid(z))

        loss_test.append(logloss(y_test, predict2))
    return w, b, loss_test

My gradient_dw function:

def gradient_dw(x, y, w, b, alpha, N):
    dw = []
    for i in range(len(x)):
        dw.append((x[i] * (y - 1 / (1 + np.exp(abs(w.T[0][i] * x[i] + b))))) + (alpha / N) * (w.T[0][i]))
    return dw
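For comparison, the textbook per-sample gradient for L2-regularized logistic regression can be sketched as follows (a reference sketch, not the asker's code; note that the logit z is used with its sign, with no abs(), and the whole update is a single vector expression):

```python
import numpy as np

def gradient_dw_ref(x, y, w, b, alpha, N):
    """Per-sample gradient of the L2-regularized log-likelihood w.r.t. w.

    Reference sketch: w and x are 1-D arrays of the same length, z is
    the signed logit, and the regularization term is -(alpha/N) * w.
    """
    z = np.dot(w, x) + b
    sigma = 1.0 / (1.0 + np.exp(-z))     # sigmoid of the signed logit
    return x * (y - sigma) - (alpha / N) * w
```

With this form the ascent step is simply `w = w + eta0 * gradient_dw_ref(...)`.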

My gradient_db function:

def gradient_db(x, y, w, b):
    db = 0
    for i in range(len(x)):
        db = (y - 1 / (1 + np.exp(abs(w.T[0][i] * x[i] + b))))
    return db
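For reference, the usual intercept gradient is a single scalar computed once from the full dot product, not accumulated feature-by-feature (a sketch under the same conventions as above, with a 1-D w):

```python
import numpy as np

def gradient_db_ref(x, y, w, b):
    # Reference sketch: the intercept gradient is the scalar
    # y - sigmoid(z), where z uses the full signed logit (no abs()).
    z = np.dot(w, x) + b
    return y - 1.0 / (1.0 + np.exp(-z))
```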

My loss function:

def logloss(y_true, y_pred):
    loss = 0
    for i in range(len(y_true)):
        loss += ((y_true[i] * math.log10(y_pred[i])) + ((1 - y_true[i]) * math.log10(1 - y_pred[i])))
    loss = -1 * (1 / len(y_true)) * loss
    return loss
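A vectorized sketch of the same loss for comparison (it uses the conventional natural log; switching between log10 and ln only rescales the loss by a constant factor, so it does not change whether the loss rises or falls per epoch):

```python
import numpy as np

def logloss_vec(y_true, y_pred):
    # Vectorized binary log-loss; predictions are clipped away from
    # 0 and 1 so the logarithm is always finite.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), 1e-15, 1 - 1e-15)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(logloss_vec([1, 0], [0.9, 0.1]))  # ≈ 0.1054
```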

My problem is that the loss increases after every epoch. Why?

Any help would be appreciated.

Thanks

Best Answer

  1. The problem was with the weight array.

  2. I was using a weight array of shape (15, 1),

  3. but it should be of shape (15,).

  4. So all the changes in this code need to be made accordingly.

  5. Thank you
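The shape mismatch described in the answer can be reproduced directly; a minimal sketch (the 15 here is the feature count from the answer, not data from the question):

```python
import numpy as np

w_col = np.zeros((15, 1))     # the buggy (15, 1) layout
grad = np.ones(15)            # per-feature gradient, shape (15,)
# NumPy broadcasting turns the update into a (15, 15) matrix instead
# of 15 scalar updates, silently corrupting the weights:
print((w_col + grad).shape)   # (15, 15)

w_flat = np.zeros(15)         # the corrected (15,) layout
print((w_flat + grad).shape)  # (15,)
```

With a 1-D weight vector the per-sample update stays elementwise, and `np.dot(w, x) + b` produces the scalar logit the rest of the code expects.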

Regarding "python - SGD without using sklearn (LogLoss increases with every epoch)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63009169/
