
python - AssertionError: Wrong values for d['w'] | deeplearning specialization


I am working through the first course of the Deep Learning Specialization, where the first programming assignment is to build a logistic regression model from scratch. Since this is my first time building a model from scratch, and it took me a while to digest the underlying math, I ran into many errors. Among them is one I cannot fix and cannot understand at all: an assertion error saying that the values of dw (the derivative of the cost with respect to the weights) are wrong.

Code:

import numpy as np
import copy  # used by optimize below

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    cost = np.sum(np.abs(Y * np.log(A) + (1 - Y) * (np.log(1 - A))) / m)
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m
    cost = np.squeeze(np.array(cost))
    grads = {"dw": dw, "db": db}
    return grads, cost

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    w = copy.deepcopy(w)
    b = copy.deepcopy(b)
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)
        dw = grads["dw"]
        db = grads["db"]
        w = w - learning_rate * grads["dw"]
        b = b - learning_rate * grads["db"]
        if i % 100 == 0:
            costs.append(cost)
            if print_cost:
                print("Cost after iteration %i: %f" % (i, cost))
    params = {"w": w,
              "b": b}
    grads = {"dw": dw, "db": db}
    return params, grads, costs

def predict(w, b, X):
    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)
    A = sigmoid(np.dot(w.T, X) + b)
    for i in range(A.shape[1]):
        if A[0, i] > 0.5:
            Y_prediction[0, i] = 1.0
        else:
            Y_prediction[0, i] = 0.0
    return Y_prediction

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    w = np.zeros(shape=(X_train.shape[0], 1))
    b = np.zeros(shape=(1, 1))
    params, gards, costs = optimize(w, b, X_train, Y_train)
    b = params["b"]
    w = params["w"]
    Y_prediction_train = predict(w, b, X_train)
    Y_prediction_test = predict(w, b, X_test)
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}
    return d

model_test(model)

The model_test function is not defined anywhere in the course; I assume it is a built-in function of the exercise. But here is the problem:

---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-36-7f17a31b22cb> in <module>
----> 1 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
117 assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
118 assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 119 assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
120
121 assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.28154433]
[-0.11519574]
[ 0.13142694]
[ 0.20526551]] != [[ 0.00194946]
[-0.0005046 ]
[ 0.00083111]
[ 0.00143207]]

At this point I am completely lost and have no idea what to do.

Best Answer

The problem is in this line:

params, gards, costs = optimize(w, b, X_train, Y_train)

You still need to pass the parameters to the optimize function. Omitting the last arguments makes it fall back on its default values, which are not the ones specified in model. So the line above should be:

params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost=print_cost)
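To make the mismatch concrete, here is a minimal sketch of the corrected model function with the hyperparameters forwarded explicitly (everything else is kept as in the code above, and it assumes the optimize and predict helpers already defined there). Without the extra arguments, optimize silently trains with its own defaults (num_iterations=100, learning_rate=0.009) instead of the 2000 iterations and learning rate of 0.5 that model receives, so the learned weights come out different from the expected output:

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    # initialize weights and bias to zeros (one weight per input feature)
    w = np.zeros(shape=(X_train.shape[0], 1))
    b = np.zeros(shape=(1, 1))
    # forward the caller's hyperparameters instead of relying on
    # optimize's defaults (num_iterations=100, learning_rate=0.009)
    params, grads, costs = optimize(w, b, X_train, Y_train,
                                    num_iterations=num_iterations,
                                    learning_rate=learning_rate,
                                    print_cost=print_cost)
    w = params["w"]
    b = params["b"]
    Y_prediction_train = predict(w, b, X_train)
    Y_prediction_test = predict(w, b, X_test)
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}
    return d

A grader such as model_test only passes when training is run with exactly the hyperparameters it expects, which is why the run with the defaults produced weights that differ from expected_output['w'].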

Regarding python - AssertionError: Wrong values for d['w'] | deeplearning specialization, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67730694/
