I'm trying to implement logistic regression in Python. Here is the cost function:
```python
def costFunction(theta_array):
    m = len(X1)
    theta_matrix = np.transpose(np.mat(theta_array))
    H_x = 1 / (1 + np.exp(-X_matrix * theta_matrix))
    J_theta = ((sum(np.multiply((-Y_matrix), np.log(H_x)) - np.multiply((1 - Y_matrix), np.log(1 - H_x)))) / m)[0, 0]
    return J_theta
```
Here is the gradient function:
```python
def gradientDesc(theta_tuple):
    theta_matrix = np.transpose(np.mat(theta_tuple))
    H_x = 1 / (1 + np.exp(-X_matrix * theta_matrix))
    G_theta0 = (sum(np.multiply(H_x - Y_matrix, X_matrix[:, 0])) / m)[0, 0]
    G_theta1 = (sum(np.multiply(H_x - Y_matrix, X_matrix[:, 1])) / m)[0, 0]
    G_theta2 = (sum(np.multiply(H_x - Y_matrix, X_matrix[:, 2])) / m)[0, 0]
    return np.array((G_theta0, G_theta1, G_theta2))
```
Then I call `optimize.fmin_bfgs` as follows:
```python
initial_theta = np.zeros((3, 1))
theta_tuple = (0, 0, 0)
theta_optimize = op.fmin_bfgs(costFunction, initial_theta, gradientDesc, args = (theta_tuple))
```
and I get the following error:
**TypeError: gradientDesc() takes exactly 1 argument (4 given)**
Can anyone tell me how to fix this? :) Thanks!
For the `args` parameter, you should specify a one-item tuple (also called a singleton) with a trailing comma; otherwise the parentheses just group the expression, and Python unpacks `theta_tuple`'s three elements as three separate extra arguments.
Change:

```python
theta_optimize = op.fmin_bfgs(costFunction, initial_theta, gradientDesc, args = (theta_tuple))
```

to:

```python
theta_optimize = op.fmin_bfgs(costFunction, initial_theta, gradientDesc, args = (theta_tuple,))
```
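You can see the difference in the tuple syntax directly (values here are just illustrative):

```python
grouped = (0, 0, 0)        # a tuple of three items
not_a_tuple = (grouped)    # parentheses only group: still the same 3-tuple
singleton = (grouped,)     # the comma makes a 1-tuple containing the 3-tuple

print(not_a_tuple)  # (0, 0, 0)
print(singleton)    # ((0, 0, 0),)
```

With `args=(theta_tuple,)`, `fmin_bfgs` passes exactly one extra argument to your functions instead of three.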
Also, per the documentation, your `gradientDesc` should accept the additional argument: `fmin_bfgs` calls `fprime` with the current point first, followed by whatever you passed in `args`.
Change:

```python
def gradientDesc(theta_tuple):
```

to:

```python
def gradientDesc(x, theta_tuple):
```
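Putting both fixes together, here is a minimal runnable sketch. It uses synthetic data and plain NumPy arrays as stand-ins for your `X_matrix` / `Y_matrix` (which aren't shown in the question); the `(0, 0, 0)` tuple is passed through unused, just to mirror your call:

```python
import numpy as np
from scipy import optimize as op

# Synthetic stand-in data: a bias column plus two random features.
rng = np.random.default_rng(0)
m = 100
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, 2))])
true_theta = np.array([1.0, 2.0, -1.0])
p = 1 / (1 + np.exp(-X @ true_theta))
y = (rng.random(m) < p).astype(float)  # noisy Bernoulli labels

def costFunction(theta, theta_tuple):
    h = 1 / (1 + np.exp(-X @ theta))
    h = np.clip(h, 1e-12, 1 - 1e-12)   # guard against log(0)
    return np.mean(-y * np.log(h) - (1 - y) * np.log(1 - h))

def gradientDesc(theta, theta_tuple):  # current point first, then the extra args
    h = 1 / (1 + np.exp(-X @ theta))
    return X.T @ (h - y) / m           # vectorized gradient of the mean log-loss

initial_theta = np.zeros(3)
theta_tuple = (0, 0, 0)
theta_opt = op.fmin_bfgs(costFunction, initial_theta, gradientDesc,
                         args=(theta_tuple,),  # note the trailing comma
                         disp=False)
```

Note that both `costFunction` and `gradientDesc` take the extra argument, because `fmin_bfgs` forwards `args` to `f` and `fprime` alike.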