
machine-learning - I can't get my TensorFlow gradient descent linear regression algorithm to work


I am trying to write a simple TensorFlow linear regression model that takes a subset of the Boston housing data, specifically the number-of-rooms column (RM) as the independent variable and the median price (MEDV) as the dependent variable, and applies gradient descent to it.

However, when I run it, the optimizer doesn't seem to do anything. The cost never decreases, and the weight actually moves in the wrong direction.

Here are the various plots I built:

  1. A scatter plot of x and y
  2. A PCA analysis plot
  3. The fit on the original data
  4. The fit on the test data.

The images are here:

https://imgur.com/a/yVHC9

The output of my program looks like this:

Epoch: 0050 cost= 6393135366144.000000000 W = 110392.0 b = 456112.0
Epoch: 0100 cost= 6418308005888.000000000 W = 111131.0 b = 459181.0
Epoch: 0150 cost= 6418496225280.000000000 W = 111136.0 b = 459203.0
Epoch: 0200 cost= 6418497798144.000000000 W = 111136.0 b = 459203.0
...
Epoch: 1000 cost= 6418497798144.000000000 W = 111136.0 b = 459203.0

Note that the cost never decreases; in fact, the weight increases slightly when it should be decreasing.

I have no idea why this is happening. As far as I can tell the data looks reasonable, but I don't know why the optimizer isn't working. The code itself is just a standard TensorFlow linear regression example that I downloaded from the internet and modified for my dataset.

import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.mlab import PCA
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import tensorflow as tf
import sys
from sklearn import model_selection
from sklearn import preprocessing
np.set_printoptions(precision=3,suppress=True)

def pca(dataset):

    plt.scatter(dataset[:,0],dataset[:,1])
    plt.plot()
    plt.show()
    results = PCA(dataset)
    x = []
    y = []

    for item in results.Y:
        x.append(item[0])
        y.append(item[1])

    plt.close('all')
    fig1 = plt.figure()
    pltData = [x,y]
    plt.scatter(pltData[0],pltData[1],c='b')
    xAxisLine = ((min(pltData[0]),max(pltData[0])),(0,0),(0,0))
    yAxisLine = ((min(pltData[1]),max(pltData[1])),(0,0),(0,0))
    plt.xlabel('RM')
    plt.ylabel('MEDV')
    plt.show()


rng = np.random
# learning_rate is the alpha value that we pass to the gradient descent algorithm.
learning_rate = 0.1


# How many cycles we're going to run to try and get our optimum fit.
training_epochs = 1000
display_step = 50

# We're going to pull in the csv file and extract the X value (RM) and Y value (MEDV).

boston_dataset = pd.read_csv('data/housing.csv')
label = boston_dataset['MEDV']
features = boston_dataset['RM'].reshape(-1,1)
dataset = np.asarray(boston_dataset['RM'])
dataset = np.column_stack((np.asarray(boston_dataset['RM']),np.asarray(boston_dataset['MEDV'])))

pca(dataset)


train_X, test_X, train_Y, test_Y = model_selection.train_test_split(features, label, test_size = 0.33,
                                                                    random_state = 5)


scaler = preprocessing.StandardScaler()
train_X = scaler.fit_transform(train_X)
# This is the total number of data samples that we're going to run through.
n_samples = train_X.shape[0]

# Variable placeholders.
X = tf.placeholder('float')
Y = tf.placeholder('float')

W = tf.Variable(rng.randn(), name = 'weight')
b = tf.Variable(rng.randn(), name = 'bias')

# Here we describe our training model. It's a linear regression model using the standard y = mx + b
# slope-intercept form. We calculate the cost by using least mean squares.

# This is our prediction algorithm: y = mx + b
prediction = tf.add(tf.multiply(X,W),b)

# Let's now calculate the cost of the prediction algorithm using least mean squares

training_cost = tf.reduce_sum(tf.pow(prediction-Y,2))/(2 * n_samples)
# This is our gradient descent optimizer algorithm. We're passing in alpha, our learning rate
# and we want the minimum value of the training cost.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(training_cost)

init = tf.global_variables_initializer()

# Now we'll run our training data through our model.
with tf.Session() as tf_session:

    # Initialize all of our tensorflow variables.
    tf_session.run(init)

    # We'll run the data through for 1000 times (The value of training_epochs).

    for epoch in range(training_epochs):

        # For each training cycle, pass in the x and y values to our optimizer algorithm to calculate the cost.
        for (x,y) in zip(train_X,train_Y):
            tf_session.run(optimizer,feed_dict = {X: x, Y: y})

        # For every fifty cycles, let's check and see how we're doing.
        if (epoch + 1) % 50 == 0:
            c = tf_session.run(training_cost,feed_dict = {X: train_X, Y: train_Y})
            print ('Epoch: ', '%04d' % (epoch+1), 'cost=', '{:.9f}'.format(c),
                   'W = ', tf_session.run(W), 'b = ', tf_session.run(b))

    print ('Optimization finished')
    print ('Training cost = ', training_cost, ' W = ', tf_session.run(W), ' b = ', tf_session.run(b), '\n')

    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, tf_session.run(W) * train_X + tf_session.run(b), label='Fitted line')
    plt.legend()
    plt.show()

    # We're now going to run test data to see how well our trained model works.

    print ('Testing...(mean square loss comparison)')
    testing_cost = tf_session.run(tf.reduce_sum(tf.pow(prediction - Y, 2)) / (2 * test_Y.shape[0]),
                                  feed_dict = {X: test_X, Y: test_Y})
    print ('Testing cost = ', testing_cost)
    print ('Absolute mean square loss difference: ', abs(training_cost - testing_cost))

    plt.plot(test_X, test_Y, 'bo', label='Testing data')
    plt.plot(test_X, tf_session.run(W) * test_X + tf_session.run(b), label='Fitted line')
    plt.legend()
    plt.show()

I really don't know why the optimizer isn't working properly, so if anyone could point me in the right direction I would be very grateful.

Thanks

Best Answer

This is probably related to your learning rate. Try decreasing it, or updating it after a few epochs.

For example, if you train for 100 epochs, try setting the learning rate to 0.01, lowering it to 0.001 after 30 epochs, and then lowering it again to 0.0001 after another 30 or 40 epochs.

You can look at the learning-rate schedules used for common architectures such as AlexNet to get an idea.
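As a rough illustration of that suggestion (not part of the original answer), here is a minimal sketch of a step-wise learning-rate schedule for the asker's TF 1.x graph. The boundaries and values (0.01 -> 0.001 -> 0.0001) simply mirror the numbers above, and the sketch assumes the training_cost and n_samples names from the question's code:

# Hypothetical sketch of a piecewise (step) learning-rate decay in TensorFlow 1.x.
# global_step counts individual optimizer updates; the question's code performs
# one update per training sample, so one epoch corresponds to n_samples steps.
global_step = tf.Variable(0, trainable=False)
boundaries = [30 * n_samples, 70 * n_samples]   # drop the rate after ~30 and ~70 epochs
values = [0.01, 0.001, 0.0001]
learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)

# Passing global_step makes the optimizer increment it on every update,
# so the schedule advances automatically.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    training_cost, global_step=global_step)

A smoother alternative in the same API family is tf.train.exponential_decay, which lowers the rate continuously instead of in hard steps.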

Good luck

For this question, machine-learning - I can't get my TensorFlow gradient descent linear regression algorithm to work, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46575238/
