
python - How to resolve UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1]))?


I'm trying to run code from a book on reinforcement learning with PyTorch that I bought.
According to the book the code should work as-is, but for me the model does not converge and the reward stays negative. It also produces the following UserWarning:

/home/user/.local/lib/python3.6/site-packages/ipykernel_launcher.py:30: UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.

I'm a complete beginner with PyTorch, but I thought size([]) was not a valid tensor size? I suspect something is wrong in the code, but after working through it for a while I haven't found anything. Some time ago I also contacted the book's publisher, but unfortunately never received a reply.
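As far as I can tell, the mismatch comes from the value loss in update() below: value = self.value(x) has shape torch.Size([1]), while each Gt taken from the normalized returns tensor is a 0-dimensional scalar with shape torch.Size([]). A minimal sketch that reproduces just the warning (the numbers are made up, only the shapes matter):

import torch
import torch.nn.functional as F

value = torch.zeros(1)        # shape torch.Size([1]), like self.value(x) for a 1-D state
Gt = torch.tensor(0.5)        # shape torch.Size([]), like one element of the returns tensor
print(value.shape, Gt.shape)  # torch.Size([1]) torch.Size([])

# Emits: UserWarning: Using a target size (torch.Size([])) that is different
# to the input size (torch.Size([1])) ...
loss = F.smooth_l1_loss(value, Gt)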

That's why I'd like to ask here whether anyone has seen this warning before and might know how to fix it.

The code implements A2C reinforcement learning in the MountainCar Gym environment. It can also be found here: https://github.com/PacktPublishing/PyTorch-1.x-Reinforcement-Learning-Cookbook/blob/master/Chapter08/chapter8/actor_critic_mountaincar.py
'''
Source codes for PyTorch 1.0 Reinforcement Learning (Packt Publishing)
Chapter 8: Implementing Policy Gradients and Policy Optimization
Author: Yuxi (Hayden) Liu
'''

import torch
import gym
import torch.nn as nn
import torch.nn.functional as F


env = gym.make('MountainCarContinuous-v0')


class ActorCriticModel(nn.Module):
    def __init__(self, n_input, n_output, n_hidden):
        super(ActorCriticModel, self).__init__()
        self.fc = nn.Linear(n_input, n_hidden)
        self.mu = nn.Linear(n_hidden, n_output)
        self.sigma = nn.Linear(n_hidden, n_output)
        self.value = nn.Linear(n_hidden, 1)
        self.distribution = torch.distributions.Normal

    def forward(self, x):
        x = F.relu(self.fc(x))
        mu = 2 * torch.tanh(self.mu(x))
        sigma = F.softplus(self.sigma(x)) + 1e-5
        dist = self.distribution(mu.view(1, ).data, sigma.view(1, ).data)
        value = self.value(x)
        return dist, value


class PolicyNetwork():
    def __init__(self, n_state, n_action, n_hidden, lr=0.001):
        self.model = ActorCriticModel(n_state, n_action, n_hidden)
        self.optimizer = torch.optim.Adam(self.model.parameters(), lr)

    def update(self, returns, log_probs, state_values):
        """
        Update the weights of the Actor Critic network given the training samples
        @param returns: return (cumulative rewards) for each step in an episode
        @param log_probs: log probability for each step
        @param state_values: state-value for each step
        """
        loss = 0
        for log_prob, value, Gt in zip(log_probs, state_values, returns):
            advantage = Gt - value.item()
            policy_loss = -log_prob * advantage

            value_loss = F.smooth_l1_loss(value, Gt)

            loss += policy_loss + value_loss

        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def predict(self, s):
        """
        Compute the output using the continuous Actor Critic model
        @param s: input state
        @return: Gaussian distribution, state_value
        """
        self.model.training = False
        return self.model(torch.Tensor(s))

    def get_action(self, s):
        """
        Estimate the policy and sample an action, compute its log probability
        @param s: input state
        @return: the selected action, log probability, predicted state-value
        """
        dist, state_value = self.predict(s)
        action = dist.sample().numpy()
        log_prob = dist.log_prob(action[0])
        return action, log_prob, state_value




def actor_critic(env, estimator, n_episode, gamma=1.0):
    """
    continuous Actor Critic algorithm
    @param env: Gym environment
    @param estimator: policy network
    @param n_episode: number of episodes
    @param gamma: the discount factor
    """
    for episode in range(n_episode):
        log_probs = []
        rewards = []
        state_values = []
        state = env.reset()

        while True:
            state = scale_state(state)
            action, log_prob, state_value = estimator.get_action(state)
            action = action.clip(env.action_space.low[0],
                                 env.action_space.high[0])
            next_state, reward, is_done, _ = env.step(action)

            total_reward_episode[episode] += reward
            log_probs.append(log_prob)
            state_values.append(state_value)
            rewards.append(reward)

            if is_done:
                returns = []
                Gt = 0
                pw = 0

                for reward in rewards[::-1]:
                    Gt += gamma ** pw * reward
                    pw += 1
                    returns.append(Gt)

                returns = returns[::-1]
                returns = torch.tensor(returns)
                returns = (returns - returns.mean()) / (returns.std() + 1e-9)

                estimator.update(returns, log_probs, state_values)
                print('Episode: {}, total reward: {}'.format(episode, total_reward_episode[episode]))

                break

            state = next_state


import sklearn.preprocessing
import numpy as np

state_space_samples = np.array(
    [env.observation_space.sample() for x in range(10000)])
scaler = sklearn.preprocessing.StandardScaler()
scaler.fit(state_space_samples)


def scale_state(state):
    scaled = scaler.transform([state])
    return scaled[0]


n_state = env.observation_space.shape[0]
n_action = 1
n_hidden = 128
lr = 0.0003
policy_net = PolicyNetwork(n_state, n_action, n_hidden, lr)


n_episode = 200
gamma = 0.9
total_reward_episode = [0] * n_episode

actor_critic(env, policy_net, n_episode, gamma)

Best Answer

size([]) is valid, but it represents a single value rather than an array, whereas size([1]) is a one-dimensional array containing just one item. It's like comparing 5 with [5]. One way to solve this is:

returns = returns[::-1]
returns_amount = len(returns)
returns = torch.tensor(returns)
returns = (returns - returns.mean()) / (returns.std() + 1e-9)
returns.resize_(returns_amount, 1)

This converts returns into a two-dimensional array, so each Gt you pull out of it will be a one-dimensional array rather than a float.
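A minimal alternative sketch (not from the book or the answer above, just assuming the shapes described there): leave returns one-dimensional and instead make the two arguments of smooth_l1_loss the same shape inside update(), either by unsqueezing the target or by squeezing the prediction:

import torch
import torch.nn.functional as F

value = torch.zeros(1)   # predicted state value, shape torch.Size([1])
Gt = torch.tensor(0.5)   # one element of the normalized returns, shape torch.Size([])

# Option 1: give the target a length-1 dimension -> shapes [1] vs [1]
loss_a = F.smooth_l1_loss(value, Gt.unsqueeze(0))

# Option 2: reduce the prediction to a scalar -> shapes [] vs []
loss_b = F.smooth_l1_loss(value.squeeze(0), Gt)

Either change silences the warning; the resize_-based fix above achieves the same thing by making each Gt a length-1 tensor up front.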

Regarding "python - How to resolve UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1]))?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61912681/
