
python - How to correctly optimize a shared network between actor and critic?


I am building an actor-critic reinforcement learning algorithm to solve an environment, and I want to use a single encoder to learn a representation of the environment.

When I share the encoder between the actor and the critic, my networks do not learn anything:

import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    def __init__(self, state_dim):
        super(Encoder, self).__init__()

        self.l1 = nn.Linear(state_dim, 512)

    def forward(self, state):
        a = F.relu(self.l1(state))
        return a


class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, max_action):
        super(Actor, self).__init__()

        self.l1 = nn.Linear(state_dim, 128)
        self.l3 = nn.Linear(128, action_dim)

        self.max_action = max_action

    def forward(self, state):
        a = F.relu(self.l1(state))
        # a = F.relu(self.l2(a))
        a = torch.tanh(self.l3(a)) * self.max_action
        return a


class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super(Critic, self).__init__()

        self.l1 = nn.Linear(state_dim + action_dim, 128)
        self.l3 = nn.Linear(128, 1)

    def forward(self, state, action):
        state_action = torch.cat([state, action], 1)

        q = F.relu(self.l1(state_action))
        # q = F.relu(self.l2(q))
        q = self.l3(q)
        return q

However, when I give the actor and the critic each their own encoder, it learns fine:

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, max_action):
        super(Actor, self).__init__()

        self.l1 = nn.Linear(state_dim, 400)
        self.l2 = nn.Linear(400, 300)
        self.l3 = nn.Linear(300, action_dim)

        self.max_action = max_action

    def forward(self, state):
        a = F.relu(self.l1(state))
        a = F.relu(self.l2(a))
        a = torch.tanh(self.l3(a)) * self.max_action
        return a


class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super(Critic, self).__init__()

        self.l1 = nn.Linear(state_dim + action_dim, 400)
        self.l2 = nn.Linear(400, 300)
        self.l3 = nn.Linear(300, 1)

    def forward(self, state, action):
        state_action = torch.cat([state, action], 1)

        q = F.relu(self.l1(state_action))
        q = F.relu(self.l2(q))
        q = self.l3(q)
        return q

I am fairly sure the problem lies in the optimizers. In the shared-encoder code, I define them as follows:

self.actor_optimizer = optim.Adam(list(self.actor.parameters()) +
                                  list(self.encoder.parameters()))
self.critic_optimizer = optim.Adam(list(self.critic.parameters()) +
                                   list(self.encoder.parameters()))

With separate encoders, it is simply:

self.actor_optimizer = optim.Adam((self.actor.parameters()))
self.critic_optimizer = optim.Adam((self.critic.parameters()))

Because this is an actor-critic algorithm, there have to be two optimizers.

How can I combine the two optimizers so that the encoder is optimized correctly?

Best Answer

I am not sure exactly how you are sharing the encoder.

However, I suggest creating a single encoder instance and passing it to both the actor and the critic:

encoder_net = Encoder(state_dim)
actor = Actor(encoder_net, state_dim, action_dim, max_action)
critic = Critic(encoder_net, state_dim)

During the forward pass, pass the batch of states through the encoder first and then through the rest of the network, for example:

class Encoder(nn.Module):
    def __init__(self, state_dim):
        super(Encoder, self).__init__()

        self.l1 = nn.Linear(state_dim, 512)

    def forward(self, state):
        a = F.relu(self.l1(state))
        return a


class Actor(nn.Module):
    def __init__(self, encoder, state_dim, action_dim, max_action):
        super(Actor, self).__init__()
        self.encoder = encoder

        self.l1 = nn.Linear(512, 128)
        self.l3 = nn.Linear(128, action_dim)

        self.max_action = max_action

    def forward(self, state):
        state = self.encoder(state)
        a = F.relu(self.l1(state))
        # a = F.relu(self.l2(a))
        a = torch.tanh(self.l3(a)) * self.max_action
        return a


class Critic(nn.Module):
    def __init__(self, encoder, state_dim):
        super(Critic, self).__init__()
        self.encoder = encoder

        self.l1 = nn.Linear(512, 128)
        self.l3 = nn.Linear(128, 1)

    def forward(self, state):
        state = self.encoder(state)

        q = F.relu(self.l1(state))
        # q = F.relu(self.l2(q))
        q = self.l3(q)
        return q

Note: the critic network is now a function approximator for the state value function V(s), not for the state-action value function Q(s, a).
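If you do need to keep the state-action form Q(s, a) as in your original critic, a minimal sketch (my own variant, not part of the original answer, assuming the same 512-dimensional encoder output) is to encode the state first and then concatenate the action:

class QCritic(nn.Module):
    # Hypothetical variant: keeps Q(s, a) while still sharing the encoder
    def __init__(self, encoder, action_dim):
        super(QCritic, self).__init__()
        self.encoder = encoder

        # 512 matches the encoder output size used above
        self.l1 = nn.Linear(512 + action_dim, 128)
        self.l3 = nn.Linear(128, 1)

    def forward(self, state, action):
        state = self.encoder(state)
        q = F.relu(self.l1(torch.cat([state, action], 1)))
        q = self.l3(q)
        return q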

With this shared-encoder implementation, you can run the optimization without explicitly passing the encoder parameters to the optimizers:

self.actor_optimizer = optim.Adam((self.actor.parameters()))
self.critic_optimizer = optim.Adam((self.critic.parameters()))

This works because the encoder parameters are now shared between the two networks: they are included in both actor.parameters() and critic.parameters().
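As a quick sanity check (a sketch with made-up dimensions, not part of the original answer), you can verify that the encoder's weight tensor is the very same object inside both modules and therefore appears in both parameter lists handed to the optimizers:

# Hypothetical dimensions, for illustration only
state_dim, action_dim, max_action = 8, 2, 1.0

encoder_net = Encoder(state_dim)
actor = Actor(encoder_net, state_dim, action_dim, max_action)
critic = Critic(encoder_net, state_dim)

# Both networks reference the same encoder weights...
assert actor.encoder.l1.weight is critic.encoder.l1.weight

# ...so the encoder parameters are included in both parameter lists
encoder_params = set(encoder_net.parameters())
assert encoder_params <= set(actor.parameters())
assert encoder_params <= set(critic.parameters())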

Hope this helps! :)

Regarding "python - How to correctly optimize a shared network between actor and critic?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55812434/
