
python - Why doesn't my DQN agent find the optimal policy in a non-deterministic environment?


Edit: The following also seems to be the case for FrozenLake-v0. Note that I am not interested in plain Q-learning, because I want to see solutions that work with a continuous observation space.

I recently created the banana_gym OpenAI environment. The scenario is as follows:

You have a banana. It has to be sold within 2 days, because it goes bad on the 3rd day. You can choose the price x, but the banana only sells with probability

p(x) = (1 + e) / (1 + e^(x + 1))

The reward on a sale is x - 1. If the banana is not sold by the third day, the reward is -1 (intuition: you paid 1 EUR for the banana). Hence the environment is non-deterministic (stochastic).

Actions: You can set the price to any value in {0.00, 0.10, 0.20, ..., 2.00}

Observation: the remaining time (source)
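For reference, interacting with the environment follows the usual gym loop. Below is a minimal sketch, assuming the classic gym API of that era (step() returning a 4-tuple); the action index i maps to price i / 10, as in the policy printout further down:

import gym
import gym_banana  # registers Banana-v0

env = gym.make('Banana-v0')
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # a random price index in {0, ..., 20}
    obs, reward, done, info = env.step(action)
    print("price={:.2f} reward={:.2f}".format(action / 10., reward))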

I computed the optimal policy:

Opt at step  1: price 1.50 has value -0.26 (chance: 0.28)
Opt at step  2: price 1.10 has value -0.55 (chance: 0.41)

This also matches my intuition: first try to sell the banana at a higher price, because you know you get another attempt if it doesn't sell; then lower the price, but keep it above 0.00.
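To make the numbers concrete, the two values follow from a backward induction over the two time steps: V_2 = -1 (the banana has spoiled), and V_i = p(x) * (x - 1) + (1 - p(x)) * V_{i+1}. A quick sketch that reproduces the table above:

import math

def p(x):
    # same sell probability as get_chance() in the script below
    return (1 + math.e) / (1 + math.exp(x + 1))

v2 = -1.0                                       # banana spoiled: reward -1
v1 = p(1.10) * (1.10 - 1) + (1 - p(1.10)) * v2  # last day at price 1.10
v0 = p(1.50) * (1.50 - 1) + (1 - p(1.50)) * v1  # first day at price 1.50
print(round(v1, 2), round(v0, 2))               # -0.55 -0.26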

Optimal policy calculation

I'm fairly sure this is correct, but for the sake of completeness:

#!/usr/bin/env python

"""Calculate the optimal banana pricing policy."""

import math
import numpy as np


def main(total_time_steps, price_not_sold, chance_to_sell):
    """
    Compare the optimal policy to a given policy.

    Parameters
    ----------
    total_time_steps : int
        How often the agent may offer the banana
    price_not_sold : float
        How much do we have to pay if we don't sell until
        total_time_steps is over?
    chance_to_sell : function
        A function that takes the price as an input and outputs the
        probability that a banana will be sold.
    """
    r = get_optimal_policy(total_time_steps,
                           price_not_sold,
                           chance_to_sell)
    enum_obj = enumerate(zip(r['optimal_prices'], r['values']), start=1)
    for i, (price, value) in enum_obj:
        print("Opt at step {:>2}: price {:>4.2f} has value {:>4.2f} "
              "(chance: {:>4.2f})"
              .format(i, price, value, chance_to_sell(price)))


def get_optimal_policy(total_time_steps,
                       price_not_sold,
                       chance_to_sell=None):
    """
    Get the optimal policy for the Banana environment.

    This means for each time step, calculate what is the smartest price
    to set.

    Parameters
    ----------
    total_time_steps : int
    price_not_sold : float
    chance_to_sell : function, optional

    Returns
    -------
    results : dict
        'optimal_prices' : List of best prices to set at a given time
        'values' : values of the value function at a given step with the
                   optimal policy
    """
    if chance_to_sell is None:
        chance_to_sell = get_chance
    values = [None for i in range(total_time_steps + 1)]
    optimal_prices = [None for i in range(total_time_steps)]

    # punishment if a banana is not sold
    values[total_time_steps] = (price_not_sold - 1)

    # backward induction: work from the last time step to the first
    for i in range(total_time_steps - 1, -1, -1):
        opt_price = None
        opt_price_value = None
        for price in np.arange(0.0, 2.01, 0.10):
            p_t = chance_to_sell(price)
            reward_sold = (price - 1)
            value = p_t * reward_sold + (1 - p_t) * values[i + 1]
            if (opt_price_value is None) or (opt_price_value < value):
                opt_price_value = value
                opt_price = price
        values[i] = opt_price_value
        optimal_prices[i] = opt_price
    return {'optimal_prices': optimal_prices,
            'values': values}


def get_chance(x):
    """
    Get probability that a banana will be sold at a given price x.

    Parameters
    ----------
    x : float

    Returns
    -------
    chance_to_sell : float
    """
    return (1 + math.exp(1)) / (1. + math.exp(x + 1))


if __name__ == '__main__':
    total_time_steps = 2
    main(total_time_steps=total_time_steps,
         price_not_sold=0.0,
         chance_to_sell=get_chance)

DQN + policy extraction

The following DQN agent (implemented with Keras-RL) works for the CartPole-v0 environment, but for the Banana environment it learns the policy

1: Take action 19 (price= 1.90)
0: Take action 14 (price= 1.40)

It goes in the right direction, but it consistently learns this policy rather than the optimal one.

Why doesn't the DQN agent learn the optimal policy?

Run with:

$ python dqn.py --env Banana-v0 --steps 50000

The code of dqn.py:

#!/usr/bin/env python

import numpy as np
import gym
import gym_banana

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.policy import LinearAnnealedPolicy, EpsGreedyQPolicy
from rl.memory import EpisodeParameterMemory


def main(env_name, nb_steps):
    # Get the environment and extract the number of actions.
    env = gym.make(env_name)
    np.random.seed(123)
    env.seed(123)

    nb_actions = env.action_space.n
    input_shape = (1,) + env.observation_space.shape
    model = create_nn_model(input_shape, nb_actions)

    # Finally, we configure and compile our agent.
    memory = EpisodeParameterMemory(limit=2000, window_length=1)

    policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr='eps', value_max=1.,
                                  value_min=.1, value_test=.05,
                                  nb_steps=1000000)
    agent = DQNAgent(model=model, nb_actions=nb_actions, policy=policy,
                     memory=memory, nb_steps_warmup=50000,
                     gamma=.99, target_model_update=10000,
                     train_interval=4, delta_clip=1.)
    agent.compile(Adam(lr=.00025), metrics=['mae'])
    agent.fit(env, nb_steps=nb_steps, visualize=False, verbose=1)

    # Get the learned policy and print it
    policy = get_policy(agent, env)
    for remaining_time, action in sorted(policy.items(), reverse=True):
        print("{:>2}: Take action {:>2} (price={:>5.2f})"
              .format(remaining_time, action, 2 / 20. * action))


def create_nn_model(input_shape, nb_actions):
    """
    Create a neural network model which maps the input to actions.

    Parameters
    ----------
    input_shape : tuple of int
    nb_actions : int

    Returns
    -------
    model : keras Model object
    """
    model = Sequential()
    model.add(Flatten(input_shape=input_shape))
    model.add(Dense(32, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(nb_actions, activation='linear'))  # important to be linear
    print(model.summary())
    return model


def get_policy(agent, env):
    # Query the greedy action for every possible observation (remaining time)
    policy = {}
    for x_in in range(env.TOTAL_TIME_STEPS):
        action = agent.forward(np.array([x_in]))
        policy[x_in] = action
    return policy


def get_parser():
    """Get parser object for dqn.py."""
    from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
    parser = ArgumentParser(description=__doc__,
                            formatter_class=ArgumentDefaultsHelpFormatter)
    parser.add_argument("--env",
                        dest="environment",
                        help="OpenAI Gym environment",
                        metavar="ENVIRONMENT",
                        default="CartPole-v0")
    parser.add_argument("--steps",
                        dest="steps",
                        default=10000,
                        type=int,
                        help="how many steps to train?")
    return parser


if __name__ == "__main__":
    args = get_parser().parse_args()
    main(args.environment, args.steps)

Best Answer

If I'm interpreting your code correctly, it looks to me like you're using 50K training steps:

$ python dqn.py --env Banana-v0 --steps 50000

but you also have a warmup period of 50K steps, due to the following in the DQNAgent constructor:

nb_steps_warmup=50000

I believe this means you're effectively not doing any training at all, since the warmup period is only used to collect experience in the replay buffer, right? If so, the solution may be as simple as reducing the number of warmup steps or increasing the number of training steps.
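In code, that first fix would be a small change to the constructor call in dqn.py; the value of 50 below is just an illustrative choice, small enough that nearly all of the 50K steps are spent training:

agent = DQNAgent(model=model, nb_actions=nb_actions, policy=policy,
                 memory=memory,
                 nb_steps_warmup=50,  # was 50000, i.e. the entire run
                 gamma=.99, target_model_update=10000,
                 train_interval=4, delta_clip=1.)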

For future reference (or in case my interpretation of the code above is wrong), I recommend always plotting a learning curve (episode rewards on the y-axis, training steps on the x-axis). That is always useful for understanding what is going on and helps you focus your debugging on the important parts of the code. If the rewards do not increase at all, you know that for whatever reason no learning is happening. If they do increase for a while but then plateau, you can try lowering the learning rate. If they keep increasing all the way to the end, you know it probably has not converged yet, and you can try increasing the number of training steps or the learning rate.
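A sketch of how such a plot could be produced with Keras-RL, assuming (as I recall from its API) that fit() returns a Keras History object whose history dict contains an 'episode_reward' list:

import matplotlib.pyplot as plt

# replace the plain agent.fit(...) call with one that keeps the history
history = agent.fit(env, nb_steps=nb_steps, visualize=False, verbose=1)

plt.plot(history.history['episode_reward'])
plt.xlabel('episode')
plt.ylabel('episode reward')
plt.savefig('learning_curve.png')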

Regarding "python - Why doesn't my DQN agent find the optimal policy in a non-deterministic environment?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47259715/
