I've put together a minimal reproducible example below that runs easily in a fresh Google Colab notebook. After the first-time install finishes, just do Runtime > Restart and Run All for it to take effect.
I made a simple roulette-game environment below for testing. For the observation space I created a gym.spaces.Dict(), as you'll see (the code is well commented).
It trains fine, but when it reaches the test iterations I get this error:
ValueError                                Traceback (most recent call last)
<ipython-input-56-7c2cb900b44f> in <module>
      6 obs = env.reset()
      7 for i in range(1000):
----> 8     action, _state = model.predict(obs, deterministic=True)
      9     obs, reward, done, info = env.step(action)
     10     env.render()

ValueError: Error: Unexpected observation shape () for Box environment, please use (1,) or (n_env, 1) for the observation shape.
I read somewhere that Dict spaces need to be flattened with gym.wrappers.FlattenObservation, so I changed this line:
action, _state = model.predict(obs, deterministic=True)
...to this:
action, _state = model.predict(FlattenObservation(obs), deterministic=True)
...which produces this error:
AttributeError                            Traceback (most recent call last)
<ipython-input-57-87824c61fc45> in <module>
      6 obs = env.reset()
      7 for i in range(1000):
----> 8     action, _state = model.predict(FlattenObservation(obs), deterministic=True)
      9     obs, reward, done, info = env.step(action)
     10     env.render()

AttributeError: 'collections.OrderedDict' object has no attribute 'observation_space'
I also tried the following, which ends in the same error as above:
obs = env.reset()
obs = FlattenObservation(obs)
Clearly I'm doing something wrong; I just can't tell what, since this is my first time working with a Dict space.
import os, sys

if not os.path.isdir('/usr/local/lib/python3.7/dist-packages/stable_baselines3'):
    !pip3 install stable_baselines3
    print("\n\n\n Stable Baselines3 has been installed, Restart and Run All now. DO NOT factory reset, or you'll have to start over\n")
    sys.exit(0)

from random import randint
from numpy import inf, float32, array, int32, int64
import gym
from gym.wrappers import FlattenObservation
from stable_baselines3 import A2C, DQN, PPO


"""Roulette environment class"""
class Roulette_Environment(gym.Env):

    metadata = {'render.modes': ['human', 'text']}

    """Initialize the environment"""
    def __init__(self):
        super(Roulette_Environment, self).__init__()

        # Some global variables
        self.max_table_limit = 1000
        self.initial_bankroll = 2000

        # Spaces
        # Each number on roulette board can have 0-1000 units placed on it
        self.action_space = gym.spaces.Box(low=0, high=1000, shape=(37,))

        # We're going to keep track of how many times each number shows up
        # while we're playing, plus our current bankroll and the max
        # table betting limit so the agent knows how much $ in total is allowed
        # to be placed on the table. Going to use a Dict space for this.
        self.observation_space = gym.spaces.Dict(
            {
                # One hit counter per roulette number 0-36 (identical Box sub-spaces)
                **{str(n): gym.spaces.Box(low=0, high=inf, shape=(1,), dtype=int)
                   for n in range(37)},
                "current_bankroll": gym.spaces.Box(low=-inf, high=inf, shape=(1,), dtype=int),
                "max_table_limit": gym.spaces.Box(low=0, high=inf, shape=(1,), dtype=int),
            }
        )

    """Reset the Environment"""
    def reset(self):
        self.current_bankroll = self.initial_bankroll
        self.done = False

        # Take a sample from the observation_space to modify the values of
        self.current_state = self.observation_space.sample()

        # Reset each number being tracked throughout gameplay to 0
        for i in range(0, 37):
            self.current_state[str(i)] = 0

        # Reset our globals
        self.current_state['current_bankroll'] = self.current_bankroll
        self.current_state['max_table_limit'] = self.max_table_limit

        return self.current_state

    """Step Through the Environment"""
    def step(self, action):
        # Convert actions to ints cuz they show up as floats,
        # even when defined as ints in the environment.
        # https://github.com/openai/gym/issues/3107
        for i in range(len(action)):
            action[i] = int(action[i])
        self.current_action = action

        # Subtract your bets from bankroll
        sum_of_bets = sum([bet for bet in self.current_action])

        # Spin the wheel
        self.current_number = randint(a=0, b=36)

        # Calculate payout/reward
        self.reward = 36 * self.current_action[self.current_number] - sum_of_bets
        self.current_bankroll += self.reward

        # Update the current state
        self.current_state['current_bankroll'] = self.current_bankroll
        self.current_state[str(self.current_number)] += 1

        # If we've doubled our money, or lost our money
        if self.current_bankroll >= self.initial_bankroll * 2 or self.current_bankroll <= 0:
            self.done = True

        return self.current_state, self.reward, self.done, {}

    """Render the Environment"""
    def render(self, mode='text'):
        # Text rendering
        if mode == "text":
            print(f'Bets Placed: {self.current_action}')
            print(f'Number rolled: {self.current_number}')
            print(f'Reward: {self.reward}')
            print(f'New Bankroll: {self.current_bankroll}')


env = Roulette_Environment()

model = PPO('MultiInputPolicy', env, verbose=1)
model.learn(total_timesteps=10000)

obs = env.reset()
# obs = FlattenObservation(obs)

for i in range(1000):
    action, _state = model.predict(obs, deterministic=True)
    # action, _state = model.predict(FlattenObservation(obs), deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
Best answer
Unfortunately, stable-baselines3 is rather picky about the format of observations. I ran into the same problem a few days ago. Some documentation, along with an example model, helped me solve it:
However, the values of the Box sub-spaces must be numpy.ndarrays with the correct dtypes. For Discrete observations, the observation can also be passed as a plain int value, although I'm not entirely sure whether that still holds for multi-dimensional MultiDiscrete spaces.
The fix for your example is to replace the code wherever a value of the Dict gets reassigned, using:
self.current_state[key] = np.array([value], dtype=int)
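Why this works can be seen with a numpy-only sketch (no gym or SB3 needed): assigning a bare Python scalar produces a 0-dimensional array with shape `()`, which is exactly the shape the earlier ValueError complains about, while wrapping the value in a one-element list yields the `(1,)`-shaped integer array that a `Box(shape=(1,), dtype=int)` sub-space expects.

```python
import numpy as np

value = 2000

# A bare scalar is seen as a 0-d array: shape (), which SB3 rejects.
assert np.asarray(value).shape == ()

# Wrapping it in a one-element list matches Box(..., shape=(1,), dtype=int).
fixed = np.array([value], dtype=int)
assert fixed.shape == (1,)
assert fixed.dtype.kind == 'i'
```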
Below is a working implementation of your problem (my system runs Python=3.10, but it should also work on lower versions).
import os, sys
from random import randint
from numpy import inf, float32, array, int32, int64
import gym
from gym.wrappers import FlattenObservation
from stable_baselines3 import A2C, DQN, PPO
import numpy as np


"""Roulette environment class"""
class Roulette_Environment(gym.Env):

    metadata = {'render.modes': ['human', 'text']}

    """Initialize the environment"""
    def __init__(self):
        super(Roulette_Environment, self).__init__()

        # Some global variables
        self.max_table_limit = 1000
        self.initial_bankroll = 2000

        # Spaces
        # Each number on roulette board can have 0-1000 units placed on it
        self.action_space = gym.spaces.Box(low=0, high=1000, shape=(37,))

        # We're going to keep track of how many times each number shows up
        # while we're playing, plus our current bankroll and the max
        # table betting limit so the agent knows how much $ in total is allowed
        # to be placed on the table. Going to use a Dict space for this.
        self.observation_space = gym.spaces.Dict(
            {
                # One hit counter per roulette number 0-36 (identical Box sub-spaces)
                **{str(n): gym.spaces.Box(low=0, high=inf, shape=(1,), dtype=int)
                   for n in range(37)},
                "current_bankroll": gym.spaces.Box(low=-inf, high=inf, shape=(1,), dtype=int),
                "max_table_limit": gym.spaces.Box(low=0, high=inf, shape=(1,), dtype=int),
            }
        )

    """Reset the Environment"""
    def reset(self):
        self.current_bankroll = self.initial_bankroll
        self.done = False

        # Take a sample from the observation_space to modify the values of
        self.current_state = self.observation_space.sample()

        # Reset each number being tracked throughout gameplay to 0
        for i in range(0, 37):
            self.current_state[str(i)] = np.array([0], dtype=int)

        # Reset our globals
        self.current_state['current_bankroll'] = np.array([self.current_bankroll], dtype=int)
        self.current_state['max_table_limit'] = np.array([self.max_table_limit], dtype=int)

        return self.current_state

    """Step Through the Environment"""
    def step(self, action):
        # Convert actions to ints cuz they show up as floats,
        # even when defined as ints in the environment.
        # https://github.com/openai/gym/issues/3107
        for i in range(len(action)):
            action[i] = int(action[i])
        self.current_action = action

        # Subtract your bets from bankroll
        sum_of_bets = sum([bet for bet in self.current_action])

        # Spin the wheel
        self.current_number = randint(a=0, b=36)

        # Calculate payout/reward
        self.reward = 36 * self.current_action[self.current_number] - sum_of_bets
        self.current_bankroll += self.reward

        # Update the current state
        self.current_state['current_bankroll'] = np.array([self.current_bankroll], dtype=int)
        self.current_state[str(self.current_number)] += np.array([1], dtype=int)

        # If we've doubled our money, or lost our money
        if self.current_bankroll >= self.initial_bankroll * 2 or self.current_bankroll <= 0:
            self.done = True

        return self.current_state, self.reward, self.done, {}

    """Render the Environment"""
    def render(self, mode='text'):
        # Text rendering
        if mode == "text":
            print(f'Bets Placed: {self.current_action}')
            print(f'Number rolled: {self.current_number}')
            print(f'Reward: {self.reward}')
            print(f'New Bankroll: {self.current_bankroll}')


env = Roulette_Environment()

model = PPO('MultiInputPolicy', env, verbose=1)
model.learn(total_timesteps=10)

obs = env.reset()
# obs = FlattenObservation(obs)

for i in range(1000):
    action, _state = model.predict(obs, deterministic=True)
    # action, _state = model.predict(FlattenObservation(obs), deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
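If you would rather not repeat np.array([...], dtype=int) at every assignment, one option is to normalize the whole state dict in one place just before returning it from reset() and step(). This is only a sketch, not part of the original answer, and the helper name as_box_values is made up for illustration:

```python
import numpy as np

def as_box_values(state, dtype=int):
    # Coerce every value of a dict observation into a (1,)-shaped ndarray
    # so each entry matches a Box sub-space declared with shape=(1,).
    return {k: np.atleast_1d(np.asarray(v, dtype=dtype)) for k, v in state.items()}

obs = as_box_values({"0": 0, "current_bankroll": 2000, "max_table_limit": 1000})
print(obs["current_bankroll"].shape)  # (1,)
```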
Regarding "python-3.x - Dict observation space for stable-baselines3 not working", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/73922332/