
python - Training a neural network with Gym


I have some problems with the code provided below. I am working with Python 3.6, and I have already reinstalled Python and all the modules needed to run the code. Everything I have done is based on this tutorial.

Problem description:

When I run this code, I get the following warnings and no output at all. I don't understand what these warnings mean or how to fix them. I would appreciate any help.

Warning (from warnings module):
  File "D:\Users\Rafal\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py\__init__.py", line 36
    from ._conv import register_converters as _register_converters
FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.

And also:

WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.

The code I am running:

import gym
import random
import numpy as np
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from statistics import median, mean
from collections import Counter

LR = 1e-3
env = gym.make("CartPole-v0")
env.reset()
goal_steps = 500
score_requirement = 50
initial_games = 10000


def initial_population():
    # [OBS, MOVES]
    training_data = []
    # all scores:
    scores = []
    # just the scores that met our threshold:
    accepted_scores = []
    # iterate through however many games we want:
    for _ in range(initial_games):
        score = 0
        # moves specifically from this environment:
        game_memory = []
        # previous observation that we saw
        prev_observation = []
        # for each frame in 200
        for _ in range(goal_steps):
            # choose random action (0 or 1)
            action = random.randrange(0, 2)
            # do it!
            observation, reward, done, info = env.step(action)

            # notice that the observation is returned FROM the action
            # so we'll store the previous observation here, pairing
            # the prev observation to the action we'll take.
            if len(prev_observation) > 0:
                game_memory.append([prev_observation, action])
            prev_observation = observation
            score += reward
            if done: break

        # IF our score is higher than our threshold, we'd like to save
        # every move we made
        # NOTE the reinforcement methodology here.
        # all we're doing is reinforcing the score, we're not trying
        # to influence the machine in any way as to HOW that score is
        # reached.
        if score >= score_requirement:
            accepted_scores.append(score)
            for data in game_memory:
                # convert to one-hot (this is the output layer for our neural network)
                if data[1] == 1:
                    output = [0, 1]
                elif data[1] == 0:
                    output = [1, 0]

                # saving our training data
                training_data.append([data[0], output])

        # reset env to play again
        env.reset()
        # save overall scores
        scores.append(score)

    # just in case you wanted to reference later
    training_data_save = np.array(training_data)
    np.save('saved.npy', training_data_save)

    # some stats here, to further illustrate the neural network magic!
    print('Average accepted score:', mean(accepted_scores))
    print('Median score for accepted scores:', median(accepted_scores))
    print(Counter(accepted_scores))

    return training_data
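
Note: as posted, the script only defines initial_population() and never calls it, which by itself explains the "no output at all" part, independently of the warnings. The tutorial presumably follows the definition with a driver call along these lines (an assumption, not part of the quoted code):

training_data = initial_population()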

Best answer

To answer the second question, concerning this error:

gym.spaces.Box autodetected dtype as <class 'numpy.float32'>

Go to the directory of your downloaded "gym" files, enter gym/spaces/, and open the box.py file.
Near line 12 you should see:

def __init__(self, low=None, high=None, shape=None, dtype=None):

Change dtype=None to dtype=np.float32.

That fixed the error for me.
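
If the warning comes from Box spaces constructed in your own code, a less invasive alternative to patching box.py (a sketch under that assumption, not part of the original answer) is to pass the dtype explicitly when creating the space:

import numpy as np
from gym import spaces

# Supplying dtype up front avoids the autodetection warning without
# editing the library source. Bounds and shape here are illustrative.
space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

For built-in environments such as CartPole-v0, whose spaces are created inside gym itself, upgrading gym should also silence the warning, since later releases pass explicit dtypes internally.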

Regarding python - Training a neural network with Gym, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49218443/
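
As for the first warning quoted in the question: the h5py FutureWarning is a deprecation notice that appears when an older h5py is imported against a newer numpy; it is harmless and does not stop the script. Assuming a pip-based installation, upgrading h5py is the usual remedy, and the notice can also be suppressed explicitly, provided the filter is set before whatever import first pulls in h5py:

import warnings
# Silence the numpy deprecation notice raised while h5py is imported.
warnings.filterwarnings('ignore', category=FutureWarning)
import h5py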
