
python - Deep Learning Udacity course: Prob 2 assignment 1 (notMNIST)


After reading this and taking the course, I'm struggling with the second problem in Assignment 1 (notMNIST):

Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.

Here is what I tried:

import random
rand_smpl = [ train_datasets[i] for i in sorted(random.sample(xrange(len(train_datasets)), 1)) ]
print(rand_smpl)
filename = rand_smpl[0]
import pickle
loaded_pickle = pickle.load( open( filename, "r" ) )
image_size = 28 # Pixel width and height.
import numpy as np
dataset = np.ndarray(shape=(len(loaded_pickle), image_size, image_size),
                     dtype=np.float32)
import matplotlib.pyplot as plt

plt.plot(dataset[2])
plt.ylabel('some numbers')
plt.show()

But this is what I got:

(screenshot of the resulting plot omitted)

This doesn't make much sense. To be honest, the same may be true of my code, since I'm not sure how to approach this problem!


The pickles were created like this:

image_size = 28  # Pixel width and height.
pixel_depth = 255.0  # Number of levels per pixel.

def load_letter(folder, min_num_images):
    """Load the data for a single letter label."""
    image_files = os.listdir(folder)
    dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
                         dtype=np.float32)
    print(folder)
    num_images = 0
    for image in image_files:
        image_file = os.path.join(folder, image)
        try:
            image_data = (ndimage.imread(image_file).astype(float) -
                          pixel_depth / 2) / pixel_depth
            if image_data.shape != (image_size, image_size):
                raise Exception('Unexpected image shape: %s' % str(image_data.shape))
            dataset[num_images, :, :] = image_data
            num_images = num_images + 1
        except IOError as e:
            print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')

    dataset = dataset[0:num_images, :, :]
    if num_images < min_num_images:
        raise Exception('Many fewer images than expected: %d < %d' %
                        (num_images, min_num_images))

    print('Full dataset tensor:', dataset.shape)
    print('Mean:', np.mean(dataset))
    print('Standard deviation:', np.std(dataset))
    return dataset

The function is called like this:

dataset = load_letter(folder, min_num_images_per_class)
try:
    with open(set_filename, 'wb') as f:
        pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
    print('Unable to save data to', set_filename, ':', e)

The idea here is:

Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.

We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
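For intuition about those statistics, here is a minimal illustrative sketch (not from the original post) of why the normalization used in load_letter maps pixels into [-0.5, 0.5] with a mean near zero, and why the standard deviation lands near 0.5 for glyph images whose pixels are mostly either black or white:

import numpy as np

# Toy near-binary "glyph": every pixel is either 0 or 255 (hypothetical data).
pixel_depth = 255.0
pixels = np.random.choice([0.0, 255.0], size=(28, 28))

# Same transform as in load_letter above.
normalized = (pixels - pixel_depth / 2) / pixel_depth

print(normalized.min(), normalized.max())   # roughly -0.5 and 0.5
print(normalized.mean(), normalized.std())  # mean near 0, std near 0.5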

Best answer

Proceed as follows:

# define a function to convert a label to a letter
def letter(i):
    return 'abcdefghij'[i]


# you need %matplotlib inline to be able to show images in a Python notebook
%matplotlib inline
# some random number in range 0 - length of dataset
sample_idx = np.random.randint(0, len(train_dataset))
# now we show it
plt.imshow(train_dataset[sample_idx])
plt.title("Char " + letter(train_labels[sample_idx]))

Your code actually changed the type of the dataset; it is not an ndarray of size (220000, 28, 28).

In general, a pickle is a file that holds some objects, not the array itself. You should use the object from the pickle directly to get your train dataset (using the notation from your code snippet):

#will give you train_dataset and labels
train_dataset = loaded_pickle['train_dataset']
train_labels = loaded_pickle['train_labels']
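Note that the snippet above assumes loaded_pickle is a dict. The per-letter pickles created by load_letter (the filenames collected in train_datasets) each contain a bare (num_images, 28, 28) ndarray, because that is what gets passed to pickle.dump above. So a minimal sketch of the question's approach for those files could look like this (binary read mode and a random sample index are the key fixes; names follow the question's snippet):

import pickle
import random
import matplotlib.pyplot as plt

# Pick one per-letter pickle at random from the list of filenames.
filename = random.choice(train_datasets)
with open(filename, 'rb') as f:     # binary mode, not "r"
    letter_set = pickle.load(f)     # a (num_images, 28, 28) ndarray

sample_idx = random.randint(0, len(letter_set) - 1)
plt.imshow(letter_set[sample_idx])  # imshow (2-D image), not plot
plt.show()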

UPDATE:

As requested by @gsarmas, the link to my solution for the whole Assignment1 is here.

The code is commented and mostly self-explanatory, but if you have any questions feel free to reach out on github in whatever way you prefer.

Regarding python - Deep Learning Udacity course: Prob 2 assignment 1 (notMNIST), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38189153/
