
python - Randomly sized chunking of a numpy array in Python

Reposted. Author: 行者123. Updated: 2023-12-01 09:31:24

I want to divide an index array into chunks of random sizes (drawn from a finite range of possible sizes), with the chunks also shuffled among each other. I tried the following, which I found here, but it focuses on equally sized chunks.

import random
import numpy as np

a = np.arange(1, 100)

def chunk(xs, n):  # split xs into n roughly equal parts, shuffled
    ys = list(xs)
    random.shuffle(ys)
    size = len(ys) // n
    leftovers = ys[size*n:]
    # hand one leftover element to each of the first len(leftovers) chunks
    for c, xtra in enumerate(leftovers):
        yield ys[c*size:(c+1)*size] + [xtra]
    # the remaining chunks get no extra element
    for c in range(c+1, n):
        yield ys[c*size:(c+1)*size]
In other words, how can I change the above function so that it produces some number of chunks (a random number, shuffled among each other) whose sizes are drawn at random from a range such as [5-10]?
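
For reference, a quick call shows what the equal-size helper above produces (an illustrative check, not part of the original question; it assumes the chunk and a defined above):

# with 99 elements and n=10, every chunk ends up with 9 or 10 elements
print([len(c) for c in chunk(a, 10)])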

Best Answer

This will work:

from itertools import chain
import numpy as np

a = np.arange(1, 100)

def chunk(xs, nlow, nhigh, shuffle=True):
    xs = np.asarray(xs)
    if shuffle:
        # shuffle, if you want
        xs = xs.copy()
        np.random.shuffle(xs)

    # get at least enough random chunk sizes in the specified range, ie nlow <= n <= nhigh
    ns = np.random.randint(nlow, nhigh+1, size=xs.size//nlow)
    # add up the chunk sizes to get the indices at which we'll slice up the input array
    ixs = np.add.accumulate(ns)
    # truncate ixs so that its contents are all valid indices with respect to xs
    ixs = ixs[:np.searchsorted(ixs, xs.size)]

    # yield slices from the input array
    for start, end in zip(chain([None], ixs), chain(ixs, [None])):
        yield xs[start:end]

list(chunk(a, 5, 10))

Output:

[array([67, 79, 17, 62, 12, 37, 70, 24]),
array([98, 48, 88, 59, 47]),
array([52, 60, 89, 23, 43, 44]),
array([ 7, 27, 33, 74, 49, 2]),
array([ 6, 51, 40, 13, 56, 45]),
array([31, 3, 55, 10, 11, 46, 9, 42, 34]),
array([53, 22, 95, 41, 19, 32, 4, 69, 86]),
array([93, 68, 57, 65, 92, 76, 28, 63, 64, 58]),
array([91, 66, 18, 99, 21]),
array([36, 83, 15, 78, 1, 81, 97, 84]),
array([61, 71, 25, 94, 87, 20, 85, 38]),
array([ 8, 96, 75, 30, 77, 14, 72, 29]),
array([35, 90, 82, 73, 39, 5, 26, 50, 16]),
array([80, 54])]
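
To make the index arithmetic concrete, here is a small walk-through with made-up chunk sizes (a sketch added for illustration; the values are not from the answer's random draw):

from itertools import chain
import numpy as np

# suppose the random draw produced these chunk sizes for a 20-element array
ns = np.array([5, 7, 6])
ixs = np.add.accumulate(ns)           # array([ 5, 12, 18])
ixs = ixs[:np.searchsorted(ixs, 20)]  # every entry is already < 20, so nothing is dropped

# the zip/chain pairing produces the slice boundaries (None, 5), (5, 12), (12, 18), (18, None),
# i.e. pieces of length 5, 7, 6 and a final leftover of length 2
for start, end in zip(chain([None], ixs), chain(ixs, [None])):
    print(start, end)

That final (18, None) slice is exactly the kind of undersized leftover chunk that the edit below deals with.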

Edit

My original answer didn't put a lower bound on the size of the final chunk, so it will sometimes come out smaller than the specified minimum (though never larger than the maximum). As far as I know, there is no direct way to handle this. In general, however, you can remove an unwanted region from a random distribution by rejecting any sample that falls in that region. In other words, you can guarantee that the last chunk is large enough by throwing out any proposed set of chunk boundaries that leaves it too small:

def getIxs(xsize, nlow, nhigh):
    # get at least enough random chunk sizes in the specified range, ie nlow <= n <= nhigh
    ns = np.random.randint(nlow, nhigh+1, size=xsize//nlow)

    # add up the chunk sizes to get the indices at which we'll slice up the input array
    ixs = np.add.accumulate(ns)

    # truncate ixs so that its contents are all valid indices with respect to xs
    ixs = ixs[:np.searchsorted(ixs, xsize)]

    return ixs

def chunk(xs, nlow, nhigh):
    xs = np.asarray(xs)

    ixs = getIxs(xs.size, nlow, nhigh)

    # rerun getIxs until the size of the final chunk is large enough
    while (xs.size - ixs[-1]) < nlow:
        ixs = getIxs(xs.size, nlow, nhigh)

    # yield slices from the input array
    for start, end in zip(chain([None], ixs), chain(ixs, [None])):
        yield xs[start:end]

This approach should preserve the overall randomness of the chunk sizes.
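
As a quick sanity check (a minimal sketch, not part of the original answer; it assumes the revised chunk and getIxs defined above), every chunk, including the last one, should now fall inside the requested range:

a = np.arange(1, 100)

sizes = [len(c) for c in chunk(a, 5, 10)]
print(sizes)
print(all(5 <= s <= 10 for s in sizes))  # expected: True
print(sum(sizes) == a.size)              # every element appears exactly once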

Regarding python - randomly sized chunking of a numpy array in Python, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49943442/
