
python - Probability distribution in Python


I have a bunch of keys, each with an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, value) pairs to be chosen than for less unlikely (more likely) objects. I am wondering if you have any suggestions, preferably an existing Python module I could use, else I will need to make it myself.

I have checked the random module; it does not seem to provide this.

I have to make millions of such choices for 1000 different sets of objects, each containing 2,455 objects. The sets will exchange objects among each other, so the chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2.433 million objects in total; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU time is limited.
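As an aside: in modern Python (3.6+), `random.choices` handles weighted selection directly; it did not exist when this question was asked. A minimal sketch with made-up weights (note the question's weights measure *unlikeliness*, so they would need inverting first):

```python
import random

# Hypothetical example data: three keys with likelihood weights.
items = {'one': 20, 'two': 2, 'three': 50}
keys = list(items)
weights = [items[k] for k in keys]

# k=10 draws ten keys with replacement, proportionally to their weights.
picked = random.choices(keys, weights=weights, k=10)
print(picked)
```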

Thanks

Update:

Okay, I tried to consider your suggestions wisely, but time is so limited...

I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making it more efficient:

def windex(dict, sum, max):
    '''an attempt to make a random.choose() function that makes
    weighted choices accepts a dictionary with the item_key and
    certainty_value as a pair like:
    >>> x = [('one', 20), ('two', 2), ('three', 50)], the
    maximum certainty value (max) and the sum of all certainties.'''
    n = random.uniform(0, 1)
    sum = max*len(dict)-sum  # was max*len(list)-sum; 'list' here was the builtin, a bug
    for key, certainty in dict.iteritems():
        weight = float(max-certainty)/sum
        if n < weight:
            break
        n = n - weight
    return key

I hope to gain efficiency by dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys are saving me so much time and effort while increasing my effectiveness, it is crazy. Thanks! Thanks! Thanks!

Update 2:

I decided to make it more efficient by letting it pick several choices at once. This will cause an acceptable loss in precision in my algorithm, since it is dynamic in nature. Anyway, here is what I have now:

def weightedChoices(dict, sum, max, choices=10):
    '''an attempt to make a random.choose() function that makes
    weighted choices accepts a dictionary with the item_key and
    certainty_value as a pair like:
    >>> x = [('one', 20), ('two', 2), ('three', 50)], the
    maximum certainty value (max) and the sum of all certainties.'''
    list = [random.uniform(0, 1) for i in range(choices)]
    list.sort()  # sort() sorts in place and returns None, so it must be a separate statement
    (n, list) = relavate(list)
    keys = []
    sum = max*len(dict)-sum  # was max*len(list)-sum; the inverse-weight total spans the dict
    for key, certainty in dict.iteritems():
        weight = float(max-certainty)/sum
        if n < weight:
            keys.append(key)
            if list: (n, list) = relavate(list)
            else: break
        n = n - weight
    return keys

def relavate(list):
    min = list[0]
    new = [l - min for l in list[1:]]
    return (min, new)

I haven't tried it yet. If you have any comments/suggestions, please don't hesitate. Thanks!

Update 3:

I have been working all day on a task-tailored version of Rex Logan's answer. Instead of 2 arrays of objects and weights, it is actually a special dictionary class, which makes things quite complicated since Rex's code generates a random index... I also coded a test case that resembles what will happen in my algorithm (but I can't really know until I try!). The basic principle is: the more often a key is randomly generated, the more unlikely it is to be generated again:

import random, time
import psyco
psyco.full()

class ProbDict():
    """
    Modified version of Rex Logans RandomObject class. The more a key is randomly
    chosen, the more unlikely it will further be randomly chosen.
    """
    def __init__(self, keys_weights_values={}):
        self._kw = keys_weights_values
        self._keys = self._kw.keys()
        self._len = len(self._keys)
        self._findSeniors()
        self._effort = 0.15
        self._fails = 0
    def __iter__(self):
        return self.next()
    def __getitem__(self, key):
        return self._kw[key]
    def __setitem__(self, key, value):
        self.append(key, value)
    def __len__(self):
        return self._len
    def next(self):
        key = self._key()
        while key:
            yield key
            key = self._key()
    def __contains__(self, key):
        return key in self._kw
    def items(self):
        return self._kw.items()
    def pop(self, key):
        try:
            (w, value) = self._kw.pop(key)
            self._len -= 1
            if w == self._seniorW:
                self._seniors -= 1
                if not self._seniors:
                    #costly but unlikely:
                    self._findSeniors()
            return [w, value]
        except KeyError:
            return None
    def popitem(self):
        return self.pop(self._key())
    def values(self):
        values = []
        for key in self._keys:
            try:
                values.append(self._kw[key][1])
            except KeyError:
                pass
        return values
    def weights(self):
        weights = []
        for key in self._keys:
            try:
                weights.append(self._kw[key][0])
            except KeyError:
                pass
        return weights
    def keys(self, imperfect=False):
        if imperfect: return self._keys
        return self._kw.keys()
    def append(self, key, value=None):
        if key not in self._kw:
            self._len += 1
            self._kw[key] = [0, value]
            self._keys.append(key)
        else:
            self._kw[key][1] = value
    def _key(self):
        for i in range(int(self._effort*self._len)):
            ri = random.randint(0, self._len-1) #choose a random object
            rx = random.uniform(0, self._seniorW)
            rkey = self._keys[ri]
            try:
                w = self._kw[rkey][0]
                if rx >= w: # test to see if that is the value we want
                    w += 1
                    self._warnSeniors(w)
                    self._kw[rkey][0] = w
                    return rkey
            except KeyError:
                self._keys.pop(ri)
        # if you do not find one after 100 tries then just get a random one
        self._fails += 1 #for confirming effectiveness only
        for key in self._keys:
            if key in self._kw:
                w = self._kw[key][0] + 1
                self._warnSeniors(w)
                self._kw[key][0] = w
                return key
        return None
    def _findSeniors(self):
        '''this function finds the seniors, counts them and assesses their age. It
        is costly but unlikely.'''
        seniorW = 0
        seniors = 0
        for w, v in self._kw.itervalues():  # values are [weight, value] pairs; unpack the weight
            if w >= seniorW:
                if w == seniorW:
                    seniors += 1
                else:
                    seniorW = w  # was "seniorsW = w", a typo that left seniorW stuck at 0
                    seniors = 1
        self._seniors = seniors
        self._seniorW = seniorW
    def _warnSeniors(self, w):
        #a weight can only be incremented...good
        if w >= self._seniorW:
            if w == self._seniorW:
                self._seniors += 1
            else:
                self._seniors = 1
                self._seniorW = w

def test():
    #test code
    iterations = 200000
    size = 2500
    nextkey = size

    pd = ProbDict(dict([(i, [0, i]) for i in xrange(size)]))
    start = time.clock()
    for i in xrange(iterations):
        key = pd._key()
        w = pd[key][0]
        if random.randint(0, 1+pd._seniorW-w):
            #the heavier the object, the more unlikely it will be removed
            pd.pop(key)
        probAppend = float(500+(size-len(pd)))/1000
        if random.uniform(0, 1) < probAppend:
            nextkey += 1
            pd.append(nextkey)
    print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations"
    weights = pd.weights()
    weights.sort()
    print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights)
    print weights

test()

Any comments are still welcome. @Darius: your binary tree is too complex and complicated for me, and I don't think its leaves can be removed efficiently... Thanks all.

Best Answer

This ActiveState recipe gives an easy-to-follow approach, specifically the version in the comments that doesn't require you to pre-normalize your weights:

import random

def weighted_choice(items):
    """items is a list of tuples in the form (item, weight)"""
    weight_total = sum((item[1] for item in items))
    n = random.uniform(0, weight_total)
    for item, weight in items:
        if n < weight:
            return item
        n = n - weight
    return item
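As a quick sanity check (a sketch with made-up weights, not part of the original answer), the recipe returns items roughly in proportion to their weights:

```python
import random

def weighted_choice(items):
    """items is a list of tuples in the form (item, weight)"""
    weight_total = sum(item[1] for item in items)
    n = random.uniform(0, weight_total)
    for item, weight in items:
        if n < weight:
            return item
        n = n - weight
    return item

# Hypothetical example: 'three' should come back about 50/72 of the time.
items = [('one', 20), ('two', 2), ('three', 50)]
counts = {k: 0 for k, _ in items}
for _ in range(10000):
    counts[weighted_choice(items)] += 1
print(counts)
```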

This gets slow if you have a large list of items. A binary search would probably be better in that case... but would also be more complicated to write, for little gain if you have a small sample size. Here's an example of the binary search approach in Python if you want to follow that route.

(I'd recommend doing some quick performance testing of both methods on your dataset. The performance of different approaches to this sort of algorithm is often a bit unintuitive.)

Edit: I took my own advice, since I was curious, and did a few tests.

I compared four approaches:

The weighted_choice function above.

A binary-search choice function like so:

import bisect  # needed by both bisect-based functions below

def weighted_choice_bisect(items):
    added_weights = []
    last_sum = 0

    for item, weight in items:
        last_sum += weight
        added_weights.append(last_sum)

    return items[bisect.bisect(added_weights, random.random() * last_sum)][0]

A compiled version of 1:

def weighted_choice_compile(items):
    """returns a function that fetches a random item from items

    items is a list of tuples in the form (item, weight)"""
    weight_total = sum((item[1] for item in items))
    def choice(uniform = random.uniform):
        n = uniform(0, weight_total)
        for item, weight in items:
            if n < weight:
                return item
            n = n - weight
        return item
    return choice

A compiled version of 2:

def weighted_choice_bisect_compile(items):
    """Returns a function that makes a weighted random choice from items."""
    added_weights = []
    last_sum = 0

    for item, weight in items:
        last_sum += weight
        added_weights.append(last_sum)

    def choice(rnd=random.random, bis=bisect.bisect):
        return items[bis(added_weights, rnd() * last_sum)][0]
    return choice

I then built a big list of choices like so:

choices = [(random.choice("abcdefg"), random.uniform(0,50)) for i in xrange(2500)]

And an excessively simple profiling function:

def profiler(f, n, *args, **kwargs):
    start = time.time()
    for i in xrange(n):
        f(*args, **kwargs)
    return time.time() - start

The results:

(Seconds taken for 1,000 calls to the function.)

  • Simple uncompiled: 0.918624162674
  • Binary uncompiled: 1.01497793198
  • Simple compiled: 0.287325024605
  • Binary compiled: 0.00327413797379

The "compiled" results include the average time taken to compile the choice function once. (I timed 1,000 compiles, then divided that time by 1,000, and added the result to the choice function time.)

So: if you have a list of items and weights that change very rarely, the compiled binary method is by far the fastest.
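To make that conclusion concrete, here is a self-contained sketch of the winning approach: compile the cumulative-weight table once, then draw many times cheaply (the example data is made up):

```python
import bisect
import random

def weighted_choice_bisect_compile(items):
    """Returns a function that makes a weighted random choice from items."""
    added_weights = []
    last_sum = 0
    for item, weight in items:
        last_sum += weight
        added_weights.append(last_sum)  # running cumulative weight
    def choice(rnd=random.random, bis=bisect.bisect):
        # Map a uniform draw in [0, total) onto the cumulative table.
        return items[bis(added_weights, rnd() * last_sum)][0]
    return choice

# Hypothetical data: compile once, then draw many times.
items = [('a', 1.0), ('b', 9.0)]
choose = weighted_choice_bisect_compile(items)
draws = [choose() for _ in range(10000)]
print(draws.count('b'))  # 'b' should dominate, roughly 9 draws in 10
```

The trade-off is that the table must be rebuilt whenever a weight changes, which is why this only wins when the item list changes rarely.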

Regarding python - Probability distribution in Python, the original question can be found on Stack Overflow: https://stackoverflow.com/questions/526255/
