
python - Spark .stdev() Python issue


So I'm trying to do some statistical analysis and I'm stuck: sum works for me, but stdev behaves a bit differently.

Sum works fine, like this:

stats[0] = myData2.map(lambda (Column, values): (sum(values))).collect()

Stdev is formatted differently and does not work:

stats[4] = myData2.map(lambda (Column, values): (values)).stdev()

I get the following error:

TypeError: unsupported operand type(s) for -: 'ResultIterable' and 'float'

Best Answer
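The error comes from what .stdev() is applied to. PySpark's RDD.stdev() expects an RDD of numbers: internally it subtracts the running mean (a float) from every element. The post doesn't show how myData2 was built, but if it came from a groupByKey() (an assumption), each values is a ResultIterable, which produces exactly this TypeError. A minimal sketch of how the error arises under that assumption:

from pyspark import SparkContext

sc = SparkContext("local", "stdev-demo")
# Assumption: myData2 came from groupByKey(), so values are ResultIterables
pairs = sc.parallelize([("a", 1.0), ("a", 2.0), ("b", 3.0), ("b", 5.0)])
myData2 = pairs.groupByKey()
# stdev() subtracts the float mean from each element; the elements here
# are ResultIterables, hence:
# TypeError: unsupported operand type(s) for -: 'ResultIterable' and 'float'
myData2.map(lambda kv: kv[1]).stdev()

Both solutions below avoid this by computing the statistic per key, inside each list of values, rather than across the whole RDD.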

The first solution uses NumPy:

import numpy

# (key, list of values) pairs; sc is the SparkContext from the pyspark shell
data = [(1, [1, 2, 3, 4, 5]), (2, [6, 7, 8, 9]), (3, [1, 3, 5, 7])]
dataRdd = sc.parallelize(data)
dataRdd.mapValues(lambda values: numpy.std(values)).collect()
# Result:
# [(1, 1.4142135623730951), (2, 1.1180339887498949), (3, 2.2360679774997898)]
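A small note beyond the original answer: numpy.std computes the population standard deviation by default (ddof=0), which is what matches the DIY formula below. If the sample standard deviation is wanted instead, pass ddof=1:

# Sample standard deviation (divides by n - 1 instead of n)
dataRdd.mapValues(lambda values: numpy.std(values, ddof=1)).collect()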

The second solution is DIY, which keeps the computation more distributed:

import math

# This time the data is flat (key, value) pairs rather than (key, list)
data = [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 6), (2, 7), (2, 8), (2, 9), (3, 1), (3, 3), (3, 5), (3, 7)]
dataRdd = sc.parallelize(data)
# Generate an RDD of (Key, (Sum, Sum of squares, Count)).
# Note: the tuple-unpacking lambdas below are Python 2 syntax.
dataSumsRdd = dataRdd.aggregateByKey((0.0, 0.0, 0.0),
    lambda (sum, sum2, count), value: (sum + float(value), sum2 + float(value**2), count + 1.0),
    lambda (suma, sum2a, counta), (sumb, sum2b, countb): (suma + sumb, sum2a + sum2b, counta + countb))
# Generate an RDD of (Key, (Count, Average, Std Dev)) via std = sqrt(E[x^2] - E[x]^2)
dataStatsRdd = dataSumsRdd.mapValues(lambda (sum, sum2, count): (count, sum / count, math.sqrt(sum2 / count - (sum / count)**2)))
dataStatsRdd.collect()
# Result:
# [(1, (5.0, 3.0, 1.4142135623730951)), (2, (4.0, 7.5, 1.118033988749895)), (3, (4.0, 4.0, 2.23606797749979))]
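The tuple-unpacking lambdas above are Python 2 only (tuple parameters in function definitions were removed in Python 3 by PEP 3113). A sketch of the same aggregateByKey approach written for Python 3, indexing into the accumulator tuples instead:

import math

dataRdd = sc.parallelize(data)
# Accumulator layout: (sum, sum of squares, count)
dataSumsRdd = dataRdd.aggregateByKey(
    (0.0, 0.0, 0.0),
    lambda acc, v: (acc[0] + float(v), acc[1] + float(v)**2, acc[2] + 1.0),
    lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2]))
dataStatsRdd = dataSumsRdd.mapValues(
    lambda s: (s[2], s[0] / s[2], math.sqrt(s[1] / s[2] - (s[0] / s[2])**2)))
dataStatsRdd.collect()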

Regarding this python - Spark .stdev() question, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/28812912/
