
python - How to deal with inconsistent results when using Pandas rolling correlation?


Let me say first that reproducing this problem requires a large amount of data, which is itself part of the problem: I cannot predict when this peculiarity will appear. In any case, the data is too large (~13k rows, 2 columns) to paste into the question, so I have added a pastebin link at the end of the post.

Over the past few days I have been running into a strange problem with pandas.core.window.rolling.Rolling.corr. I have a dataset on which I am trying to compute rolling correlations. This is the problem:

While calculating rolling (window_size=100) correlations between two columns (a and b), some indices (one such index is 12981) give near-zero values (of order 1e-10), whereas they should ideally return nan or inf (because all values in one column are constant). However, if I just calculate the standalone correlation pertaining to that index (i.e. the last 100 rows of data including said index), or perform the rolling calculation on a smaller number of rows (e.g. 300 or 1000 as opposed to 13k), I get the correct result (i.e. nan or inf).


Expectation:
>>> df = pd.read_csv('sample_corr_data.csv') # link at the end,  ## columns = ['a', 'b']
>>> df.a.tail(100).value_counts()

0.000000 86
-0.000029 3
0.000029 3
-0.000029 2
0.000029 2
-0.000029 2
0.000029 2
Name: a, dtype: int64

>>> df.b.tail(100).value_counts() # all 100 values are the same

6.0 100
Name: b, dtype: int64

>>> df.a.tail(100).corr(df.b.tail(100))
nan # expected, because column 'b' has same value throughout

# Made sure of this using,
# 1. np.corrcoef, because pandas uses this internally to calculate pearson moments
>>> np.corrcoef(df.a.tail(100), df.b.tail(100))[0, 1]
nan

# 2. using custom function
>>> def pearson(a, b):
...     n = a.size
...     num = n*np.nansum(a*b) - np.nansum(a)*np.nansum(b)
...     den = (n*np.nansum((a**2)) - np.nansum(a)**2)*(n*np.nansum(b**2) - np.nansum(b)**2)
...     return num/np.sqrt(den) if den * np.isfinite(den*num) else np.nan

>>> pearson(df.a.tail(100), df.b.tail(100))
nan
Now, the reality:
>>> df.a.rolling(100).corr(df.b).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 2.755881e-10 # This should have been NaN/inf !!

## Furthermore!!

>>> debug = df.tail(300)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 -inf # Got -inf, fine
dtype: float64

>>> debug = df.tail(3000)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 inf # Got +inf, still acceptable
dtype: float64
This remains the case up to a tail of 9369 rows:
>>> debug = df.tail(9369)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 inf
dtype: float64

# then
>>> debug = df.tail(9370)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 4.719615e-10 # SPOOKY ACTION IN DISTANCE!!!
dtype: float64

>>> debug = df.tail(10000)
>>> debug.a.rolling(100).corr(debug.b).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 1.198994e-10 # SPOOKY ACTION IN DISTANCE!!!
dtype: float64
Current workaround:
>>> df.a.rolling(100).apply(lambda x: x.corr(df.b.reindex(x.index))).tail(3)   # PREDICTABLY, VERY SLOW!

12979 7.761921e-07
12980 5.460717e-07
12981 NaN
Name: a, dtype: float64

# again this checks out using other methods,
>>> df.a.rolling(100).apply(lambda x: np.corrcoef(x, df.b.reindex(x.index))[0, 1]).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 NaN
Name: a, dtype: float64

>>> df.a.rolling(100).apply(lambda x: pearson(x, df.b.reindex(x.index))).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 NaN
Name: a, dtype: float64
As far as I understand, the result of series.rolling(n).corr(other_series) should agree with the following:
>>> def rolling_corr(series, other_series, n=100):
...     return pd.Series(
...         [np.nan]*(n-1) + [series[i-n:i].corr(other_series[i-n:i])
...                           for i in range(n, series.size+1)]
...     )

>>> rolling_corr(df.a, df.b).tail(3)

12979 7.761921e-07
12980 5.460717e-07
12981 NaN
At first I thought this was a floating-point arithmetic issue (because initially, in some cases, I could remedy the situation by rounding column 'a' to 5 decimal places, or by casting to float32), but in that case the problem would be present regardless of the number of samples used. So there is definitely something wrong with rolling, or at least rolling produces floating-point problems that depend on the size of the data. I checked the source code of rolling.corr, but could not find anything that explains this inconsistency. Now I am worried about how much of my past code is affected by this problem.
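For illustration, a rough sketch of the two mitigations mentioned above (rounding 'a' to 5 decimals, casting to float32); whether they help depends on the data:

# Sketch of the attempted mitigations, not a fix:
rounded = df.a.round(5).rolling(100).corr(df.b)
as_f32 = df.a.astype('float32').rolling(100).corr(df.b.astype('float32'))
print(rounded.tail(3))
print(as_f32.tail(3))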
What is the reason behind this? And how can it be dealt with? If this happens because pandas prefers speed over accuracy (as suggested here), does that mean I can never reliably use pandas.rolling operations on large samples? How would I know at what size this inconsistency will appear?

sample_corr_data.csv:https://pastebin.com/jXXHSv3r
Tested on:
  • Windows 10, Python 3.9.1, pandas 1.2.2 (IPython 7.20)
  • Windows 10, Python 3.8.2, pandas 1.0.5 (IPython 7.19)
  • Ubuntu 20.04, Python 3.7.7, pandas 1.0.5 (GCC 7.3.0, standard REPL)
  • CentOS Linux 7 (Core), Python 2.7.5, pandas 0.23.4 (IPython 5.8.0)

  • Note: different operating systems return different values at the index above, but all of them are finite and close to 0.

Best Answer

What if you compute the correlation by replacing the sums in the Pearson formula with rolling sums?


def rolling_pearson(a, b, n):
    a_sum = a.rolling(n).sum()
    b_sum = b.rolling(n).sum()
    ab_sum = (a*b).rolling(n).sum()
    aa_sum = (a**2).rolling(n).sum()
    bb_sum = (b**2).rolling(n).sum()

    num = n * ab_sum - a_sum * b_sum
    den = (n*aa_sum - a_sum**2) * (n*bb_sum - b_sum**2)
    return num / den**0.5

rolling_pearson(df.a, df.b, 100)

             ...
12977    1.109077e-06
12978    9.555249e-07
12979    7.761921e-07
12980    5.460717e-07
12981             inf
Length: 12982, dtype: float64
Why does this happen?
To answer this question I needed to check the implementation. Indeed, the variance of the last 100 samples of b is zero, and the rolling correlation is computed as a.cov(b) / (a.var() * b.var())**0.5.
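As a rough sanity check (a sketch using pandas' rolling cov and var with their default ddof=1, not code from the library itself), the same quantity can be reconstructed explicitly, which makes the role of the tiny-but-nonzero rolling variance of b visible:

n = 100
cov_ab = df.a.rolling(n).cov(df.b)      # rolling covariance of a and b
var_a = df.a.rolling(n).var()           # rolling variance of a
var_b = df.b.rolling(n).var()           # tiny but not exactly zero in the last window
corr_rebuilt = cov_ab / (var_a * var_b) ** 0.5
print(corr_rebuilt.tail(3))             # should closely track df.a.rolling(n).corr(df.b)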
After some searching I found the rolling variance implementation here; the method used is Welford's online algorithm. This algorithm is nice because a sample can be added using only one multiplication (the same as with the cumulative-sums approach), and the update requires only a single division. Here it is rewritten in Python.
def welford_add(existingAggregate, newValue):
    # Aggregate state is the tuple (count, mean, M2); NaN samples leave it unchanged.
    if pd.isna(newValue):
        return existingAggregate
    (count, mean, M2) = existingAggregate
    count += 1
    delta = newValue - mean
    mean += delta / count
    delta2 = newValue - mean
    M2 += delta * delta2
    return (count, mean, M2)

def welford_remove(existingAggregate, newValue):
    if pd.isna(newValue):
        return existingAggregate
    (count, mean, M2) = existingAggregate
    count -= 1
    delta = newValue - mean
    mean -= delta / count
    delta2 = newValue - mean
    M2 -= delta * delta2
    return (count, mean, M2)

def finalize(existingAggregate):
    (count, mean, M2) = existingAggregate
    (mean, variance, sampleVariance) = (mean,
                                        M2 / count if count > 0 else None,
                                        M2 / (count - 1) if count > 1 else None)
    return (mean, variance, sampleVariance)
In the pandas implementation they mention Kahan summation, which is important for obtaining better precision in additions, but the results are not improved by it (I have not checked whether it is implemented correctly).
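For context, a minimal sketch of Kahan (compensated) summation in general; this illustrates the technique, not pandas' internal code:

def kahan_sum(values):
    total = 0.0
    comp = 0.0                      # running compensation for lost low-order bits
    for x in values:
        y = x - comp                # apply the correction from the previous step
        t = total + y               # low-order digits of y may be lost here
        comp = (t - total) - y      # recover what was just lost
        total = t
    return total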
Applying the Welford algorithm with n = 100:
n = 100
s = (0, 0, 0)
for i in range(len(df.b)):
    if i >= n:
        s = welford_remove(s, df.b[i-n])
    s = welford_add(s, df.b[i])
finalize(s)
which gives

(6.000000000000152, 4.7853099260919405e-12, 4.8336463899918594e-12)
df.b.rolling(100).var()

0                 NaN
1                 NaN
2                 NaN
3                 NaN
4                 NaN
             ...
12977    6.206061e-01
12978    4.703030e-01
12979    3.167677e-01
12980    1.600000e-01
12981    6.487273e-12
Name: b, Length: 12982, dtype: float64
This has an error of 6.4e-12, slightly higher than the 4.83e-12 given by applying the Welford method directly.
On the other hand, (df.b**2).rolling(n).sum() - df.b.rolling(n).sum()**2/n gives 0.0 for the last entry:
0          NaN
1          NaN
2          NaN
3          NaN
4          NaN
         ...
12977    61.44
12978    46.56
12979    31.36
12980    15.84
12981     0.00
Name: b, Length: 12982, dtype: float64
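To see why the sum-based form cancels exactly here, a small check (assuming the last window of b really is 100 copies of exactly 6.0):

import numpy as np

b_win = np.full(100, 6.0)                        # assumed constant window
n = len(b_win)
# Both terms are exactly representable (3600.0), so the difference is exactly zero:
print((b_win**2).sum() - b_win.sum()**2 / n)     # 0.0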
I hope this explanation is satisfactory :)

Regarding "python - How to deal with inconsistent results when using Pandas rolling correlation?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/66615107/
