
Python float silent overflow / precision error


Stack Overflow is full of threads about floating-point representation, about exceptions, truncation and precision issues. I have been trying to make sense of this one, but I still haven't figured it out.

from operator import add, sub, mul, div

fun = 'add(add(sub(sub(safeDiv(xL1, add(D, mul(sub(add(sub(A, 0), mul(D, rv)), mul(xL1, add(D, 3))), xL1))), add(0, rv)), safeDiv(mul(sub(D, sigma2), safeDiv(sub(safeDiv(xL1, A), 1), add(safeDiv(safeDiv(B1, xL1), sub(4, xL2)), add(sigma1, xL1)))), sigma1)), add(4, B1)), add(add(A, A), sub(add(xL1, xL1), mul(xL2, safeDiv(xL1, add(sub(add(mul(D, -4), add(add(safeDiv(mul(sigma2, sigma2), safeDiv(B1, sigma1)), sub(add(D, safeDiv(xL2, B1)), D)), sub(4, B1))), A), add(mul(sigma2, xL1), mul(xL1, mul(rv, xL2)))))))))'

d = [(
51.696521954140991,
31.156112806482234,
54.629863633907163,
27.491618858013698,
26.223584534107289,
77.10005191617563,
2708.4145268939428,
0.20952943771134946,
15.558278150405643,
102.0,
225.0)]

arglabels = ['xL1', 'sigma1', 'xL2', 'sigma2', 'A', 'B1', 'D', 'rv']
other = {'add': add, 'sub':sub, 'mul':mul,'safeDiv':div}
inputs = dict(zip(arglabels, d[0][: -4] + (d[0][-3]*d[0][-4],)))
inputs.update(other)
print eval(fun, inputs)

This code should produce a result between 225 and 240, but it returns a negative number. That's it: no exception, no warning, nothing. So there must be a precision error somewhere that throws the result completely off.

The most rounding I can apply and still get a sensible result is one decimal place (that gets me close to 207...). numpy longdoubles help in some cases, but not enough. I have done the calculation by hand (with that considerable loss of precision) and got 240.

One more detail: when I run this in a notebook together with the main script, I get this behaviour (shown in a screenshot in the original question):

The first time I add the locals dictionary it returns a perfectly reasonable result, but after that it goes back to the negative value. Some import must be affecting this, but I cannot find it either.

What should I do to avoid this? How can I generate some kind of warning? How do I track down where it goes wrong?
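
One way to rule floating-point rounding in or out is to re-evaluate the very same expression with exact rational arithmetic and compare. A minimal sketch (Python 2, like the snippet above; it assumes fun, d and arglabels from the code above are already defined):

from fractions import Fraction
from operator import add, sub, mul, div

# Same argument tuple as above, but converted to exact rationals.
exact_args = tuple(Fraction(x) for x in d[0][:-4]) + (Fraction(d[0][-3]) * Fraction(d[0][-4]),)
exact_inputs = dict(zip(arglabels, exact_args))
exact_inputs.update({'add': add, 'sub': sub, 'mul': mul, 'safeDiv': div})

exact = eval(fun, exact_inputs)   # a Fraction: no rounding anywhere
print float(exact)                # compare with the double-precision result

If the exact value matches the float result, rounding is not the problem and the issue lies in the expression or in the input values; if it differs noticeably, rounding really is to blame.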

Edit: the accepted answer correctly identifies the problem; see the comments under the answer for more details. However, it does not discuss how to avoid it or how to correct the function. Perhaps that should be a discussion for MathOverflow...

Best Answer

If the expected result is in the 225...240 range, then one of the following problems is likely:

  • an error in generating fun
  • incorrect values in d
  • the add, sub, mul and safeDiv functions are supposed to do something more complicated than plain floating-point addition, subtraction, multiplication and division.

With the inputs given in the question, nothing other than -2786.17215265 can come out, because it is a perfectly fine numerical result. There is no floating-point rounding error or overflow. Below is the output of a test script with verbose versions of the arithmetic functions, so every floating-point operation is listed explicitly. None of them produces a noticeable rounding error. There is one risky operation, the subtraction of two close values:

add: -10626.8589858 + 10627.794501 = 0.935515251547

Even so, it is still nowhere near the level of a rounding error.
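
As a rough sanity check (a back-of-the-envelope sketch), the number of significant decimal digits lost in that subtraction can be estimated as log10(|a| / |a + b|), which here is only about 4 of the roughly 15-16 digits a double carries:

import math

a = -10626.8589858    # operands as printed above (rounded for display)
b = 10627.794501
print "digits lost to cancellation: %.1f" % math.log10(abs(a) / abs(a + b))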

The same result can be obtained with a C program or with a math tool (MATLAB/Octave).


The different outputs in the screenshot are due to the values of local variables that are not shown here. Since Out[108] and Out[110] are the same, I assume that d is sdata[17]. The local variables are used for Out[109] and Out[110], so the difference lies in the value of rv, because that is the only variable changed in In[110]. With all the other variables fixed, Out[109] (230.62977145178198) can be obtained if rv equals one of the following values: 4.0751641474, 4.4857099225 or 1.72476610. This is also shown by the last lines of the test script below.

Note that if fun is analysed as a function of rv, it has two poles (at roughly 3.35 and 1.7). So, technically, the function can give results anywhere from minus infinity to plus infinity.
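
To see this concretely, fun can be evaluated as a function of rv alone, with every other variable fixed to the question's values. A small sketch, again assuming fun, arglabels and d from above are defined (the plain operator functions are used here so that nothing is printed per step):

from operator import add, sub, mul, div

base = dict(zip(arglabels, d[0][:-4] + (d[0][-3] * d[0][-4],)))
base.update({'add': add, 'sub': sub, 'mul': mul, 'safeDiv': div})

def f_of_rv(r):
    ns = dict(base)
    ns['rv'] = r          # override only rv, keep everything else fixed
    return eval(fun, ns)

print f_of_rv(d[0][-3] * d[0][-4])        # the rv implied by the question's inputs
print f_of_rv(4.0751641470166795256)      # one of the rv values that reproduces Out[109]
for r in (1.6, 1.8, 3.3, 3.4):            # sample points on either side of the suspected poles
    print r, f_of_rv(r)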


Test

from operator import add, sub, mul, div

fun = 'add(add(sub(sub(safeDiv(xL1, add(D, mul(sub(add(sub(A, 0), mul(D, rv)), mul(xL1, add(D, 3))), xL1))), add(0, rv)), safeDiv(mul(sub(D, sigma2), safeDiv(sub(safeDiv(xL1, A), 1), add(safeDiv(safeDiv(B1, xL1), sub(4, xL2)), add(sigma1, xL1)))), sigma1)), add(4, B1)), add(add(A, A), sub(add(xL1, xL1), mul(xL2, safeDiv(xL1, add(sub(add(mul(D, -4), add(add(safeDiv(mul(sigma2, sigma2), safeDiv(B1, sigma1)), sub(add(D, safeDiv(xL2, B1)), D)), sub(4, B1))), A), add(mul(sigma2, xL1), mul(xL1, mul(rv, xL2)))))))))'

d = [(
51.696521954140991,
31.156112806482234,
54.629863633907163,
27.491618858013698,
26.223584534107289,
77.10005191617563,
2708.4145268939428,
0.20952943771134946,
15.558278150405643,
102.0,
225.0)]

def add_verbose(a, b):
    res = a + b
    print "add: {0} + {1} = {2}".format(a, b, res)
    return res

def sub_verbose(a, b):
    res = a - b
    print "sub: {0} - {1} = {2}".format(a, b, res)
    return res

def div_verbose(a, b):
    res = a / b
    print "div: {0} / {1} = {2}".format(a, b, res)
    return res

def mul_verbose(a, b):
    res = a * b
    print "mul: {0} * {1} = {2}".format(a, b, res)
    return res

arglabels = ['xL1', 'sigma1', 'xL2', 'sigma2', 'A', 'B1', 'D', 'rv']
other = {'add': add_verbose, 'sub':sub_verbose, 'mul':mul_verbose,'safeDiv':div_verbose}
inputs = dict(zip(arglabels, d[0][: -4] + (d[0][-3]*d[0][-4],)))
inputs.update(other)

# Out[108]
print eval(fun, inputs)

# set locals that can give Out[109]
safeDiv = div
rv = 4.0751641470166795256
inputs.update(locals())

# Out[109]
print eval(fun, inputs)

Output

sub: 26.2235845341 - 0 = 26.2235845341
mul: 2708.41452689 * 3.25991727261 = 8829.20729761
add: 26.2235845341 + 8829.20729761 = 8855.43088215
add: 2708.41452689 + 3 = 2711.41452689
mul: 51.6965219541 * 2711.41452689 = 140170.700616
sub: 8855.43088215 - 140170.700616 = -131315.269734
mul: -131315.269734 * 51.6965219541 = -6788542.72473
add: 2708.41452689 + -6788542.72473 = -6785834.3102
div: 51.6965219541 / -6785834.3102 = -7.61830006318e-06
add: 0 + 3.25991727261 = 3.25991727261
sub: -7.61830006318e-06 - 3.25991727261 = -3.25992489091
sub: 2708.41452689 - 27.491618858 = 2680.92290804
div: 51.6965219541 / 26.2235845341 = 1.97137511414
sub: 1.97137511414 - 1 = 0.971375114142
div: 77.1000519162 / 51.6965219541 = 1.49139727397
sub: 4 - 54.6298636339 = -50.6298636339
div: 1.49139727397 / -50.6298636339 = -0.029456869265
add: 31.1561128065 + 51.6965219541 = 82.8526347606
add: -0.029456869265 + 82.8526347606 = 82.8231778914
div: 0.971375114142 / 82.8231778914 = 0.0117283004453
mul: 2680.92290804 * 0.0117283004453 = 31.4426693361
div: 31.4426693361 / 31.1561128065 = 1.00919744165
sub: -3.25992489091 - 1.00919744165 = -4.26912233256
add: 4 + 77.1000519162 = 81.1000519162
add: -4.26912233256 + 81.1000519162 = 76.8309295836
add: 26.2235845341 + 26.2235845341 = 52.4471690682
add: 51.6965219541 + 51.6965219541 = 103.393043908
mul: 2708.41452689 * -4 = -10833.6581076
mul: 27.491618858 * 27.491618858 = 755.789107434
div: 77.1000519162 / 31.1561128065 = 2.47463643475
div: 755.789107434 / 2.47463643475 = 305.414200171
div: 54.6298636339 / 77.1000519162 = 0.708558065477
add: 2708.41452689 + 0.708558065477 = 2709.12308496
sub: 2709.12308496 - 2708.41452689 = 0.708558065477
add: 305.414200171 + 0.708558065477 = 306.122758237
sub: 4 - 77.1000519162 = -73.1000519162
add: 306.122758237 + -73.1000519162 = 233.02270632
add: -10833.6581076 + 233.02270632 = -10600.6354013
sub: -10600.6354013 - 26.2235845341 = -10626.8589858
mul: 27.491618858 * 51.6965219541 = 1421.22107785
mul: 3.25991727261 * 54.6298636339 = 178.088836061
mul: 51.6965219541 * 178.088836061 = 9206.57342319
add: 1421.22107785 + 9206.57342319 = 10627.794501
add: -10626.8589858 + 10627.794501 = 0.935515251547
div: 51.6965219541 / 0.935515251547 = 55.2599456488
mul: 54.6298636339 * 55.2599456488 = 3018.84329521
sub: 103.393043908 - 3018.84329521 = -2915.4502513
add: 52.4471690682 + -2915.4502513 = -2863.00308224
add: 76.8309295836 + -2863.00308224 = -2786.17215265
-2786.17215265
230.629771452

This Python float silent overflow / precision error question is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/33380164/
