
apache-spark - Different floating-point precision with RDD and DataFrame


I converted an RDD to a DataFrame and compared the result with another DataFrame that I imported using read.csv, but the floating-point precision of the two approaches differs. I would appreciate your help.

The data I am using comes from here.

from pyspark.sql import Row
from pyspark.sql.types import *

The RDD approach
# Keep only CLOSED and COMPLETE orders, keyed by order_id with the order date.
orders = sc.textFile("retail_db/orders")
order_items = sc.textFile("retail_db/order_items")

orders_comp = orders.filter(
    lambda line: line.split(',')[-1] in ('CLOSED', 'COMPLETE'))
orders_compMap = orders_comp.map(
    lambda line: (int(line.split(',')[0]), line.split(',')[1]))

# order_id -> (product_id, sub_total); float() produces 64-bit Python floats.
order_itemsMap = order_items.map(
    lambda line: (int(line.split(',')[1]),
                  (int(line.split(',')[2]), float(line.split(',')[4]))))

# Join on order_id, re-key by (date, product_id), and sum the subtotals.
joined = orders_compMap.join(order_itemsMap)
joined2 = joined.map(lambda line: ((line[1][0], line[1][1][0]), line[1][1][1]))
joined3 = joined2.reduceByKey(lambda a, b: a + b).sortByKey()

df1 = joined3.map(
    lambda x: Row(date=x[0][0], product_id=x[0][1], total=x[1])
).toDF().select(['date', 'product_id', 'total'])
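
As a quick check (an added sketch, not part of the original post): toDF() infers Spark types from the Python objects, so the 64-bit floats produced by float() should come through as DoubleType:

df1.printSchema()
# Expected output (Python int -> long, Python float -> double):
# root
#  |-- date: string (nullable = true)
#  |-- product_id: long (nullable = true)
#  |-- total: double (nullable = true)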

The DataFrame approach
# Explicit schema for the orders file.
schema = StructType([StructField('order_id', IntegerType(), True),
                     StructField('date', StringType(), True),
                     StructField('customer_id', StringType(), True),
                     StructField('status', StringType(), True)])

orders2 = spark.read.csv("retail_db/orders", schema=schema)

# Explicit schema for the order_items file; note that sub_total is declared
# as a 32-bit FloatType.
schema = StructType([StructField('item_id', IntegerType(), True),
                     StructField('order_id', IntegerType(), True),
                     StructField('product_id', IntegerType(), True),
                     StructField('quantity', StringType(), True),
                     StructField('sub_total', FloatType(), True),
                     StructField('product_price', FloatType(), True)])

orders_items2 = spark.read.csv("retail_db/order_items", schema=schema)
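
For comparison (again an added sketch), this explicit schema reads sub_total as a 32-bit float, which turns out to be the root of the discrepancy:

orders_items2.printSchema()
# Expected output:
# root
#  |-- item_id: integer (nullable = true)
#  |-- order_id: integer (nullable = true)
#  |-- product_id: integer (nullable = true)
#  |-- quantity: string (nullable = true)
#  |-- sub_total: float (nullable = true)
#  |-- product_price: float (nullable = true)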

orders2.registerTempTable("orders2t")
orders_items2.registerTempTable("orders_items2t")

df2 = spark.sql('select o.date, oi.product_id, sum(oi.sub_total) as total '
                'from orders2t as o inner join orders_items2t as oi '
                'on o.order_id = oi.order_id '
                'where o.status in ("CLOSED", "COMPLETE") '
                'group by o.date, oi.product_id '
                'order by o.date, oi.product_id')

Are they the same?
df1.registerTempTable("df1t")
df2.registerTempTable("df2t")

spark.sql("select d1.total - d2.total as difference from df1t as d1 inner
join df2t as d2 on d1.date = d2.date \
and d1.product_id =d2.product_id ").show(truncate = False)

[Screenshot of the query output: the difference column contains small non-zero values.]

Best Answer

Ignoring the precision lost in the conversions, they are not the same:

  • Python

    According to Python's Floating Point Arithmetic: Issues and Limitations, the standard implementation uses a 64-bit representation:

    Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, […]

  • Spark SQL

    Spark SQL's FloatType uses a 32-bit representation:

    FloatType: Represents 4-byte single-precision floating point numbers.


  • Using DoubleType may get you closer:

    DoubleType: Represents 8-byte double-precision floating point numbers.

    But if predictable behavior is important, you should use DecimalType, which has a well-defined precision; see the sketch after this list.
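
A minimal, self-contained sketch of the difference (the literal 299.98 is just an example value, not taken from the poster's results); casting the 32-bit value back to double exposes what it actually stores:

from pyspark.sql import SparkSession
from pyspark.sql.types import DecimalType

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([("299.98",)], ["raw"])
df.select(
    df.raw.cast("float").cast("double").alias("float_then_double"),
    df.raw.cast("double").alias("as_double"),
    df.raw.cast(DecimalType(10, 2)).alias("as_decimal"),
).show(truncate=False)
# float_then_double shows the value the 32-bit float actually stores, which
# differs from 299.98 in the low decimal places; summing many such values is
# what makes the FloatType totals drift away from the double-based ones.
# decimal(10, 2) keeps exactly two fractional digits.

Applied to the question, the fix is to declare sub_total with DoubleType() or DecimalType(10, 2) in the schema instead of FloatType().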

This question and answer, apache-spark - Different floating-point precision with RDD and DataFrame, come from Stack Overflow: https://stackoverflow.com/questions/48465055/
