
r - Spark error - Decimal precision 39 exceeds max precision 38


When I try to collect data from a Spark DataFrame, I get the following error:

"java.lang.IllegalArgumentException: requirement failed: Decimal precision 39 exceeds max precision 38".



All of the data in the Spark DataFrame comes from an Oracle database, where I believe the decimal precision is less than 38. Is there any way to collect the data without modifying it?
# Load the required table from the Oracle database via JDBC (SparkR)
df <- loadDF(sqlContext, source = "jdbc",
             url = "jdbc:oracle:thin:usr/pass@url.com:1521",
             dbtable = "TBL_NM")

# Keep only rows after the cutoff date
RawData <- df %>%
  filter(DT_Column > DATE('2015-01-01'))

# Collecting into a local R data.frame is what triggers the error
RawData <- as.data.frame(RawData)

This is what produces the error.

Here is the stack trace:

WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 10...***, executor 0): java.lang.IllegalArgumentException: requirement failed: Decimal precision 39 exceeds max precision 38
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113)
    at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:426)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$nullSafeConvert(JdbcUtils.scala:438)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:335)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)



Please suggest any solutions. Thank you.

Best Answer

Solved this with AWS Glue and Postgres. A bug fixed in Spark 2.1.0 resolved this for most people, but someone posted a workaround in the comments about using the customSchema option.
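
A minimal sketch of that workaround in SparkR, assuming Spark 2.3 or later (where the JDBC reader accepts a customSchema option); AMT_COL is a hypothetical stand-in for the column whose precision overflows DECIMAL(38):

# Sketch only: pin the problem column to a precision Spark can
# represent instead of letting it infer DECIMAL(39) from Oracle.
# AMT_COL is a placeholder column name; adjust to your table.
df <- loadDF(sqlContext, source = "jdbc",
             url = "jdbc:oracle:thin:usr/pass@url.com:1521",
             dbtable = "TBL_NM",
             customSchema = "AMT_COL DECIMAL(38, 10)")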
I hit a similar issue with AWS Glue and Spark SQL: I was computing a currency amount, so the result was a float. Glue threw the error Decimal precision 1 exceeds max precision -1 even though the Glue Data Catalog defined the column as a decimal. Taking a page from the customSchema solution above, I explicitly cast the column to NUMERIC(10,2), and Spark stopped complaining.
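
For illustration, an explicit cast of that kind could be issued from R through Spark SQL; this is a sketch only, tbl_nm and amount are hypothetical names, and Spark SQL spells the type DECIMAL(10, 2):

# Register the DataFrame and re-read the column with an explicit type,
# mirroring the NUMERIC(10,2) cast described above. Names are placeholders.
registerTempTable(df, "tbl_nm")
fixed <- sql(sqlContext, "SELECT CAST(amount AS DECIMAL(10, 2)) AS amount FROM tbl_nm")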

A similar question on this error (Spark error - Decimal precision 39 exceeds max precision 38) can be found on Stack Overflow: https://stackoverflow.com/questions/44130460/
