
scala - SBT test error: java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream

Reposted. Author: 行者123. Updated: 2023-12-02 03:41:38

When I try to run unit tests against my Spark Streaming code with scalatest from the SBT console, I get the exception below. The test is launched with:

sbt testOnly <<ClassName>>

2018-06-18 02:39:00 ERROR Executor:91 - Exception in task 1.0 in stage 3.0 (TID 11)
java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream.<init>(Ljava/io/InputStream;Z)V
    at org.apache.spark.io.LZ4CompressionCodec.compressedInputStream(CompressionCodec.scala:122)
    at org.apache.spark.serializer.SerializerManager.wrapForCompression(SerializerManager.scala:163)
    at org.apache.spark.serializer.SerializerManager.wrapStream(SerializerManager.scala:124)
    at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50)
    at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:417)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:61)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.sort_addToSorter$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at org.apache.spark.sql.execution.GroupedIterator$.apply(GroupedIterator.scala:29)
    at org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec$StateStoreUpdater.updateStateForKeysWithData(FlatMapGroupsWithStateExec.scala:176)

I have tried several ways of excluding the net.jpountz.lz4 jar (following suggestions from other posts), but the same error keeps appearing in the output.
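When an exclusion does not seem to take effect, it can help to check which jar is actually supplying the class at test time. A small diagnostic like the following (hypothetical helper, not from the original post) prints the code source of a loaded class:

```scala
// ClasspathProbe.scala -- hypothetical diagnostic, not part of the original question.
// Reports which jar (or "bootstrap/unknown" for JDK classes) a class is loaded from.
object ClasspathProbe {
  def sourceOf(className: String): String = {
    val cls = Class.forName(className)
    // getCodeSource is null for classes loaded by the bootstrap class loader
    Option(cls.getProtectionDomain.getCodeSource)
      .map(_.getLocation.toString)
      .getOrElse("bootstrap/unknown")
  }

  def main(args: Array[String]): Unit =
    println(sourceOf("net.jpountz.lz4.LZ4BlockInputStream"))
}
```

Running this inside the test classpath shows whether the class still comes from the old net.jpountz.lz4 jar or from the lz4-java jar Spark expects.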

I am currently on Spark 2.3, scalatest 3.0.5, and Scala 2.11. I only started seeing this problem after upgrading to Spark 2.3 and scalatest 3.0.5.

Any suggestions?

Best answer

Kafka pulls in a dependency that conflicts with Spark's, and that is what caused this problem for me.

This is how you can exclude the dependency in your sbt file:

lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

lazy val kafkaClients = "org.apache.kafka" % "kafka-clients" % userKafkaVersionHere excludeAll excludeJpountz // add more exclusions here

When you use this kafkaClients dependency, it now excludes the problematic lz4 library.
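Putting it together, a minimal build.sbt sketch might look like this. The version numbers and the spark-sql-kafka artifact are illustrative assumptions, not from the original post; adjust them to your build:

```scala
// build.sbt sketch (illustrative versions): exclude the old net.jpountz.lz4 jar
// from every Kafka-related dependency so Spark's own lz4 wins on the classpath.
lazy val excludeJpountz =
  ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % "2.3.0",
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.3.0" excludeAll excludeJpountz,
  "org.apache.kafka" %  "kafka-clients"        % "0.11.0.2" excludeAll excludeJpountz,
  "org.scalatest"    %% "scalatest"            % "3.0.5" % Test
)
```

The exclusion has to be applied on every dependency that transitively drags in net.jpountz.lz4, which is why the rule is defined once and reused.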


Update: This appears to be a problem with Kafka 0.11.x.x and earlier. As of 1.x.x, Kafka seems to have moved away from the problematic net.jpountz.lz4 library, so using a recent Kafka (1.x) together with a recent Spark (2.3.x) should not run into this issue.
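Under that assumption, simply bumping the client dependency avoids the exclusion entirely (the version here is illustrative):

```scala
// build.sbt sketch: Kafka 1.x clients no longer depend on net.jpountz.lz4,
// so no ExclusionRule is needed (version illustrative).
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "1.1.0"
```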

For this question (scala - SBT test error: java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream), a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50907437/
