
hadoop - native snappy library not available: this version of libhadoop was built without snappy support

Reposted · Author: 可可西里 · Updated: 2023-11-01 16:01:24

I ran into the above error while using MLUtils.saveAsLibSVMFile. I tried the various settings below, but none of them had any effect.

    /*
    conf.set("spark.io.compression.codec", "org.apache.spark.io.LZFCompressionCodec")
    */

    /*
    conf.set("spark.executor.extraClassPath", "/usr/hdp/current/hadoop-client/lib/snappy-java-*.jar")
    conf.set("spark.driver.extraClassPath", "/usr/hdp/current/hadoop-client/lib/snappy-java-*.jar")

    conf.set("spark.executor.extraLibraryPath", "/usr/hdp/2.3.4.0-3485/hadoop/lib/native")
    conf.set("spark.driver.extraLibraryPath", "/usr/hdp/2.3.4.0-3485/hadoop/lib/native")
    */
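For context, here is a minimal sketch of the kind of job that hit the error. The toy dataset, app name, and output path are my own assumptions for illustration, not the original code:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.spark.mllib.util.MLUtils

    val conf = new SparkConf().setAppName("SaveAsLibSVMExample") // hypothetical app name
    val sc = new SparkContext(conf)

    // Hypothetical toy dataset standing in for the real RDD[LabeledPoint].
    val data = sc.parallelize(Seq(
      LabeledPoint(1.0, Vectors.dense(0.5, 1.5)),
      LabeledPoint(0.0, Vectors.dense(2.0, 0.1))
    ))

    // The write below is where the "built without snappy support" error surfaced.
    MLUtils.saveAsLibSVMFile(data, "/tmp/libsvm-out") // hypothetical output path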

I read the following link: https://community.hortonworks.com/questions/18903/this-version-of-libhadoop-was-built-without-snappy.html

In the end, only two approaches solved it for me. They are given in the answer below.

Best Answer

  1. One way is to use a different Hadoop compression codec, as shown below (here CompressionType is org.apache.hadoop.io.SequenceFile.CompressionType):

    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "true")
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.BLOCK.toString)
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
    sc.hadoopConfiguration.set("mapreduce.map.output.compress", "true")
    sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")

  2. The second way is to pass --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ as an argument to my spark-submit job (on the command line), for example as sketched below.
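A sketch of such a spark-submit invocation; the main class and jar name are hypothetical, and the HDP version placeholder must be replaced with the cluster's actual version:

    # Hypothetical class and jar names; keep your usual arguments otherwise.
    spark-submit \
      --class com.example.SaveLibSVM \
      --master yarn \
      --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ \
      my-app.jar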

Regarding "hadoop - native snappy library not available: this version of libhadoop was built without snappy support", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38714702/
