
java - Why can I no longer read data from AWS S3 in a Spark application?


I upgraded to Apache Spark 1.5.1, and I am not sure whether that is what caused this. I pass my access keys in spark-submit, and that has always worked before.

Exception in thread "main" java.lang.NoSuchMethodError: org.jets3t.service.impl.rest.httpclient.RestS3Service.<init>(Lorg/jets3t/service/security/AWSCredentials;)V

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SQLContext sqlContext = new SQLContext(sc);

// Read CSV data from S3 (s3n://) for the path suffix given in args[0], inferring the schema.
DataFrame df = sqlContext.read()
    .format("com.databricks.spark.csv")
    .option("inferSchema", "true")
    .load("s3n://ossem-replication/gdelt_data/event_data/" + args[0]);

// Write the result back out as CSV.
df.write()
    .format("com.databricks.spark.csv")
    .save("/user/spark/ossem_data/gdelt/" + args[0]);

The full error is below. A class on the classpath does not contain that method, which suggests a dependency mismatch: the jets3t jar being loaded does not seem to contain the RestS3Service constructor that takes an AWSCredentials argument ((Lorg/jets3t/service/security/AWSCredentials;)V). Can someone explain this to me?
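One quick way to confirm this kind of mismatch is to check which jets3t jar is actually being loaded at runtime. The following is only a minimal, hypothetical sketch (the class name CheckJets3t is made up for illustration); run it with the same classpath that spark-submit uses:

import java.lang.reflect.Constructor;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;

public class CheckJets3t {
    public static void main(String[] args) {
        // Print the jar that RestS3Service is loaded from at runtime, so its
        // version can be compared with the one Hadoop's
        // Jets3tNativeFileSystemStore was compiled against.
        System.out.println(RestS3Service.class.getProtectionDomain()
                .getCodeSource().getLocation());

        // List the constructors actually available; the failing call expects
        // RestS3Service(org.jets3t.service.security.AWSCredentials).
        for (Constructor<?> c : RestS3Service.class.getConstructors()) {
            System.out.println(c);
        }
    }
}

If the printed jar is a different jets3t version than the one the Hadoop S3 code was built against, the constructor taking AWSCredentials will be missing, which is exactly what this NoSuchMethodError indicates.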

Exception in thread "main" java.lang.NoSuchMethodError: org.jets3t.service.impl.rest.httpclient.RestS3Service.<init>(Lorg/jets3t/service/security/AWSCredentials;)V
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at org.apache.hadoop.fs.s3native.$Proxy24.initialize(Unknown Source)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:272)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1277)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.take(RDD.scala:1272)
at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1312)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.first(RDD.scala:1311)
at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:101)
at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:99)
at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:82)
at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:42)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:74)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:39)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:27)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:104)
at com.bah.ossem.spark.GdeltSpark.main(GdeltSpark.java:20)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)

Best Answer

I ran into the same problem, but with Spark 1.6 and using Scala instead of Java. The error appeared because the Hadoop client version pulled in by Spark Core was 2.2, while the Spark cluster installation I was running was 1.6. I had to make the following changes to get it working.

  1. Change the hadoop-client dependency to 2.6 (the Hadoop version I am running):

    "org.apache.hadoop" % "hadoop-client" % "2.6.0",
  2. Include the hadoop-aws library in my Spark fat jar, since this dependency is no longer bundled with the Hadoop libraries (in Hadoop 2.6 the S3 connector lives in the separate hadoop-aws module):

    "org.apache.hadoop" % "hadoop-aws" % "2.6.0",
  3. Export the AWS access key and secret as environment variables.

  4. Specify the following Hadoop configuration via the SparkContext (a Java sketch of the same settings appears after this list):

    val sparkContext = new SparkContext(sparkConf)
    val hadoopConf = sparkContext.hadoopConfiguration
    hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
    hadoopConf.set("fs.s3.awsAccessKeyId", sys.env.getOrElse("AWS_ACCESS_KEY_ID", ""))
    hadoopConf.set("fs.s3.awsSecretAccessKey", sys.env.getOrElse("AWS_SECRET_ACCESS_KEY", ""))

Regarding "java - Why can I no longer read data from AWS S3 in a Spark application?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/33852044/
