
python - Pyspark S3 Error: java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException


I am failing to set up a Spark cluster that can read files from AWS S3. The software I am using is:

  • hadoop-aws-3.2.0.jar
  • aws-java-sdk-1.11.887.jar
  • spark-3.0.1-bin-hadoop3.2.tgz

  • Python 3.8.6

My code is as follows:
    from pyspark.sql import SparkSession, SQLContext
    from pyspark.sql.types import *
    from pyspark.sql.functions import *
    import sys

    spark = (SparkSession.builder
    .appName("AuthorsAges")
    .appName('SparkCassandraApp')
    .getOrCreate())


    spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
    spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
    spark._jsc.hadoopConfiguration().set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
    spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
    spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
    spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")


    input_file='s3a://spark-test-data/Fire_Department_Calls_for_Service.csv'

    file_schema = StructType([StructField("Call_Number",StringType(),True),
    StructField("Unit_ID",StringType(),True),
    StructField("Incident_Number",StringType(),True),
    ...
    ...
    # Read file into a Spark DataFrame
    input_df = (spark.read.format("csv") \
    .option("header", "true") \
    .schema(file_schema) \
    .load(input_file))
The code fails as soon as spark.read.format is executed. It appears the class cannot be found: java.lang.NoClassDefFoundError: com.amazonaws.services.s3.model.MultiObjectDeleteException

      File "<stdin>", line 1, in <module>
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/readwriter.py", line 178, in load
    return self._df(self._jreader.load(path))
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/utils.py", line 128, in deco
    return f(*a, **kw)
    File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o51.load.
    : java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2532)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2497)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2593)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3269)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
    at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.s3.model.MultiObjectDeleteException
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    I have been trying to find the right combination of the above jars and Python, but I cannot find it. I kept getting various NoClassDefFoundError errors, so I decided to use the latest versions of all the jars and Python listed above, but still with no success.
    I would like to know which versions of the jars and Python you used to successfully set up a cluster that can access S3 via s3a from PySpark. Thanks in advance for your reply/help.

    Best Answer

    Hadoop 3.2 was built against AWS SDK 1.11.563; put the full shaded SDK of that exact version, "aws-java-sdk-bundle", on your classpath and everything should be fine.
    The SDK has always been "temperamental"... upgrades invariably bring surprises. For the curious, see Qualifying an AWS SDK update. It is probably time someone did that exercise again.
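    A minimal sketch of what that might look like in practice, assuming you let Spark resolve the jars itself via spark.jars.packages instead of copying them by hand. The Maven coordinates and the credential placeholders are illustrative, and the bundle version follows the 1.11.563 mentioned above:

    from pyspark.sql import SparkSession

    # Resolve hadoop-aws together with the shaded AWS SDK bundle it was built
    # against, instead of mixing hadoop-aws with a standalone aws-java-sdk jar
    # (that mismatch is what triggers the NoClassDefFoundError above).
    spark = (SparkSession.builder
             .appName("S3ReadTest")
             .config("spark.jars.packages",
                     "org.apache.hadoop:hadoop-aws:3.2.0,"
                     "com.amazonaws:aws-java-sdk-bundle:1.11.563")
             .config("spark.hadoop.fs.s3a.access.key", "access-key")   # placeholder
             .config("spark.hadoop.fs.s3a.secret.key", "secret-key")   # placeholder
             .getOrCreate())

    # Any s3a:// path should now resolve through S3AFileSystem.
    df = (spark.read
          .option("header", "true")
          .csv("s3a://spark-test-data/Fire_Department_Calls_for_Service.csv"))
    df.printSchema()

    The same coordinates can instead be passed with --packages to spark-submit or pyspark; the key point is that the hadoop-aws and aws-java-sdk-bundle versions stay in lockstep with the Hadoop build.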

    Regarding python - Pyspark S3 Error: java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64563127/
