java - IllegalAccessError with a dynamic class loader

I created a custom ParquetOutputFormat (a class in org.apache.parquet.hadoop) to override the getRecordWriter method. Inside getRecordWriter it accesses CodecFactory, which causes an IllegalAccessError. To try to fix this, I attempted to create my own class loader, but that did not help. I followed this blog post: http://techblog.applift.com/upgrading-spark#advanced-case-parquet-writer

Before creating the custom class loader, I used CustomParquetOutputFormat as follows:

override def createOutputFormat: OutputFormat[Void, InternalRow] with Ext = new CustomParquetOutputFormat[InternalRow]() with Ext {
...
}

The problem happens when getRecordWriter is called and CustomParquetOutputFormat tries to access CodecFactory at line 274:

  CodecFactory codecFactory = new CodecFactory(conf);

(This is line 274 of ParquetOutputFormat, which CustomParquetOutputFormat accesses.)

CodecFactory is package-private.
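
(For context, this is why the custom subclass is declared inside org.apache.parquet.hadoop in the first place. Below is a minimal sketch, with a hypothetical PackageLocalProbe class, of how same-package code reaches the package-private constructor; it compiles fine, but the JVM re-checks the access at run time:)

// Minimal sketch; PackageLocalProbe is hypothetical. Being declared in the
// same package is what lets this compile against the package-private class.
package org.apache.parquet.hadoop

import org.apache.hadoop.conf.Configuration

class PackageLocalProbe(conf: Configuration) {
  // Compiles because we are inside org.apache.parquet.hadoop; at run time
  // the JVM only allows it if both classes share a *runtime* package,
  // i.e. the same package name AND the same defining class loader.
  val codecFactory = new CodecFactory(conf)
}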

The custom class loader:

class CustomClassLoader(urls: Array[URL], parent: ClassLoader, whiteList: List[String])
  extends ChildFirstURLClassLoader(urls, parent) {

  override def loadClass(name: String) = {
    if (whiteList.exists(name.startsWith)) {
      // Whitelisted classes are loaded child-first from the given URLs.
      super.loadClass(name)
    } else {
      // Everything else is delegated to the parent loader.
      parent.loadClass(name)
    }
  }
}

Usage:

val sc: SparkContext = SparkContext.getOrCreate()
val cl: CustomClassLoader = new CustomClassLoader(sc.jars.map(new URL(_)).toArray,
  Thread.currentThread.getContextClassLoader, List(
    "org.apache.parquet.hadoop.CustomParquetOutputFormat",
    "org.apache.parquet.hadoop.CodecFactory",
    "org.apache.parquet.hadoop.ParquetFileWriter",
    "org.apache.parquet.hadoop.ParquetRecordWriter",
    "org.apache.parquet.hadoop.InternalParquetRecordWriter",
    "org.apache.parquet.hadoop.ColumnChunkPageWriteStore",
    "org.apache.parquet.hadoop.MemoryManager"
  ))


cl.loadClass("org.apache.parquet.hadoop.CustomParquetOutputFormat")
  .getConstructor(classOf[String], classOf[TaskAttemptContext])
  .newInstance(fullPathWithoutExt, taskAttemptContext)
  .asInstanceOf[OutputFormat[Void, InternalRow] with ProvidesExtension]
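
(A hedged diagnostic sketch, not from the original post, to sanity-check this setup: resolve CodecFactory through the loader that actually defined the custom output format and compare the defining loaders, since the access check only passes when they match. Note also that the stack trace below comes from an executor thread, where classes are typically resolved by the executor's own loader rather than one constructed on the driver:)

val fmtClass = cl.loadClass("org.apache.parquet.hadoop.CustomParquetOutputFormat")
// Resolve CodecFactory the way the JVM would: through the defining loader
// of the class that references it.
val codecClass = Class.forName(
  "org.apache.parquet.hadoop.CodecFactory", false, fmtClass.getClassLoader)
println(fmtClass.getClassLoader eq codecClass.getClassLoader) // must be true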

The error:

java.lang.IllegalAccessError: tried to access class org.apache.parquet.hadoop.CodecFactory from class org.apache.parquet.hadoop.customParquetOutputFormat
at org.apache.parquet.hadoop.CustomParquetOutputFormat.getRecordWriter(CustomParquetOutputFormat.scala:40)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:262)
at org.apache.spark.custom.hadoop.HadoopWriter.<init>(HadoopWriter.scala:35)
at org.apache.spark.sql.execution.datasources.parquet.ParquetWriter.<init>(ParquetWriter.scala:16)
at org.apache.spark.sql.execution.datasources.parquet.ParquetWriterFactory.createWriter(ParquetWriterFactory.scala:71)
at com.abden.custom.index.IndexBuilder$$anonfun$4.apply(IndexBuilder.scala:55)
at com.abden.custom.index.IndexBuilder$$anonfun$4.apply(IndexBuilder.scala:54)
at scala.collection.immutable.Stream.map(Stream.scala:418)
at com.abden.custom.index.IndexBuilder.generateTiles(IndexBuilder.scala:54)
at com.abden.custom.index.IndexBuilder.generateLayer(IndexBuilder.scala:155)
at com.abden.custom.index.IndexBuilder.appendLayer(IndexBuilder.scala:184)
at com.abden.custom.index.IndexBuilder$$anonfun$appendLayers$1$$anonfun$apply$1.apply(IndexBuilder.scala:213)
at com.abden.custom.index.IndexBuilder$$anonfun$appendLayers$1$$anonfun$apply$1.apply(IndexBuilder.scala:210)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at com.abden.custom.util.SplittingByKeyIterator.foreach(SplittingByKeyIterator.scala:3)
at com.abden.custom.index.IndexBuilder$$anonfun$appendLayers$1.apply(IndexBuilder.scala:210)
at com.abden.custom.index.IndexBuilder$$anonfun$appendLayers$1.apply(IndexBuilder.scala:209)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

The error occurs at this line in getRecordWriter:

val codecFactory = new CodecFactory(conf)

CodecFactory has no access modifier, so it is restricted to its package. Even with the dynamic class loader loading all of the classes through the same class loader, I still get the IllegalAccessError.
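
(The JVM rule in play, as a self-contained sketch with a hypothetical jar path and class names: package-private access requires both classes to share a runtime package, meaning the same package name and the same defining class loader, so two loaders over the same jar still produce mutually inaccessible copies:)

import java.net.{URL, URLClassLoader}

// Hypothetical jar containing com.example.A and com.example.B, where A
// touches a package-private member of B.
val jar = new URL("file:/tmp/example.jar")
val loader1 = new URLClassLoader(Array(jar), null)
val loader2 = new URLClassLoader(Array(jar), null)

val a = loader1.loadClass("com.example.A")
val b = loader2.loadClass("com.example.B")

// Same package name, different runtime packages: if A were ever resolved
// against this copy of B, the package-private access would fail with
// IllegalAccessError, exactly as in the question.
println(a.getClassLoader eq b.getClassLoader) // false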

Best Answer

So what you are trying to do is to break how Java works! You want to access a package-private class from outside its package by implementing your own class loader, which would break the JVM's protection rules (so you want to break the Java Language Specification!).

My answer is simple: don't do it!

If it is package-private, you cannot access it. Period!

I think the best thing for you to do is to consider what functionality you actually need and implement it with the current API, rather than trying to force your way in. So instead of asking how to pull off some technical hack, it would be better to explain what you want to do (and why you want to implement your own getRecordWriter method).

I have already given an answer in this SO question on how to read/write parquet files in plain Java: Write Parquet format to HDFS using Java API with out using Avro and MR
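
(For reference, a minimal sketch along the lines of that answer, using only public API from parquet-hadoop's example module; the schema and output path here are hypothetical:)

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.example.data.simple.SimpleGroupFactory
import org.apache.parquet.hadoop.example.ExampleParquetWriter
import org.apache.parquet.schema.MessageTypeParser

// Hypothetical schema and path; ExampleParquetWriter is public API.
val schema = MessageTypeParser.parseMessageType(
  "message example { required int32 id; required binary name (UTF8); }")

val writer = ExampleParquetWriter.builder(new Path("hdfs:///tmp/out.parquet"))
  .withConf(new Configuration())
  .withType(schema)
  .build()

val groups = new SimpleGroupFactory(schema)
writer.write(groups.newGroup().append("id", 1).append("name", "alice"))
writer.close()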

Regards,

Loïc

Regarding "java - IllegalAccessError with a dynamic class loader", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40973000/
