maven - Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.CanSetDropBehind issue in Eclipse


I have the following Spark word count program:

    package com.sample.spark;

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.*;
    import org.apache.spark.api.java.function.FlatMapFunction;
    import org.apache.spark.api.java.function.Function;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.api.java.function.PairFlatMapFunction;
    import org.apache.spark.api.java.function.PairFunction;

    import scala.Tuple2;

    public class SparkWordCount {

        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("wordcountspark")
                    .setMaster("local")
                    .setSparkHome("/Users/hadoop/spark-1.4.0-bin-hadoop1");
            JavaSparkContext sc = new JavaSparkContext(conf);
            //SparkConf conf = new SparkConf();
            //JavaSparkContext sc = new JavaSparkContext("hdfs", "Simple App", "/Users/hadoop/spark-1.4.0-bin-hadoop1", new String[]{"target/simple-project-1.0.jar"});

            // Read the input file from HDFS and split each line into words.
            JavaRDD<String> textFile = sc.textFile("hdfs://localhost:54310/data/wordcount");
            JavaRDD<String> words = textFile.flatMap(new FlatMapFunction<String, String>() {
                public Iterable<String> call(String s) {
                    return Arrays.asList(s.split(" "));
                }
            });

            // Map each word to a (word, 1) pair.
            JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
                public Tuple2<String, Integer> call(String s) {
                    return new Tuple2<String, Integer>(s, 1);
                }
            });

            // Sum the counts per word and write the result back to HDFS.
            JavaPairRDD<String, Integer> counts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
                public Integer call(Integer a, Integer b) {
                    return a + b;
                }
            });
            counts.saveAsTextFile("hdfs://localhost:54310/data/output/spark/outfile");
        }
    }

When I run the code from Eclipse I get the exception Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.CanSetDropBehind, but it runs fine if I export it as a runnable jar and launch it from the terminal like this:

      bin/spark-submit --class com.sample.spark.SparkWordCount --master local  /Users/hadoop/spark-1.4.0-bin-hadoop1/finalJars/SparkJar-v2.jar

The Maven pom looks like this:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.sample.spark</groupId>
        <artifactId>SparkRags</artifactId>
        <packaging>jar</packaging>
        <version>1.0-SNAPSHOT</version>
        <name>SparkRags</name>
        <url>http://maven.apache.org</url>
        <dependencies>
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>3.8.1</version>
                <scope>test</scope>
            </dependency>
            <!-- Spark dependency -->
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.10</artifactId>
                <version>1.4.0</version>
                <scope>compile</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-common</artifactId>
                <version>0.23.11</version>
                <scope>compile</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-core</artifactId>
                <version>1.2.1</version>
                <scope>compile</scope>
            </dependency>
        </dependencies>
    </project>

Best Answer

When you run inside Eclipse, the referenced jars are the only source your program runs from. So, for some reason, the hadoop-core jar (which is where CanSetDropBehind lives) was not added correctly to your Eclipse build path from the local repository. You need to determine whether this is caused by a proxy issue or some other problem with the pom.xml.
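
A quick way to check what Maven actually resolved (a diagnostic suggestion of mine, not part of the original answer) is to print the dependency tree, filtered to the Hadoop artifacts:

    mvn dependency:tree -Dincludes=org.apache.hadoop

If hadoop-core does not appear, or appears with an unexpected version, refresh the project in Eclipse (in m2e: Maven > Update Project) so the build path is rebuilt from the pom.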

When you run the jar from the terminal, the reason it works is probably that the needed jars are present on the classpath it references. Also, when running from the terminal, you have the option of packaging those jars (hadoop-core included) into your own jar as a fat jar. I assume you did not use that option when creating the jar; with it, the references would be picked up from inside your jar rather than depending on the external classpath.
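
For reference, here is a minimal sketch of such a fat-jar setup using the maven-shade-plugin (my illustration; the poster's pom does not contain it). Bound to the package phase, the shade goal bundles all compile-scope dependencies, hadoop-core included, into the jar that mvn package produces:

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.1</version>
                <executions>
                    <!-- bundle compile-scope dependencies into the jar at package time -->
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

A jar built this way carries its own copies of the Hadoop classes, which is exactly the "picked from inside your jar, not from the classpath" behaviour described above.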

Check each of these steps; that will help you find the cause. Happy coding!

Regarding maven - Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.CanSetDropBehind issue in Eclipse, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30956799/
