scala - Spark sbt "Conflicting cross-version suffixes" error with elasticsearch-hadoop

I have an sbt project with multiple submodules. I'm using Spark, and I recently tried upgrading to Spark 2.0.0, which requires Scala 2.11 instead of Scala 2.10. Here is my sbt configuration:

project/commons.scala:

import sbt._
import Keys._

object Commons {
  val appVersion = "0.0.2"

  val settings: Seq[Def.Setting[_]] = Seq(
    version := appVersion,
    scalaVersion := "2.11.8",
    resolvers += Opts.resolver.mavenLocalFile,
    resolvers += "conjars" at "http://conjars.org/repo",
    resolvers += "clojars" at "https://clojars.org/repo"
  )
}

project/dependencies.scala:

import sbt._
import Keys._

object Dependencies {
  val sparkVersion = "2.0.0"
  val awsVersion = "1.11.12"
  val sprayVersion = "1.3.2"
  val hiveVersion = "2.1.0"
  val hadoopVersion = "2.7.2"
  val esVersion = "2.3.2"

  val sparkCoreDependency: ModuleID = "org.apache.spark" %% "spark-core" % sparkVersion % "provided"
  val sparkMLDependency: ModuleID = "org.apache.spark" %% "spark-mllib" % sparkVersion % "provided"
  val awsDependency: ModuleID = "com.amazonaws" % "aws-java-sdk" % awsVersion
  val sprayJsonDependency: ModuleID = "io.spray" %% "spray-json" % sprayVersion
  val hiveExecDependency: ModuleID = "org.apache.hive" % "hive-exec" % hiveVersion % "provided"
  val hadoopCommonDependency: ModuleID = "org.apache.hadoop" % "hadoop-common" % hadoopVersion % "provided"
  val esHadoopDependency: ModuleID = "org.elasticsearch" % "elasticsearch-hadoop" % esVersion % "provided"
}

build.sbt:

import Dependencies._

lazy val utils = (project in file("utils")).
  settings(Commons.settings: _*).
  settings(
    name := "hadoop-utils",
    libraryDependencies ++= Seq(
      hiveExecDependency,
      sparkCoreDependency,
      awsDependency,
      hadoopCommonDependency,
      esHadoopDependency
    )
  )

lazy val ingestion = (project in file("ingestion")).
  settings(Commons.settings: _*).
  settings(
    name := "datascience-ingestion",
    libraryDependencies ++= Seq(
      sparkCoreDependency,
      awsDependency,
      sprayJsonDependency
    )
  ).
  dependsOn(utils)

lazy val hlda = (project in file("hlda")).
  settings(Commons.settings: _*).
  settings(
    name := "hlda",
    libraryDependencies ++= Seq(
      sparkCoreDependency,
      sparkMLDependency,
      awsDependency,
      sprayJsonDependency
    ),
    assemblyShadeRules in assembly := Seq(
      ShadeRule.rename("org.apache.http.**" -> "shadehttp.@1").inAll
    )
  )

When I try to compile, I get:

$ sbt utils/compile
[info] Loading global plugins from ~/.sbt/0.13/plugins
[info] Loading project definition from /<snip>/project
[info] Compiling 2 Scala sources to /<snip>/project/target/scala-2.10/sbt-0.13/classes...
[info] Set current project to datascience-ingestion (in build file:/<snip>/)
[info] Updating {file:/<snip>/}utils...
[info] Resolving jline#jline;2.12.1 ...
[info] Done updating.
[error] Modules were resolved with conflicting cross-version suffixes in {file:/<snip>/}utils:
[error]    org.apache.spark:spark-launcher _2.11, _2.10
[error]    org.json4s:json4s-ast _2.11, _2.10
[error]    org.apache.spark:spark-network-shuffle _2.11, _2.10
[error]    com.twitter:chill _2.11, _2.10
[error]    org.json4s:json4s-jackson _2.11, _2.10
[error]    com.fasterxml.jackson.module:jackson-module-scala _2.11, _2.10
[error]    org.json4s:json4s-core _2.11, _2.10
[error]    org.apache.spark:spark-unsafe _2.11, _2.10
[error]    org.apache.spark:spark-core _2.11, _2.10
[error]    org.apache.spark:spark-network-common _2.11, _2.10
java.lang.RuntimeException: Conflicting cross-version suffixes in: org.apache.spark:spark-launcher, org.json4s:json4s-ast, org.apache.spark:spark-network-shuffle, com.twitter:chill, org.json4s:json4s-jackson, com.fasterxml.jackson.module:jackson-module-scala, org.json4s:json4s-core, org.apache.spark:spark-unsafe, org.apache.spark:spark-core, org.apache.spark:spark-network-common
    at scala.sys.package$.error(package.scala:27)
    at sbt.ConflictWarning$.processCrossVersioned(ConflictWarning.scala:46)
    at sbt.ConflictWarning$.apply(ConflictWarning.scala:32)
    at sbt.Classpaths$$anonfun$69.apply(Defaults.scala:1219)
    at sbt.Classpaths$$anonfun$69.apply(Defaults.scala:1216)
    at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
    at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
    at sbt.std.Transform$$anon$4.work(System.scala:63)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
    at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
    at sbt.Execute.work(Execute.scala:237)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
    at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
    at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[error] (utils/*:update) Conflicting cross-version suffixes in: org.apache.spark:spark-launcher, org.json4s:json4s-ast, org.apache.spark:spark-network-shuffle, com.twitter:chill, org.json4s:json4s-jackson, com.fasterxml.jackson.module:jackson-module-scala, org.json4s:json4s-core, org.apache.spark:spark-unsafe, org.apache.spark:spark-core, org.apache.spark:spark-network-common
[error] Total time: 4 s, completed Aug 11, 2016 10:43:16 AM

I even tried "hard-coding" the version suffix, for example:

val sparkCoreDependency: ModuleID = "org.apache.spark" % "spark-core_2.11" % sparkVersion % "provided"

But I get the same error. Everything worked before I changed just the Spark and Scala version numbers.

Best Answer

This is because elasticsearch-hadoop 2.3.2 depends on spark-core_2.10 and spark-sql_2.10. You may have to either stay on Scala 2.10 or fork elasticsearch-hadoop.
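If staying on Scala 2.11 is a hard requirement, one workaround worth trying is to exclude the conflicting Scala 2.10 transitive dependencies from elasticsearch-hadoop. Since spark-core is already declared "provided" in this build, the excluded Spark classes would come from the cluster at runtime anyway. Below is a minimal, untested sketch against project/dependencies.scala; the exact exclusion set may need adjusting, and note that it also drops spark-sql_2.10, which elasticsearch-hadoop's Spark SQL integration expects to find at runtime:

import sbt._

object Dependencies {
  val esVersion = "2.3.2"

  // Hypothetical workaround: strip the _2.10 transitive dependencies of
  // elasticsearch-hadoop that clash with the _2.11 artifacts resolved via
  // spark-core_2.11. ExclusionRule(organization = ...) removes every
  // artifact from that organization in this module's dependency graph.
  val esHadoopDependency: ModuleID =
    ("org.elasticsearch" % "elasticsearch-hadoop" % esVersion % "provided")
      .excludeAll(
        ExclusionRule(organization = "org.apache.spark"),
        ExclusionRule(organization = "org.json4s"),
        ExclusionRule(organization = "com.twitter"),
        ExclusionRule(organization = "com.fasterxml.jackson.module")
      )
}

To confirm which module is dragging in the _2.10 suffixes before and after such a change, the sbt-dependency-graph plugin for sbt 0.13 can print the resolved tree: add addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.8.2") to project/plugins.sbt and run sbt utils/dependencyTree.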

Regarding the scala - Spark sbt "Conflicting cross-version suffixes" error with elasticsearch-hadoop, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38903045/
