
java - How to fix scala.tools.nsc.typechecker.Contexts$Context.imports(Contexts.scala:232) in an sbt project?


The problem is the following error:

[error] at scala.tools.nsc.typechecker.Typers$Typer.typedApply$1(Typers.scala:4580)
[error] at scala.tools.nsc.typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5343)
[error] at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5360)
[error] at scala.tools.nsc.typechecker.Typers$Typer.runTyper$1(Typers.scala:5396)
[error] (compile / compileIncremental) java.lang.StackOverflowError
[error] Total time: 11 s, completed Apr 25, 2019 7:11:28 PM

I also tried increasing the JVM options with javaOptions ++= Seq("-Xms512M", "-Xmx4048M", "-XX:MaxPermSize=4048M", "-XX:+CMSClassUnloadingEnabled"), but it didn't help. All dependencies seem to resolve correctly, yet this error is rather baffling.
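One likely reason that javaOptions change had no effect (my note, not from the original post): javaOptions only applies to JVMs that sbt forks for run and test, while scalac runs inside the sbt JVM itself, so compiler heap and stack limits must be raised where sbt is launched. A minimal sketch of the distinction, in the same sbt 0.13-style syntax the build already uses:

// Applies ONLY to forked run/test JVMs, never to the compiler:
fork in Test := true
javaOptions in Test ++= Seq("-Xss8M", "-Xmx4G")

// The compiler runs inside the sbt JVM, whose flags come from the
// launcher configuration (e.g. conf\sbtconfig.txt on Windows, or the
// SBT_OPTS environment variable), not from build.sbt.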

build.properties
sbt.version=1.2.8
plugins.sbt
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1")
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.9")
And the build.sbt

name := "ProjectNew"

version := "4.0"

scalaVersion := "2.11.8"



fork := true

libraryDependencies ++= Seq(
  "org.scalaz" %% "scalaz-core" % "7.1.0" % "test",
  ("org.apache.spark" %% "spark-core" % "2.1.0.cloudera1")
    .exclude("org.mortbay.jetty", "servlet-api")
    .exclude("commons-beanutils", "commons-beanutils-core")
    //.exclude("commons-collections", "commons-collections")
    .exclude("com.esotericsoftware.minlog", "minlog")
    //.exclude("org.apache.hadoop", "hadoop-client")
    .exclude("commons-logging", "commons-logging") % "provided",
  ("org.apache.spark" %% "spark-sql" % "2.1.0.cloudera1")
    .exclude("com.esotericsoftware.minlog", "minlog")
    //.exclude("org.apache.hadoop", "hadoop-client")
    % "provided",
  ("org.apache.spark" %% "spark-hive" % "2.1.0.cloudera1")
    .exclude("com.esotericsoftware.minlog", "minlog")
    //.exclude("org.apache.hadoop", "hadoop-client")
    % "provided",
  "spark.jobserver" % "job-server-api" % "0.4.0",
  "org.scalatest" %% "scalatest" % "2.2.4" % "test",
  "com.github.nscala-time" %% "nscala-time" % "1.6.0"
)


//libraryDependencies ++= Seq(
// "org.apache.spark" %% "spark-core" % "1.5.0-cdh5.5.0" % "provided",
// "org.apache.spark" %% "spark-sql" % "1.5.0-cdh5.5.0" % "provided",
// "org.scalatest"%"scalatest_2.10" % "2.2.4" % "test",
// "com.github.nscala-time" %% "nscala-time" % "1.6.0"
// )


resolvers ++= Seq(
  "cloudera" at "http://repository.cloudera.com/artifactory/cloudera-repos/",
  "Job Server Bintray" at "http://dl.bintray.com/spark-jobserver/maven"
)
scalacOptions ++= Seq("-unchecked", "-deprecation")

assemblyMergeStrategy in assembly := {
  // Drop META-INF metadata (signatures, manifests) that clashes across jars,
  // and take the first copy of any other duplicate file.
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
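A side note on that merge strategy (my addition, not part of the original post): discarding all of META-INF also drops the META-INF/services registrations that some libraries rely on at runtime. A slightly safer variant, sketched below, concatenates those files instead:

assemblyMergeStrategy in assembly := {
  // Keep service-loader registrations by concatenating them across jars
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.concat
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}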

parallelExecution in Test := false

fork in Test := true

javaOptions ++= Seq("-Xms512M", "-Xmx4048M", "-XX:MaxPermSize=4048M", "-XX:+CMSClassUnloadingEnabled")

Best Answer

This is a memory problem. I referred to the following answer: Answer. In my C:\Program Files (x86)\sbt\conf\sbtconfig file, I added/increased the following memory parameters.

-Xmx2G
-XX:MaxPermSize=1000m
-XX:ReservedCodeCacheSize=1000m
-Xss8M

With that in place, running sbt package worked seamlessly and the compile succeeded. Thanks, everyone.
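A note on why this works (my addition): the failure is a java.lang.StackOverflowError thrown inside the typechecker, so the flag that addresses it most directly is -Xss8M, which enlarges the thread stack the compiler recurses on; -Xmx and -XX:ReservedCodeCacheSize help large builds more generally, and -XX:MaxPermSize only matters on Java 7 and older (it is ignored with a warning on Java 8+). On Linux/macOS, assuming the standard sbt launcher script, the same flags can be passed without editing the installation, for example via an .sbtopts file in the project root, where options prefixed with -J are handed to the JVM:

-J-Xmx2G
-J-Xss8M
-J-XX:ReservedCodeCacheSize=1000m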

Regarding "java - How to fix scala.tools.nsc.typechecker.Contexts$Context.imports(Contexts.scala:232) in an sbt project?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55859159/
