I am trying to load a sample MongoDB collection into Spark using Scala and then save the RDD to a text file. Here is my code:
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}
import com.mongodb.hadoop.MongoInputFormat
import org.bson.BSONObject

val sc = new SparkContext(conf)
val mongoConfig = new Configuration()
mongoConfig.set("mongo.input.uri",
  "mongodb://localhost:27017/myDB.myCollectionData")
val sparkConf = new SparkConf()
val documents = sc.newAPIHadoopRDD(
  mongoConfig,               // Configuration
  classOf[MongoInputFormat], // InputFormat
  classOf[Object],           // Key type
  classOf[BSONObject])       // Value type
documents.map(t => t._1).saveAsTextFile("myMongo")
Then I get the following error:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at com.mongodb.MongoClientOptions$Builder.<init>(MongoClientOptions.java:55)
at com.mongodb.MongoClientURI.<init>(MongoClientURI.java:165)
at com.mongodb.hadoop.util.MongoConfigUtil.getMongoClientURI(MongoConfigUtil.java:318)
at com.mongodb.hadoop.util.MongoConfigUtil.getInputURI(MongoConfigUtil.java:322)
at com.mongodb.hadoop.splitter.MongoSplitterFactory.getSplitter(MongoSplitterFactory.java:107)
at com.mongodb.hadoop.MongoInputFormat.getSplits(MongoInputFormat.java:56)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1505)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1087)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:954)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:863)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1290)
The error is thrown from this line:
documents.map(t => t._1).saveAsTextFile("myMongo")
Does anyone know what this error means? Thanks a lot!
In addition, here is my dependency tree:
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ myProject ---
[INFO] +- junit:junit:jar:4.11:compile
[INFO] | \- org.hamcrest:hamcrest-core:jar:1.3:compile
[INFO] +- org.scala-lang:scala-library:jar:2.10.3:compile
[INFO] +- org.scalatest:scalatest_2.10:jar:2.0.M5b:test
[INFO] +- com.thoughtworks.xstream:xstream:jar:1.4.4:compile
[INFO] | +- xmlpull:xmlpull:jar:1.1.3.1:compile
[INFO] | \- xpp3:xpp3_min:jar:1.1.4c:compile
[INFO] +- com.google.code.gson:gson:jar:2.2.4:compile
[INFO] +- org.apache.hadoop:hadoop-client:jar:2.6.0:compile
[INFO] | +- org.apache.hadoop:hadoop-common:jar:2.6.0:compile
[INFO] | | +- commons-cli:commons-cli:jar:1.2:compile
[INFO] | | +- xmlenc:xmlenc:jar:0.52:compile
[INFO] | | +- commons-httpclient:commons-httpclient:jar:3.1:compile
[INFO] | | +- commons-codec:commons-codec:jar:1.4:compile
[INFO] | | +- commons-io:commons-io:jar:2.4:compile
[INFO] | | +- commons-collections:commons-collections:jar:3.2.1:compile
[INFO] | | +- commons-logging:commons-logging:jar:1.1.3:compile
[INFO] | | +- commons-lang:commons-lang:jar:2.6:compile
[INFO] | | +- commons-configuration:commons-configuration:jar:1.6:compile
[INFO] | | | +- commons-digester:commons-digester:jar:1.8:compile
[INFO] | | | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
[INFO] | | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
[INFO] | | +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] | | +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] | | +- org.apache.avro:avro:jar:1.7.4:compile
[INFO] | | +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] | | +- org.apache.hadoop:hadoop-auth:jar:2.6.0:compile
[INFO] | | | +- org.apache.httpcomponents:httpclient:jar:4.2.5:compile
[INFO] | | | | \- org.apache.httpcomponents:httpcore:jar:4.2.4:compile
[INFO] | | | +- org.apache.directory.server:apacheds-kerberos-codec:jar:2.0.0-M15:compile
[INFO] | | | | +- org.apache.directory.server:apacheds-i18n:jar:2.0.0-M15:compile
[INFO] | | | | +- org.apache.directory.api:api-asn1-api:jar:1.0.0-M20:compile
[INFO] | | | | \- org.apache.directory.api:api-util:jar:1.0.0-M20:compile
[INFO] | | | \- org.apache.curator:curator-framework:jar:2.6.0:compile
[INFO] | | +- org.apache.curator:curator-client:jar:2.6.0:compile
[INFO] | | +- org.apache.curator:curator-recipes:jar:2.6.0:compile
[INFO] | | +- org.htrace:htrace-core:jar:3.0.4:compile
[INFO] | | \- org.apache.commons:commons-compress:jar:1.4.1:compile
[INFO] | | \- org.tukaani:xz:jar:1.0:compile
[INFO] | +- org.apache.hadoop:hadoop-hdfs:jar:2.6.0:compile
[INFO] | | +- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
[INFO] | | +- io.netty:netty:jar:3.6.2.Final:compile
[INFO] | | \- xerces:xercesImpl:jar:2.9.1:compile
[INFO] | | \- xml-apis:xml-apis:jar:1.3.04:compile
[INFO] | +- org.apache.hadoop:hadoop-mapreduce-client-app:jar:2.6.0:compile
[INFO] | +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.0:compile
[INFO] | +- org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.6.0:compile
[INFO] | | \- org.apache.hadoop:hadoop-yarn-common:jar:2.6.0:compile
[INFO] | | +- javax.xml.bind:jaxb-api:jar:2.2.2:compile
[INFO] | | | +- javax.xml.stream:stax-api:jar:1.0-2:compile
[INFO] | | | \- javax.activation:activation:jar:1.1:compile
[INFO] | | +- com.sun.jersey:jersey-core:jar:1.9:compile
[INFO] | | +- com.sun.jersey:jersey-client:jar:1.9:compile
[INFO] | | +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
[INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile
[INFO] | +- org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.6.0:compile
[INFO] | \- org.apache.hadoop:hadoop-annotations:jar:2.6.0:compile
[INFO] +- org.eclipse.jetty:jetty-servlet:jar:8.1.14.v20131031:compile
[INFO] | \- org.eclipse.jetty:jetty-security:jar:8.1.14.v20131031:compile
[INFO] | \- org.eclipse.jetty:jetty-server:jar:8.1.14.v20131031:compile
[INFO] | +- org.eclipse.jetty:jetty-continuation:jar:8.1.14.v20131031:compile
[INFO] | \- org.eclipse.jetty:jetty-http:jar:8.1.14.v20131031:compile
[INFO] | \- org.eclipse.jetty:jetty-io:jar:8.1.14.v20131031:compile
[INFO] | \- org.eclipse.jetty:jetty-util:jar:8.1.14.v20131031:compile
[INFO] +- com.google.guava:guava:jar:14.0.1:compile
[INFO] +- net.sourceforge.argparse4j:argparse4j:jar:0.4.3:compile
[INFO] +- com.amazonaws:aws-java-sdk:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-elasticloadbalancing:jar:1.9.1:compile
[INFO] | | \- com.amazonaws:aws-java-sdk-core:jar:1.9.1:compile
[INFO] | | \- joda-time:joda-time:jar:2.9:compile (version selected from constraint [2.2,))
[INFO] | +- com.amazonaws:aws-java-sdk-cloudfront:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-datapipeline:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-storagegateway:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-ec2:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-elasticbeanstalk:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-emr:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-simpledb:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-cloudsearch:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-directconnect:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-redshift:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-rds:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-cloudformation:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-kinesis:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-logs:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-cloudwatchmetrics:jar:1.9.1:compile
[INFO] | | +- com.amazonaws:aws-java-sdk-cloudwatch:jar:1.9.1:compile
[INFO] | | \- com.amazonaws:aws-java-sdk-dynamodb:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-cognitosync:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-importexport:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-glacier:jar:1.9.1:compile
[INFO] | | +- com.amazonaws:aws-java-sdk-sqs:jar:1.9.1:compile
[INFO] | | +- com.amazonaws:aws-java-sdk-sns:jar:1.9.1:compile
[INFO] | | \- com.amazonaws:aws-java-sdk-s3:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-elastictranscoder:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-cloudtrail:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-sts:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-support:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-cognitoidentity:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-autoscaling:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-elasticache:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-ses:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-swf-libraries:jar:1.9.1:compile
[INFO] | | \- com.amazonaws:aws-java-sdk-simpleworkflow:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-opsworks:jar:1.9.1:compile
[INFO] | +- com.amazonaws:aws-java-sdk-route53:jar:1.9.1:compile
[INFO] | \- com.amazonaws:aws-java-sdk-iam:jar:1.9.1:compile
[INFO] +- log4j:log4j:jar:1.2.16:compile
[INFO] +- com.github.scopt:scopt_2.10:jar:3.2.0:compile
[INFO] +- org.apache.spark:spark-mllib_2.10:jar:1.3.1:compile
[INFO] | +- org.apache.spark:spark-streaming_2.10:jar:1.3.1:compile
[INFO] | +- org.apache.spark:spark-sql_2.10:jar:1.3.1:compile
[INFO] | | +- org.apache.spark:spark-catalyst_2.10:jar:1.3.1:compile
[INFO] | | | +- org.scala-lang:scala-compiler:jar:2.10.4:compile
[INFO] | | | +- org.scala-lang:scala-reflect:jar:2.10.4:compile
[INFO] | | | \- org.scalamacros:quasiquotes_2.10:jar:2.0.1:compile
[INFO] | | +- com.twitter:parquet-column:jar:1.6.0rc3:compile
[INFO] | | | +- com.twitter:parquet-common:jar:1.6.0rc3:compile
[INFO] | | | \- com.twitter:parquet-encoding:jar:1.6.0rc3:compile
[INFO] | | | \- com.twitter:parquet-generator:jar:1.6.0rc3:compile
[INFO] | | +- com.twitter:parquet-hadoop:jar:1.6.0rc3:compile
[INFO] | | | +- com.twitter:parquet-format:jar:2.2.0-rc1:compile
[INFO] | | | \- com.twitter:parquet-jackson:jar:1.6.0rc3:compile
[INFO] | | \- org.jodd:jodd-core:jar:3.6.3:compile
[INFO] | +- org.apache.spark:spark-graphx_2.10:jar:1.3.1:compile
[INFO] | +- org.jblas:jblas:jar:1.2.3:compile
[INFO] | +- org.scalanlp:breeze_2.10:jar:0.11.2:compile
[INFO] | | +- org.scalanlp:breeze-macros_2.10:jar:0.11.2:compile
[INFO] | | +- com.github.fommil.netlib:core:jar:1.1.2:compile
[INFO] | | +- net.sourceforge.f2j:arpack_combined_all:jar:0.1:compile
[INFO] | | +- net.sf.opencsv:opencsv:jar:2.3:compile
[INFO] | | +- com.github.rwl:jtransforms:jar:2.4.0:compile
[INFO] | | \- org.spire-math:spire_2.10:jar:0.7.4:compile
[INFO] | | \- org.spire-math:spire-macros_2.10:jar:0.7.4:compile
[INFO] | +- org.apache.commons:commons-math3:jar:3.1.1:compile
[INFO] | \- org.spark-project.spark:unused:jar:1.0.0:compile
[INFO] +- org.apache.lucene:lucene-spellchecker:jar:3.6.2:compile
[INFO] | +- org.apache.lucene:lucene-core:jar:3.6.2:compile
[INFO] | \- org.apache.lucene:lucene-analyzers:jar:3.6.2:compile
[INFO] +- org.mongodb:bson:jar:2.5.1:compile
[INFO] +- org.mongodb.mongo-hadoop:mongo-hadoop-core:jar:1.4.1:compile
[INFO] | +- org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:2.6.0:compile
[INFO] | | +- org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.0:compile
[INFO] | | | \- org.apache.zookeeper:zookeeper:jar:3.4.6:compile
[INFO] | | | \- jline:jline:jar:0.9.94:compile
[INFO] | | +- org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:2.6.0:compile
[INFO] | | | +- org.codehaus.jettison:jettison:jar:1.1:compile
[INFO] | | | +- javax.servlet:servlet-api:jar:2.5:compile
[INFO] | | | +- com.google.inject:guice:jar:3.0:compile
[INFO] | | | | +- javax.inject:javax.inject:jar:1:compile
[INFO] | | | | \- aopalliance:aopalliance:jar:1.0:compile
[INFO] | | | +- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO] | | | | \- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
[INFO] | | | \- com.sun.jersey.contribs:jersey-guice:jar:1.9:compile
[INFO] | | | \- com.sun.jersey:jersey-server:jar:1.9:compile
[INFO] | | | \- asm:asm:jar:3.1:compile
[INFO] | | +- org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:compile
[INFO] | | \- com.google.inject.extensions:guice-servlet:jar:3.0:compile
[INFO] | +- org.apache.hadoop:hadoop-mapreduce-client-common:jar:2.6.0:compile
[INFO] | | \- org.apache.hadoop:hadoop-yarn-client:jar:2.6.0:compile
[INFO] | \- org.mongodb:mongo-java-driver:jar:3.0.0:compile
[INFO] \- org.apache.spark:spark-core_2.10:jar:1.3.1:compile
[INFO] +- com.twitter:chill_2.10:jar:0.5.0:compile
[INFO] | \- com.esotericsoftware.kryo:kryo:jar:2.21:compile
[INFO] | +- com.esotericsoftware.reflectasm:reflectasm:jar:shaded:1.07:compile
[INFO] | +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
[INFO] | \- org.objenesis:objenesis:jar:1.2:compile
[INFO] +- com.twitter:chill-java:jar:0.5.0:compile
[INFO] +- org.apache.spark:spark-network-common_2.10:jar:1.3.1:compile
[INFO] +- org.apache.spark:spark-network-shuffle_2.10:jar:1.3.1:compile
[INFO] +- net.java.dev.jets3t:jets3t:jar:0.7.1:compile
[INFO] +- org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
[INFO] +- org.apache.commons:commons-lang3:jar:3.3.2:compile
[INFO] +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
[INFO] +- org.slf4j:slf4j-api:jar:1.7.10:compile
[INFO] +- org.slf4j:jul-to-slf4j:jar:1.7.10:compile
[INFO] +- org.slf4j:jcl-over-slf4j:jar:1.7.10:compile
[INFO] +- org.slf4j:slf4j-log4j12:jar:1.7.10:compile
[INFO] +- com.ning:compress-lzf:jar:1.0.0:compile
[INFO] +- org.xerial.snappy:snappy-java:jar:1.1.1.6:compile
[INFO] +- net.jpountz.lz4:lz4:jar:1.2.0:compile
[INFO] +- org.roaringbitmap:RoaringBitmap:jar:0.4.5:compile
[INFO] +- commons-net:commons-net:jar:2.2:compile
[INFO] +- org.spark-project.akka:akka-remote_2.10:jar:2.3.4-spark:compile
[INFO] | +- org.spark-project.akka:akka-actor_2.10:jar:2.3.4-spark:compile
[INFO] | | \- com.typesafe:config:jar:1.2.1:compile
[INFO] | +- org.spark-project.protobuf:protobuf-java:jar:2.5.0-spark:compile
[INFO] | \- org.uncommons.maths:uncommons-maths:jar:1.2.2a:compile
[INFO] +- org.spark-project.akka:akka-slf4j_2.10:jar:2.3.4-spark:compile
[INFO] +- org.json4s:json4s-jackson_2.10:jar:3.2.10:compile
[INFO] | \- org.json4s:json4s-core_2.10:jar:3.2.10:compile
[INFO] | \- org.json4s:json4s-ast_2.10:jar:3.2.10:compile
[INFO] +- org.apache.mesos:mesos:jar:shaded-protobuf:0.21.0:compile
[INFO] +- io.netty:netty-all:jar:4.0.23.Final:compile
[INFO] +- com.clearspring.analytics:stream:jar:2.7.0:compile
[INFO] +- io.dropwizard.metrics:metrics-core:jar:3.1.0:compile
[INFO] +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
[INFO] +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
[INFO] +- io.dropwizard.metrics:metrics-graphite:jar:3.1.0:compile
[INFO] +- com.fasterxml.jackson.core:jackson-databind:jar:2.4.4:compile
[INFO] | +- com.fasterxml.jackson.core:jackson-annotations:jar:2.4.0:compile
[INFO] | \- com.fasterxml.jackson.core:jackson-core:jar:2.4.4:compile
[INFO] +- com.fasterxml.jackson.module:jackson-module-scala_2.10:jar:2.4.4:compile
[INFO] | \- com.thoughtworks.paranamer:paranamer:jar:2.6:compile
[INFO] +- org.apache.ivy:ivy:jar:2.4.0:compile
[INFO] +- oro:oro:jar:2.0.8:compile
[INFO] +- org.tachyonproject:tachyon-client:jar:0.5.0:compile
[INFO] | \- org.tachyonproject:tachyon:jar:0.5.0:compile
[INFO] +- org.spark-project:pyrolite:jar:2.0.1:compile
[INFO] \- net.sf.py4j:py4j:jar:0.8.2.1:compile
Accepted answer
Thanks to imagin's help, here is the final combination of dependencies that works:
<dependencies>
  <dependency> <!-- Spark dependency -->
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.5.0</version>
  </dependency>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.4</version>
  </dependency>
  <dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>bson</artifactId>
    <version>3.0.4</version>
  </dependency>
  <dependency>
    <groupId>org.mongodb.mongo-hadoop</groupId>
    <artifactId>mongo-hadoop-core</artifactId>
    <version>1.4.1</version>
  </dependency>
  <dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.0.4</version>
  </dependency>
  <dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver</artifactId>
    <version>3.0.4</version>
  </dependency>
</dependencies>
Regarding "mongodb - Spark with MongoDB: java.lang.IncompatibleClassChangeError: Implementing class", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/33353503/