
hadoop - On EMR Spark, JDBC load fails the first time, then works


I am using spark-shell with Spark 2.1.0 on AWS Elastic MapReduce 5.3.1 to load data from a Postgres database. The first call to loader.load always fails, and a second call then succeeds. Why does this happen?

[hadoop@[SNIP] ~]$ SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell --driver-class-path ~/postgresql-42.0.0.jar 
Spark Command: /etc/alternatives/jre/bin/java -cp /home/hadoop/postgresql-42.0.0.jar:/usr/lib/spark/conf/:/usr/lib/spark/jars/*:/etc/hadoop/conf/ -Dscala.usejavacp=true -Xmx640M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p org.apache.spark.deploy.SparkSubmit --conf spark.driver.extraClassPath=/home/hadoop/postgresql-42.0.0.jar --class org.apache.spark.repl.Main --name Spark shell spark-shell
========================================
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/02/28 17:17:52 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/02/28 17:18:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://[SNIP]
Spark context available as 'sc' (master = yarn, app id = application_1487878172787_0014).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val loader = spark.read.format("jdbc") // connection options removed
loader: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@46067a74

scala> loader.load
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
... 48 elided

scala> loader.load
res1: org.apache.spark.sql.DataFrame = [id: int, fsid: string ... 4 more fields]
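
One commonly suggested way to avoid this first-call failure (a sketch, not something from the original post) is to name the JDBC driver class explicitly via Spark's driver option. Spark then loads and registers the class itself, rather than relying on java.sql.DriverManager.getDriver, which only finds drivers whose classes have already been loaded. The connection details below are placeholders:

// Sketch: pass the driver class explicitly so Spark loads it up front.
val df = spark.read.format("jdbc")
  .option("url", "jdbc:postgresql://<host>:<port>/<database>") // placeholder URL
  .option("dbtable", "<schema.table>")                         // placeholder table
  .option("user", "<username>")
  .option("password", "<password>")
  .option("driver", "org.postgresql.Driver") // forces the class to load before connecting
  .load()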

Best Answer

I ran into the same problem. I was trying to connect to Vertica from Spark over JDBC, using the Spark shell with Spark version 2.2.0 and Java version 1.8.

External jars used for the connection: vertica-8.1.1_spark2.1_scala2.11-20170623.jar and vertica-jdbc-8.1.1-0.jar

Connection code:

import java.sql.DriverManager
import java.util.Properties // for the connectionProperties below
import com.vertica.jdbc.Driver


val jdbcUsername = "<username>"
val jdbcPassword = "<password>"
val jdbcHostname = "<vertica server>"
val jdbcPort = <vertica port>
val jdbcDatabase ="<vertica DB>"
val jdbcUrl = s"jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}"

val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword )

val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)
On the first run this failed with:

java.sql.SQLException: No suitable driver found for jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
... 56 elided

If I run the same command a second time, I get the following output and the connection is established:

scala> val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)
connection: java.sql.Connection = com.vertica.jdbc.VerticaJdbc4ConnectionImpl@7d994c
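
In both cases the behaviour is consistent with the driver class not yet having been loaded, and therefore not registered with DriverManager, at the time of the first call; by the second call the class has been loaded and registration has happened. Under that assumption, forcing the class to load before connecting should make the first attempt succeed. A sketch, reusing the names from the snippet above:

import java.sql.DriverManager
import java.util.Properties

// Loading the driver class runs its static initializer, which registers
// it with DriverManager, so the first getConnection can find it.
Class.forName("com.vertica.jdbc.Driver")

val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword)

val connection = DriverManager.getConnection(jdbcUrl, connectionProperties)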

Regarding hadoop - On EMR Spark, JDBC load fails the first time, then works, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42515185/
