
mysql - Spark Structured Streaming: primary key in JDBC sink


I am reading a stream of data from a Kafka topic using Structured Streaming in update mode and then applying some transformations.

I then created a JDBC sink to push the data into MySQL in Append mode. The question is how I can tell the sink which column is my primary key and have it update rows based on that column, so that my table never ends up with duplicate rows.

import org.apache.spark.sql.{DataFrame, Dataset, SaveMode}
import org.apache.spark.sql.streaming.{OutputMode, Trigger}
import scala.concurrent.duration._

val df: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "<List-here>")
  .option("subscribe", "emp-topic")
  .load()

import spark.implicits._
// value in kafka is bytes, so cast it to String
val empList: Dataset[Employee] = df
  .selectExpr("CAST(value AS STRING)")
  .map(row => Employee(row.getString(0)))

// window aggregations on 1 min windows
val aggregatedDf = ......

// How to tell here that id is my primary key and do the update
// based on id column
aggregatedDf
  .writeStream
  .trigger(Trigger.ProcessingTime(60.seconds))
  .outputMode(OutputMode.Update)
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    batchDF
      .select("id", "name", "salary", "dept")
      .write.format("jdbc")
      .option("url", "jdbc:mysql://localhost/empDb")
      .option("driver", "com.mysql.cj.jdbc.Driver")
      .option("dbtable", "empDf")
      .option("user", "root")
      .option("password", "root")
      .mode(SaveMode.Append)
      .save()
  }

Best Answer

One way to achieve this is to use ON DUPLICATE KEY UPDATE together with foreachPartition.

Below is a snippet showing the idea:

import java.sql.{Connection, DriverManager}
import org.apache.spark.sql.{DataFrame, Row}

/**
 * Insert into the database using foreachPartition.
 * @param dataframe source DataFrame
 * @param sqlDatabaseConnectionString JDBC connection string
 * @param sqlTableName target table name
 * @param numPartitions number of simultaneous DB connections you plan to allow
 */
def insertToTable(dataframe: DataFrame, sqlDatabaseConnectionString: String, sqlTableName: String, numPartitions: Int): Unit = {

  val repartitionedDf = dataframe.repartition(numPartitions)

  val tableHeader: String = repartitionedDf.columns.mkString(",")
  repartitionedDf.foreachPartition { (partition: Iterator[Row]) =>
    // Note: one connection per partition (a better approach is to use a connection pool)
    val sqlExecutorConnection: Connection = DriverManager.getConnection(sqlDatabaseConnectionString)
    // A batch size of 1000 is used because some databases cannot handle batches larger than 1000, e.g. Azure SQL
    partition.grouped(1000).foreach { group =>
      // Naive value rendering: every column is quoted and nothing is escaped,
      // which is why a PreparedStatement (mentioned below) is the safer choice
      val insertString = group
        .map(record => "('" + record.mkString("','") + "')")
        .mkString(",")

      val sql =
        s"""|INSERT INTO $sqlTableName ($tableHeader) VALUES
            |$insertString
            |ON DUPLICATE KEY UPDATE
            |yourprimarykeycolumn = VALUES(yourprimarykeycolumn)
            |""".stripMargin // in practice, list the non-key columns you want refreshed here

      sqlExecutorConnection.createStatement().executeUpdate(sql)
    }
    sqlExecutorConnection.close() // close the connection
  }
}
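For example, this helper could be wired into the streaming query from the question roughly as follows. This is only a sketch, not the answer's original code: the JDBC URL, credentials and partition count are placeholder values.

// Hypothetical wiring of insertToTable into the question's foreachBatch sink.
// The URL, credentials and the partition count of 4 are placeholders.
val jdbcUrl = "jdbc:mysql://localhost/empDb?user=root&password=root"

aggregatedDf
  .writeStream
  .trigger(Trigger.ProcessingTime(60.seconds))
  .outputMode(OutputMode.Update)
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    insertToTable(batchDF.select("id", "name", "salary", "dept"), jdbcUrl, "empDf", numPartitions = 4)
  }
  .start()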

You can use a PreparedStatement instead of a plain JDBC Statement.
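As a minimal sketch of that suggestion (assuming the empDf table from the question with columns id, name, salary, dept and id as the primary key; the column types here are guesses), a parameterized upsert per partition could look like this:

import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.spark.sql.Row

// Sketch only: column names come from the question, the types are assumed.
def upsertPartition(partition: Iterator[Row], jdbcUrl: String): Unit = {
  val conn: Connection = DriverManager.getConnection(jdbcUrl)
  val sql =
    """|INSERT INTO empDf (id, name, salary, dept) VALUES (?, ?, ?, ?)
       |ON DUPLICATE KEY UPDATE name = VALUES(name), salary = VALUES(salary), dept = VALUES(dept)
       |""".stripMargin
  val stmt: PreparedStatement = conn.prepareStatement(sql)
  partition.foreach { row =>
    stmt.setString(1, row.getAs[String]("id"))
    stmt.setString(2, row.getAs[String]("name"))
    stmt.setDouble(3, row.getAs[Double]("salary"))
    stmt.setString(4, row.getAs[String]("dept"))
    stmt.addBatch()   // queue the row; values are escaped by the driver
  }
  stmt.executeBatch() // send all queued rows
  stmt.close()
  conn.close()
}

This avoids hand-built SQL strings (and the injection and escaping problems that come with them) and lets the MySQL driver batch the statements.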

Further reading: SPARK SQL - update MySql table using DataFrames and JDBC

Regarding mysql - Spark Structured Streaming: primary key in JDBC sink, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55954996/
