
java - Flink 1.2.0 JDBC reading streaming data from MySQL

Reposted · Author: 行者123 · Updated: 2023-11-29 02:45:50

I am trying to use Flink 1.2.0 to read streaming data from a MySQL log table. However, it reads the table only once and then stops the process. I want it to keep reading and print any incoming data as it arrives. Below is my code:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;

import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.LONG_TYPE_INFO;
import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.STRING_TYPE_INFO;

public class Database {

    public static void main(String[] args) throws Exception {

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // column types of ERRORLOG: id (BIGINT), SERVER_NAME (VARCHAR)
        TypeInformation<?>[] fieldTypes = new TypeInformation<?>[] { LONG_TYPE_INFO, STRING_TYPE_INFO };
        RowTypeInfo rowTypeInfo = new RowTypeInfo(fieldTypes);

        DataStreamSource<Row> source = env.createInput(
                JDBCInputFormat.buildJDBCInputFormat()
                        .setDrivername("com.mysql.jdbc.Driver")
                        .setDBUrl("jdbc:mysql://localhost/log_db")
                        .setUsername("root")
                        .setPassword("pass")
                        .setQuery("select id, SERVER_NAME from ERRORLOG")
                        .setRowTypeInfo(rowTypeInfo)
                        .finish());

        source.print().setParallelism(1);
        env.execute("Error Log Data");
    }
}

I am running it locally with Maven:

mvn exec:java -Dexec.mainClass=com.test.Database

Result:

09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.Task                     - Freeing task resources for Source: Custom Source (1/4) (41c66a6dfb97e1d024485f473617a342).
09:15:56,394 INFO  org.apache.flink.core.fs.FileSystem                           - Ensuring all FileSystem streams are closed for Source: Custom Source (1/4)
09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.Task                     - Sink: Unnamed (1/1) (5212fc2a570152c58ffe3d39d3d805b0) switched from RUNNING to FINISHED.
09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.Task                     - Freeing task resources for Sink: Unnamed (1/1) (5212fc2a570152c58ffe3d39d3d805b0).
09:15:56,394 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - Un-registering task and sending final execution state FINISHED to JobManager for task Source: Custom Source (41c66a6dfb97e1d024485f473617a342)
09:15:56,396 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Source: Custom Source (1/4) (41c66a6dfb97e1d024485f473617a342) switched from RUNNING to FINISHED.
09:15:56,396 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor      - 02/22/2017 09:15:56 Source: Custom Source(1/4) switched to FINISHED
02/22/2017 09:15:56 Source: Custom Source(1/4) switched to FINISHED
09:15:56,396 INFO  org.apache.flink.core.fs.FileSystem                           - Ensuring all FileSystem streams are closed for Sink: Unnamed (1/1)
09:15:56,397 INFO  org.apache.flink.runtime.taskmanager.TaskManager              - Un-registering task and sending final execution state FINISHED to JobManager for task Sink: Unnamed (5212fc2a570152c58ffe3d39d3d805b0)
09:15:56,398 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Sink: Unnamed (1/1) (5212fc2a570152c58ffe3d39d3d805b0) switched from RUNNING to FINISHED.
09:15:56,398 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Job Socket Window Data (0eb15d61031ede785e7ed21ead21ceea) switched from state RUNNING to FINISHED.
09:15:56,398 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor      - 02/22/2017 09:15:56 Sink: Unnamed(1/1) switched to FINISHED
02/22/2017 09:15:56 Sink: Unnamed(1/1) switched to FINISHED
09:15:56,405 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator     - Stopping checkpoint coordinator for job 0eb15d61031ede785e7ed21ead21ceea
09:15:56,406 INFO  org.apache.flink.runtime.client.JobSubmissionClientActor      - Terminate JobClientActor.
09:15:56,406 INFO  org.apache.flink.runtime.client.JobClient                     - Job execution complete
09:15:56,408 INFO  org.apache.flink.runtime.minicluster.FlinkMiniCluster         - Stopping FlinkMiniCluster.
09:15:56,405 INFO  org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore - Shutting down

Best answer

The MySQL table's contents are fixed at the moment the query starts, so this job is effectively a Flink batch job.
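The one-shot behavior becomes obvious if the same query is written as an explicit batch job. JDBCInputFormat is a bounded InputFormat, so even when used with the streaming API it emits one snapshot of the table and then the source finishes. A sketch of the batch equivalent (same connection settings as the question's code):

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.types.Row;

import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.LONG_TYPE_INFO;
import static org.apache.flink.api.common.typeinfo.BasicTypeInfo.STRING_TYPE_INFO;

public class BatchErrorLog {

    public static void main(String[] args) throws Exception {
        // Batch environment: the natural home for a bounded JDBC read.
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        RowTypeInfo rowTypeInfo = new RowTypeInfo(LONG_TYPE_INFO, STRING_TYPE_INFO);

        DataSet<Row> rows = env.createInput(
                JDBCInputFormat.buildJDBCInputFormat()
                        .setDrivername("com.mysql.jdbc.Driver")
                        .setDBUrl("jdbc:mysql://localhost/log_db")
                        .setUsername("root")
                        .setPassword("pass")
                        .setQuery("select id, SERVER_NAME from ERRORLOG")
                        .setRowTypeInfo(rowTypeInfo)
                        .finish());

        // Prints one snapshot of ERRORLOG; rows inserted afterwards are never seen.
        rows.print();
    }
}
```

The DataStream version in the question does exactly the same work; it just wraps the bounded read in a streaming job that transitions to FINISHED as soon as the result set is exhausted, which is what the log above shows.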

If you want to pick up rows that arrive after the query, Flink cannot handle that case by itself, because Flink has no way of knowing about the incoming data unless you monitor the binlog.

You have to use Canal to sync the binlog from MySQL into Kafka, and then run a Flink streaming job that reads from Kafka. That is the best solution.
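The Flink side of that pipeline can be sketched as follows. Canal writes each binlog event for the table to a Kafka topic, and the Flink job tails that topic indefinitely instead of finishing. The topic name `errorlog-binlog`, the consumer group id, and the broker address are assumptions for illustration; this uses the Kafka 0.10 connector that shipped with Flink 1.2:

```java
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class ErrorLogStream {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "errorlog-consumer");

        // Each record is one binlog event (Canal emits JSON) for the ERRORLOG table.
        // Unlike the JDBC source, this source never finishes: the job keeps
        // running and prints new events as they are produced.
        env.addSource(new FlinkKafkaConsumer010<>(
                        "errorlog-binlog", new SimpleStringSchema(), props))
           .print()
           .setParallelism(1);

        env.execute("Error Log Data");
    }
}
```

In a real job you would deserialize the Canal JSON into typed records instead of printing raw strings, but the key point is the unbounded source: the job stays in RUNNING instead of switching to FINISHED.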

Regarding java - Flink 1.2.0 JDBC reading streaming data from MySQL, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/42394435/
