
postgresql - Unable to stream data from MySQL to Postgres using Kafka


This is my first attempt at Kafka, and I have set up a Kafka cluster using AWS MSK. The goal is to stream data from a MySQL server to PostgreSQL. I am using the Debezium MySQL connector as the source and the Confluent JDBC connector as the sink.

MySQL connector configuration:

  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
"database.server.id": "1",
"tasks.max": "3",
"internal.key.converter.schemas.enable": "false",
"transforms.unwrap.add.source.fields": "ts_ms",
"key.converter.schemas.enable": "false",
"internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
"internal.value.converter.schemas.enable": "false",
"value.converter.schemas.enable": "false",
"internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"transforms": "unwrap",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"

After registering the MySQL connector, its status is RUNNING and it captures changes made to the MySQL table, showing the results in the consumer console in the following format:

{"id":5,"created_at":1594910329000,"userid":"asldnl3r234mvnkk","amount":"B6Eg","wallet_type":"CDW"}

My first question: the amount column in the table is of type decimal and contains numeric values, so why does it appear as an alphanumeric value in the consumer console?
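This is most likely Debezium's default decimal handling: with decimal.handling.mode left at "precise", DECIMAL columns are emitted as Kafka Connect Decimal (binary) values, which the JSON converter renders as Base64 text such as "B6Eg". A minimal sketch of the override on the source connector, assuming string output is acceptable:

  "decimal.handling.mode": "string"

"double" is the other option; it can lose precision, while "precise" keeps exact values but is only convenient with a schema-aware converter downstream.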

For PostgreSQL as the target database, I used the JDBC sink connector with the following configuration:

"name": "postgres-connector-db08",
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"key.converter.schemas.enable": "false",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"topics": "mysql-cash.kafka_test.test",
"connection.url": "jdbc:postgresql://xxxxxx:5432/test?currentSchema=public",
"connection.user": "xxxxxx",
"connection.password": "xxxxxx",
"insert.mode": "upsert",
"auto.create": "true",
"auto.evolve": "true"

After registering the JDBC connector, I get the following error when checking its status:

{"name":"postgres-connector-db08","connector":{"state":"RUNNING","worker_id":"x.x.x.x:8083"},"tasks":[{"id":0,"state":"FAILED","worker_id":"x.x.x.x:8083","trace":"org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:561)
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: Sink connector 'postgres-connector-db08' is configured with 'delete.enabled=false' and 'pk.mode=none' and therefore requires records with a non-null Struct value and non-null Struct schema, but found record at (topic='mysql-cash.kafka_test.test',partition=0,offset=0,timestamp=1594909233389) with a HashMap value and null value schema.
io.confluent.connect.jdbc.sink.RecordValidator.lambda$requiresValue$2(RecordValidator.java:83)
io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:82)
io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
... 10 more
"}],"type":"sink"}

Why does this error occur? Am I missing something in the sink configuration?

Best Answer

https://docs.confluent.io/kafka-connect-jdbc/current/sink-connector/index.html#data-mapping

The sink connector requires knowledge of schemas, so you should use a suitable converter e.g. the Avro converter that comes with Schema Registry, or the JSON converter with schemas enabled.

Since the JSON is plain (no schema) and the connectors are configured with "value.converter.schemas.enable": "false" (the JSON converter with schemas disabled), the Avro converter should be set up with Schema Registry instead: https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/#applying-schema
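In other words, the sink needs schema information that schema-less JSON does not carry. A minimal sketch of the two usual fixes, applied to both the source and the sink configuration (the Schema Registry URL is a placeholder):

  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "true"

or

  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://schema-registry.example.com:8081"

The first option makes every message carry a schema/payload envelope, which is verbose; the second keeps messages compact but requires a running Schema Registry, which is not part of MSK itself.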

Regarding postgresql - Unable to stream data from MySQL to Postgres using Kafka, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/62937670/
