
Flink: Reading from Kafka and Writing to a Hive Table


1. Goal

Use Flink to read data from Kafka and write it to a Hive table in real time.


2. Environment

EMR environment: Hadoop 3.3.3, Hive 3.1.3, Flink 1.16.0.

 

According to the official documentation:

https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/hive/overview/

Flink 1.16.0 supports Hive 3.1.3. For development, the following dependencies need to be added:

<!-- Flink Hive connector -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-hive_2.12</artifactId>
    <version>1.16.0</version>
    <scope>provided</scope>
</dependency>

<!-- Hive dependencies -->
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>3.1.3</version>
</dependency>


3. Hive table

A prerequisite for reading and writing Hive tables is registering a Hive catalog:

                          
// set hive dialect
tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE)

// set hive catalog
tableEnv.executeSql("CREATE CATALOG myhive WITH (" +
  "'type' = 'hive'," +
  "'default-database' = 'default'," +
  "'hive-conf-dir' = 'hiveconf'" +
  ")")

tableEnv.executeSql("use catalog myhive")


Then create the Hive table:

                          
// hive table
tableEnv.executeSql("CREATE TABLE IF NOT EXISTS hive_table (" +
  "id string," +
  "`value` float," +
  "hashdata string," +
  "num integer," +
  "token string," +
  "info string," +
  "ts timestamp " +
  ") " +
  "PARTITIONED BY (dt string, hr string) STORED AS ORC TBLPROPERTIES (" +
  // "'path'='hive-output'," +
  "'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00'," +
  "'sink.partition-commit.policy.kind'='metastore,success-file'," +
  "'sink.partition-commit.trigger'='partition-time'," +
  "'sink.partition-commit.delay'='0 s'" +
  " )")
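As a quick sanity check (a minimal sketch, not part of the original article), the same table environment can be used to confirm that the table was registered in the Hive catalog:

// Hypothetical verification step: list tables in the current catalog/database
// and print the schema of the newly created table.
tableEnv.executeSql("show tables").print()
tableEnv.executeSql("describe hive_table").print()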


4. Consuming Kafka and writing to the Hive table

Refer to the official documentation:

https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka/


Add the corresponding dependency:

<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>${flink.version}</version>
</dependency>
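Because the Kafka source table below uses 'format' = 'json', the JSON format also has to be on the classpath. On EMR the Flink distribution normally ships flink-json in its lib directory; for local development you would likely add it yourself. A hedged sketch, not from the original article:

<!-- Assumed: JSON format for the Kafka source table ('format' = 'json') -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-json</artifactId>
    <version>${flink.version}</version>
</dependency>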


Reference code (Flink SQL via the Table API):

                          
package com.tang.hive

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.SqlDialect
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object Kafka2Hive {

  /**
   * Create the Hive table.
   *
   * @param tbl_env     table environment
   * @param drop        whether to drop the table first
   * @param hiveConfDir directory containing hive-site.xml
   * @param database    Hive database name
   * @param tableName   Hive table name
   * @param dbLocation  table location (e.g. an S3 path)
   */
  def buildHiveTable(tbl_env: StreamTableEnvironment,
                     drop: Boolean,
                     hiveConfDir: String,
                     database: String,
                     tableName: String,
                     dbLocation: String) = {
    // set hive dialect
    tbl_env.getConfig().setSqlDialect(SqlDialect.HIVE)

    // set hive catalog
    tbl_env.executeSql("CREATE CATALOG myhive WITH (" +
      "'type' = 'hive'," +
      "'default-database' = '" + database + "'," +
      "'hive-conf-dir' = '" + hiveConfDir + "'" +
      ")")
    tbl_env.executeSql("use catalog myhive")

    // whether to drop the hive table first
    if (drop) {
      // drop first
      tbl_env.executeSql("drop table if exists " + tableName)
    }

    val sql = "CREATE TABLE IF NOT EXISTS " + tableName + "(" +
      "id string," +
      "`value` float," +
      "hashdata string," +
      "num integer," +
      "token string," +
      "info string," +
      "ts timestamp " +
      ") " +
      "PARTITIONED BY (dt string, hr string) STORED AS ORC " +
      "LOCATION '" + dbLocation + "/" + tableName + "' TBLPROPERTIES (" +
      "'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00'," +
      "'sink.partition-commit.policy.kind'='metastore,success-file'," +
      "'sink.partition-commit.trigger'='partition-time'," +
      "'sink.partition-commit.watermark-time-zone'='Asia/Shanghai'," +
      "'sink.partition-commit.delay'='0 s'," +
      "'auto-compaction'='true'" +
      " )"

    // hive table
    tbl_env.executeSql(sql)
  }

  /**
   * Create the Kafka source table.
   *
   * @param tbl_env          table environment
   * @param drop             whether to drop the table first
   * @param bootstrapServers Kafka bootstrap servers
   * @param topic            Kafka topic
   * @param groupId          consumer group id
   * @param tableName        Kafka table name
   */
  def buildKafkaTable(tbl_env: StreamTableEnvironment,
                      drop: Boolean,
                      bootstrapServers: String,
                      topic: String,
                      groupId: String,
                      tableName: String) = {
    // set to default dialect
    tbl_env.getConfig.setSqlDialect(SqlDialect.DEFAULT)

    if (drop) {
      tbl_env.executeSql("drop table if exists " + tableName)
    }

    // kafka table
    tbl_env.executeSql("CREATE TABLE IF NOT EXISTS " + tableName + " (" +
      "id string," +
      "`value` float," +
      "hashdata string," +
      "num integer," +
      "token string," +
      "info string," +
      "created_timestamp bigint," +
      "ts AS TO_TIMESTAMP( FROM_UNIXTIME(created_timestamp) ), " +
      "WATERMARK FOR ts AS ts - INTERVAL '5' SECOND " +
      " )" +
      "with (" +
      " 'connector' = 'kafka'," +
      " 'topic' = '" + topic + "'," +
      " 'properties.bootstrap.servers' = '" + bootstrapServers + "'," +
      " 'properties.group.id' = '" + groupId + "'," +
      " 'scan.startup.mode' = 'latest-offset'," +
      " 'format' = 'json'," +
      " 'json.fail-on-missing-field' = 'false'," +
      " 'json.ignore-parse-errors' = 'true'" +
      ")")
  }

  def main(args: Array[String]): Unit = {
    val senv = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(senv)

    // set checkpoint (configured via -D flags at submission time instead)
    // senv.enableCheckpointing(60000);
    // senv.getCheckpointConfig.setCheckpointStorage("file://flink-hive-chk");

    // get parameters
    val tool: ParameterTool = ParameterTool.fromArgs(args)
    val hiveConfDir = tool.get("hive.conf.dir", "src/main/resources")
    val database = tool.get("database", "default")
    val hiveTableName = tool.get("hive.table.name", "hive_tbl")
    val kafkaTableName = tool.get("kafka.table.name", "kafka_tbl")
    val bootstrapServers = tool.get("bootstrap.servers", "b-2.cdc.62vm9h.c4.kafka.ap-northeast-1.amazonaws.com:9092,b-1.cdc.62vm9h.c4.kafka.ap-northeast-1.amazonaws.com:9092,b-3.cdc.62vm9h.c4.kafka.ap-northeast-1.amazonaws.com:9092")
    val groupId = tool.get("group.id", "flinkConsumer")
    val reset = tool.getBoolean("tables.reset", false)
    val topic = tool.get("kafka.topic", "cider")
    val hiveDBLocation = tool.get("hive.db.location", "s3://tang-emr-tokyo/flink/kafka2hive/")

    buildHiveTable(tableEnv, reset, hiveConfDir, database, hiveTableName, hiveDBLocation)
    buildKafkaTable(tableEnv, reset, bootstrapServers, topic, groupId, kafkaTableName)

    // select from kafka table and write to hive table
    tableEnv.executeSql("insert into " + hiveTableName +
      " select id, `value`, hashdata, num, token, info, ts, DATE_FORMAT(ts, 'yyyy-MM-dd'), DATE_FORMAT(ts, 'HH') from " +
      kafkaTableName)
  }

}


Format of the data written to Kafka:

                          {"id": "35f1c5a8-ec19-4dc3-afa5-84ef6bc18bd8", "value": 1327.12, "hashdata": "0822c055f097f26f85a581da2c937895c896200795015e5f9e458889", "num": 3, "token": "800879e1ef9a356cece14e49fb6949c1b8c1862107468dc682d406893944f2b6", "info": "valentine", "created_timestamp": 1690165700
                          
                            }


                            

  。
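For testing, any producer that emits JSON records in this shape will do. Below is a minimal, hypothetical Scala sketch using the plain kafka-clients producer; the broker address and the kafka-clients dependency are assumptions, not part of the original article (the topic defaults to "cider", matching the job's default parameter):

import java.util.{Properties, UUID}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object SampleProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Assumption: replace with your own bootstrap servers
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    val rnd = new scala.util.Random()
    for (_ <- 1 to 100) {
      // Build a record matching the schema of the Kafka table defined above
      val json =
        s"""{"id": "${UUID.randomUUID()}", "value": ${rnd.nextFloat() * 2000}, """ +
        s""""hashdata": "${UUID.randomUUID().toString.replace("-", "")}", "num": ${rnd.nextInt(10)}, """ +
        s""""token": "${UUID.randomUUID().toString.replace("-", "")}", "info": "test", """ +
        s""""created_timestamp": ${System.currentTimeMillis() / 1000}}"""
      producer.send(new ProducerRecord[String, String]("cider", json))
      Thread.sleep(100)
    }
    producer.close()
  }
}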

4.1. Configuration notes

Key Hive table settings:

'sink.partition-commit.policy.kind'='metastore,success-file'
=> How downstream consumers are notified that a partition has finished writing and its data is readable. metastore and success-file are currently supported.

'sink.partition-commit.trigger'='partition-time'
=> When to trigger the partition commit. partition-time means the partition is committed once the watermark passes the partition time plus the configured delay.

'sink.partition-commit.delay'='0 s'
=> Wait this long before committing the partition.

'sink.partition-commit.watermark-time-zone'='Asia/Shanghai'
=> The time zone must match the time zone of the data timestamps.

'auto-compaction'='true'
=> Enable file compaction: small files are merged before they are committed and become visible.

How often files are flushed and committed is driven by checkpointing:

senv.enableCheckpointing(60000);

With this configuration, a checkpoint is taken every minute, at which point the files are written to S3. The checkpoint also triggers auto compaction, so roughly one ORC file is produced per minute.
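This article sets checkpointing via -D flags at submission time (next section). If you prefer to pin these settings in code instead, a rough sketch might look like the following; it assumes the flink-statebackend-rocksdb dependency is available and the S3 path is illustrative:

import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

object CheckpointConfigSketch {
  def main(args: Array[String]): Unit = {
    val senv = StreamExecutionEnvironment.getExecutionEnvironment

    // Checkpoint every 60 s with exactly-once semantics; this interval also
    // controls how often the Hive sink commits files (and partitions).
    senv.enableCheckpointing(60000, CheckpointingMode.EXACTLY_ONCE)
    senv.getCheckpointConfig.setMaxConcurrentCheckpoints(1)

    // Incremental RocksDB state backend with a durable checkpoint location (illustrative path).
    senv.setStateBackend(new EmbeddedRocksDBStateBackend(true))
    senv.getCheckpointConfig.setCheckpointStorage("s3://tang-emr-tokyo/flink/kafka2hive/checkpoints")
  }
}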


4.2. Submitting the job

Refer to the Flink official documentation.

You need to remove flink-table-planner-loader-1.16.0.jar from Flink's lib directory and replace it with flink-table-planner_2.12-1.16.0.jar:

cd /usr/lib/flink/lib
sudo mv flink-table-planner-loader-1.16.0.jar ../

sudo wget https://repo1.maven.org/maven2/org/apache/flink/flink-table-planner_2.12/1.16.0/flink-table-planner_2.12-1.16.0.jar
sudo chown flink:flink flink-table-planner_2.12-1.16.0.jar
sudo chmod +x flink-table-planner_2.12-1.16.0.jar

Then run on the EMR primary node:

sudo cp /usr/lib/hive/lib/antlr-runtime-3.5.2.jar /usr/lib/flink/lib
sudo cp /usr/lib/hive/lib/hive-exec-3.1.3*.jar /usr/lib/flink/lib
sudo cp /usr/lib/hive/lib/libfb303-0.9.3.jar /usr/lib/flink/lib
sudo cp /usr/lib/flink/opt/flink-connector-hive_2.12-1.16.0.jar /usr/lib/flink/lib

sudo chmod 755 /usr/lib/flink/lib/antlr-runtime-3.5.2.jar
sudo chmod 755 /usr/lib/flink/lib/hive-exec-3.1.3*.jar
sudo chmod 755 /usr/lib/flink/lib/libfb303-0.9.3.jar
sudo chmod 755 /usr/lib/flink/lib/flink-connector-hive_2.12-1.16.0.jar


Upload the Hive configuration file to HDFS:

hdfs dfs -mkdir /user/hadoop/hiveconf/
hdfs dfs -put /etc/hive/conf/hive-site.xml /user/hadoop/hiveconf/hive-site.xml


Submit the job from the EMR primary node:

flink run-application \
  -t yarn-application \
  -c com.tang.hive.Kafka2Hive \
  -p 8 \
  -D state.backend=rocksdb \
  -D state.checkpoint-storage=filesystem \
  -D state.checkpoints.dir=s3://tang-emr-tokyo/flink/kafka2hive/checkpoints \
  -D execution.checkpointing.interval=60000 \
  -D state.checkpoints.num-retained=5 \
  -D execution.checkpointing.mode=EXACTLY_ONCE \
  -D execution.checkpointing.externalized-checkpoint-retention=RETAIN_ON_CANCELLATION \
  -D state.backend.incremental=true \
  -D execution.checkpointing.max-concurrent-checkpoints=1 \
  -D rest.flamegraph.enabled=true \
  flink-tutorial.jar \
  --hive.conf.dir hdfs:///user/hadoop/hiveconf \
  --tables.reset true


5. Test results

5.1. File count and size

Looking at writes to the S3-backed Hive table, roughly two files are produced per minute (the data exceeds the default 128 MB rolling file size, so an extra file is rolled). Files that have not yet been compacted are not visible to downstream readers.
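The 128 MB threshold mentioned above is the sink's default rolling file size. If different file sizes are desired, the rolling policy can, as far as I know, be tuned with additional TBLPROPERTIES in the CREATE TABLE statement from section 3. A hedged sketch; option names follow the Flink filesystem/Hive streaming sink, and the values are illustrative:

// Extra TBLPROPERTIES to append to the CREATE TABLE string built in buildHiveTable():
val rollingPolicyProps =
  "'sink.rolling-policy.file-size'='256MB'," +          // roll a new file at this size (default 128MB)
  "'sink.rolling-policy.rollover-interval'='30 min'," + // or after the file has been open this long
  "'sink.rolling-policy.check-interval'='1 min',"       // how often the time-based policy is checked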


5.2. Hive partition registration

On the Hive side, new partitions are registered in the Hive metastore automatically as data is written; both the S3 path layout and the Hive partition listing show the new dt/hr partitions.

5.3. Latest visible data

Querying from Hive shows that the most recent data visible to downstream consumers is from about one minute in the past:

select current_timestamp, ts from hive_tbl order by ts desc limit 10;

2023-07-27 09:25:24.193    2023-07-27 09:24:24
2023-07-27 09:25:24.193    2023-07-27 09:24:24
2023-07-27 09:25:24.193    2023-07-27 09:24:24
2023-07-27 09:25:24.193    2023-07-27 09:24:24


