
hadoop - Creating an Avro Table with Buckets in Hive


I created an Avro table with buckets, but I ran into the following error:

Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Bucket columns uniqueid is not part of the table columns ([]


CREATE TABLE s.TEST_OD_V(
UniqueId int,
dtCd string,
SysSK int,
Ind string)
PARTITIONED BY (vcd STRING)
CLUSTERED BY (UniqueId) INTO 500 BUCKETS
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.url'='s3a://bucket/schema/pr_v.avsc');

I am using Hive 1.1. Please help.

Best Answer

Try this (available since Hive 0.14). With STORED AS AVRO, Hive derives the Avro schema from the columns declared in the DDL, so the CLUSTERED BY column UniqueId can be resolved when the statement is validated; with an external avro.schema.url, the table's column list appears empty at that point, which is likely why the error reports ([]):

CREATE TABLE s.TEST_OD_V(
UniqueId int,
dtCd string,
SysSK int,
Ind string)
PARTITIONED BY (vcd STRING)
CLUSTERED BY (UniqueId) INTO 500 BUCKETS
STORED AS AVRO;
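
Note that in Hive 1.x bucketing is only enforced on write when the session asks for it. Below is a minimal sketch of populating the table with a dynamic-partition insert; the staging table s.TEST_OD_STG is a hypothetical name standing in for wherever the raw rows live:

-- Hive 1.x: make inserts hash rows into the declared 500 buckets
-- (this property was removed in Hive 2.0, where bucketing is always enforced)
SET hive.enforce.bucketing = true;
-- Let the vcd partition value come from the data instead of a static literal
SET hive.exec.dynamic.partition.mode = nonstrict;

-- The dynamic partition column (vcd) must come last in the SELECT list
INSERT OVERWRITE TABLE s.TEST_OD_V PARTITION (vcd)
SELECT UniqueId, dtCd, SysSK, Ind, vcd
FROM s.TEST_OD_STG;

You can confirm the table picked up the layout with DESCRIBE FORMATTED s.TEST_OD_V, which should show Num Buckets: 500 and UniqueId under Bucket Columns.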

Regarding hadoop - Creating an Avro Table with Buckets in Hive, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38689355/
