
amazon-web-services - Athena and S3 Inventory: HIVE_BAD_DATA: Field size's type LONG in ORC is incompatible with type varchar defined in table schema


I am trying to understand how to use S3 Inventory.
I am following this tutorial.

After loading the inventory list into my table, I tried to query it and ran into two problems.

1) SELECT key, size FROM table; — the size column shows a magic number (value) 4923069104295859283 for every record.

2) select * from table; — Query ID: cf07c309-c685-4bf4-9705-8bca69b00b3c.

Received the error:

HIVE_BAD_DATA: Field size's type LONG in ORC is incompatible with type varchar defined in table schema

Here is my table schema:
CREATE EXTERNAL TABLE `table`(
`bucket` string,
`key` string,
`version_id` string,
`is_latest` boolean,
`is_delete_marker` boolean,
`size` bigint,
`last_modified_date` timestamp,
`e_tag` string,
`storage_class` string)
PARTITIONED BY (
`dt` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://......../hive'
TBLPROPERTIES (
'transient_lastDdlTime'='1516093603')

Best answer

Running the following command against any of the ORC files from the AWS-generated S3 inventory will give you the actual structure of the inventory:

$> hive --orcfiledump ~/Downloads/017c2014-1205-4431-a30d-2d9ae15492d6.orc
...
Processing data file /tmp/017c2014-1205-4431-a30d-2d9ae15492d6.orc [length: 4741786]
Structure for /tmp/017c2014-1205-4431-a30d-2d9ae15492d6.orc
File Version: 0.12 with ORC_135
Rows: 223473
Compression: ZLIB
Compression size: 262144
Type: struct<bucket:string,key:string,size:bigint,last_modified_date:timestamp,e_tag:string,storage_class:string,is_multipart_uploaded:boolean,replication_status:string,encryption_status:string>
...
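If you do not have local access to an ORC file, a similar sanity check can be done from Athena itself by comparing that structure with the DDL Athena has registered. A minimal sketch, assuming the table from the question is named `table` in the current database (hypothetical names, not from the original post):

-- Show the DDL Athena currently has for the table, to compare
-- column names and types against the ORC struct printed above.
SHOW CREATE TABLE `table`;

-- A per-partition row count is a cheap way to confirm that the
-- partitions and the symlink location are wired up correctly.
SELECT dt, count(*) AS objects FROM `table` GROUP BY dt;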

It seems the example AWS provides here expects your inventory to be configured not only for the current version but for all versions of the objects in the bucket.
The correct table structure for Athena, for an encrypted bucket, is then:
CREATE EXTERNAL TABLE inventory(
bucket string,
key string,
version_id string,
is_latest boolean,
is_delete_marker boolean,
size bigint,
last_modified_date timestamp,
e_tag string,
storage_class string,
is_multipart_uploaded boolean,
replication_status string,
encryption_status string
)
PARTITIONED BY (dt string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
LOCATION 's3://............/hive'
TBLPROPERTIES ('has_encrypted_data'='true');
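After recreating the table, the dt partitions still need to be registered before any rows are visible. A minimal sketch of that step, where the dt value and the S3 inventory destination prefix are placeholders rather than values from the original post:

-- Register one inventory delivery as a partition; adjust the dt value
-- and the destination prefix to match your own inventory configuration.
ALTER TABLE inventory ADD PARTITION (dt = '2018-01-14-00-00')
LOCATION 's3://your-inventory-destination/your-bucket/config-id/hive/dt=2018-01-14-00-00/';

-- With the schema matching the ORC structure, the original query should
-- now return real object sizes instead of a garbage value.
SELECT key, size FROM inventory WHERE dt = '2018-01-14-00-00' LIMIT 10;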

Regarding amazon-web-services - Athena and S3 Inventory: HIVE_BAD_DATA: Field size's type LONG in ORC is incompatible with type varchar defined in table schema, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48283768/
