
sql - Wrong file format

Reposted · Author: 可可西里 · Updated: 2023-11-01 15:48:59

I am working with Cloudera and have just started learning it, so I have been trying to implement the well-known Twitter example with Flume. With some effort I have managed to stream data from Twitter, and it is now saved in a file. With the data in hand, I want to analyze it, but the problem is that I cannot get the tweet data into a table. I have successfully created the "tweets" table, but I cannot load any data into it. Below I give the Twitter.conf file, the external table creation query, the data load query, the error message, and a chunk of the data I got. Please point out where I went wrong. Note that I have been writing the queries in the HIVE editor.

The Twitter.conf file

# Naming the components on the current agent. 
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

# Describing/Configuring the source
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = 95y0IPClnNPUTJ1AHSfvBLWes
TwitterAgent.sources.Twitter.consumerSecret = UmlNcFwiBIQIvuHF9J3M3xUv6UmJlQI3RZWT8ybF2KaKcDcAw5
TwitterAgent.sources.Twitter.accessToken = 994845066882699264-Yk0DNFQ4VJec9AaCQ7QTBlHldK5BSK1
TwitterAgent.sources.Twitter.accessTokenSecret = q1Am5G3QW4Ic7VBx6qJg0Iv7QXfk0rlDSrJi1qDjmY3mW
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientiest, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing



# Describing/Configuring the channel
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100

# Binding the source and sink to the channel
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.HDFS.channel = MemChannel

# Describing/Configuring the sink

TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = /user/cloudera/latestdata/
TwitterAgent.sinks.flumeHDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000

External table creation query and data load query

CREATE EXTERNAL TABLE tweets (
id BIGINT,
created_at STRING,
source STRING,
favorited BOOLEAN,
retweet_count INT,
retweeted_status STRUCT<
text:STRING,
user:STRUCT<screen_name:STRING,name:STRING>>,
entities STRUCT<
urls:ARRAY<STRUCT<expanded_url:STRING>>,
user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,
hashtags:ARRAY<STRUCT<text:STRING>>>,
text STRING,
user STRUCT<
screen_name:STRING,
name:STRING,
friends_count:INT,
followers_count:INT,
statuses_count:INT,
verified:BOOLEAN,
utc_offset:INT,
time_zone:STRING>,
in_reply_to_screen_name STRING
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/user/cloudera/tweets';




LOAD DATA INPATH '/user/cloudera/latestdata/FlumeData.1540555155464'
INTO TABLE `default.tweets`
PARTITION (datehour='2013022516')

Error when I try to load the data into the table

Error while processing statement: FAILED: Execution Error, return code 20013 from org.apache.hadoop.hive.ql.exec.MoveTask. Wrong file format. Please check the file's format.

The Twitter data file I got

SEQ!org.apache.hadoop.io.LongWritableorg.apache.hadoop.io.Text。{"type":"record","name":"Doc","doc":"adoc","fields":[{"name":"id","type":"string"},{"name":"user_friends_count","type":["int","null"]},{"name":"user_location","type":["string","null"]},{"name":"user_description","type":["string","null"]},{"name":"user_statuses_count","type":["int","null"]},{"name":"user_followers_count","type":["int","null"]},{"name":"user_name","type":["string","null"]},{"name":"user_screen_name","type":["string","null"]},{"name":"created_at","type":["string","null"]},{"name":"text","type":["string","null"]},{"name":"retweet_count","type":["long","null"]},{"name":"retweeted","type":["boolean","null"]},{"name":"in_reply_to_user_id","type":["long","null"]},{"name":"source","type":[ "string","null"]},{"name":"in_reply_to_status_id","type":["long","null"]},{"name":"media_url_https","type":["string ","null"]},{"name":"expanded_url","type":["string","null"]}]}��������w����M��J��&1055790978844540929���� �gracie 🔪owehimnothng(2018-10-26T04:59:19Z�GIRLS WE THRONGING IT BACK FOR JOAN O F

It has been a week and I cannot figure out what the solution is. Let me know if more information is needed and I will provide it here.

Best Answer

Flume does not write JSON, so the JsonSerDe is not what you want.

You need to fix these lines. Note that the first one configures a sink named flumeHDFS, which is never declared in your agent (your sink is named HDFS), so the fileType setting is silently ignored and the sink falls back to its default, SequenceFile:

TwitterAgent.sinks.flumeHDFS.hdfs.fileType = DataStream 
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
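
A minimal corrected sink section might look like this (a sketch, assuming the same agent and sink names declared above, where the sink is registered as HDFS):

# Describing/Configuring the sink
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = /user/cloudera/latestdata/
# Both of the following must use the sink name "HDFS" (not "flumeHDFS"),
# otherwise the property is ignored and the default SequenceFile format is used
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000

After restarting the agent with this change, newly rolled files will no longer be wrapped in a SequenceFile container; the files already sitting in /user/cloudera/latestdata/ keep their old format and would need to be removed or re-collected.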

Flume is currently writing SequenceFiles that contain Avro, as the header of your file shows:

SEQ!org.apache.hadoop.io.LongWritableorg.apache.hadoop.io.Text�����R�LX�}H��>(�H�Objavro.schema

And Hive can read Avro as-is, so it is not clear why you are using the JsonSerDe at all.
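
If you keep the Avro output from org.apache.flume.source.twitter.TwitterSource, Hive's Avro storage handler can read it directly once it is no longer wrapped in a SequenceFile. A hedged sketch (the table name tweets_avro is made up here, and the schema literal repeats only a few of the fields from the schema printed in your file dump; the remaining fields would be added the same way):

-- Sketch only: STORED AS AVRO requires Hive 0.14 or later
CREATE EXTERNAL TABLE tweets_avro
STORED AS AVRO
LOCATION '/user/cloudera/latestdata/'
TBLPROPERTIES ('avro.schema.literal' = '{
  "type": "record", "name": "Doc", "doc": "adoc",
  "fields": [
    {"name": "id",               "type": "string"},
    {"name": "user_screen_name", "type": ["string","null"]},
    {"name": "created_at",       "type": ["string","null"]},
    {"name": "text",             "type": ["string","null"]},
    {"name": "retweet_count",    "type": ["long","null"]}
  ]
}');

With an external table pointed straight at the Flume output directory, no LOAD DATA step is needed at all; Hive reads the files in place.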

Regarding sql - Wrong file format, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53028772/
