
hadoop - Sqoop import into HBase writes no records

Reposted · Author: 行者123 · Updated: 2023-12-02 20:12:01

I am currently using the Cloudera CDH4 VM.

Everything appears to be working. The import claims success, yet no records are written. My output from the import is attached below.
[cloudera@ap00134-vip ~]$ hbase shell
12/11/26 18:53:41 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.92.1-cdh4.1.1, rUnknown, Tue Oct 16 12:01:17 PDT 2012

hbase(main):001:0>

[cloudera@ap00134-vip ~]$ sqoop version
Sqoop 1.4.1-cdh4.1.1
git commit id b0c34454234e5246b4ef345694d7e1a5904f00fe
Compiled by jenkins on Tue Oct 16 12:17:51 PDT 2012
[cloudera@ap00134-vip ~]$

sqoop import --connect jdbc:oracle:thin:@//154.11.169.116:1521/bigdata --table BIGDATA_SMALL_RAW --username test --hbase-create-table --hbase-table t1 --column-family cf --columns DSERVER_COMPUTER --hbase-row-key ROWKEY -m 1

12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop/lib/native
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-220.23.1.el6.x86_64
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:user.name=cloudera
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/cloudera
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/cloudera
12/11/26 18:41:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/11/26 18:41:12 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
12/11/26 18:41:12 INFO zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
12/11/26 18:41:12 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x13b2fc047340058, negotiated timeout = 40000
12/11/26 18:41:12 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 29089@ap00134-vip.osc.tac.net
12/11/26 18:41:13 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
12/11/26 18:41:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@71257687
12/11/26 18:41:13 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
12/11/26 18:41:13 INFO zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
12/11/26 18:41:13 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x13b2fc047340059, negotiated timeout = 40000
12/11/26 18:41:13 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 29089@ap00134-vip.osc.tac.net
12/11/26 18:41:13 INFO zookeeper.ClientCnxn: EventThread shut down
12/11/26 18:41:13 INFO zookeeper.ZooKeeper: Session: 0x13b2fc047340059 closed
12/11/26 18:41:13 INFO mapreduce.HBaseImportJob: Creating missing HBase table t1
12/11/26 18:41:17 INFO mapreduce.JobSubmitter: number of splits:1
12/11/26 18:41:17 WARN conf.Configuration: mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
12/11/26 18:41:17 WARN conf.Configuration: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
12/11/26 18:41:17 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
12/11/26 18:41:17 WARN conf.Configuration: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
12/11/26 18:41:17 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
12/11/26 18:41:18 INFO mapred.ResourceMgrDelegate: Submitted application application_1353715862141_0011 to ResourceManager at /0.0.0.0:8032
12/11/26 18:41:18 INFO mapreduce.Job: The url to track the job: http://ap00134-vip.osc.tac.net:8088/proxy/application_1353715862141_0011/
12/11/26 18:41:18 INFO mapreduce.Job: Running job: job_1353715862141_0011
12/11/26 18:41:27 INFO mapreduce.Job: Job job_1353715862141_0011 running in uber mode : false
12/11/26 18:41:27 INFO mapreduce.Job: map 0% reduce 0%
12/11/26 18:41:50 INFO mapreduce.Job: map 100% reduce 0%
12/11/26 18:41:50 INFO mapreduce.Job: Job job_1353715862141_0011 completed successfully
12/11/26 18:41:50 INFO mapreduce.Job: Counters: 27
File System Counters
FILE: Number of bytes read=120
FILE: Number of bytes written=93711
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=0
HDFS: Number of read operations=1
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=182000
Total time spent by all reduces in occupied slots (ms)=0
Map-Reduce Framework
Map input records=21
Map output records=21
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=93
CPU time spent (ms)=1910
Physical memory (bytes) snapshot=140869632
Virtual memory (bytes) snapshot=721960960
Total committed heap usage (bytes)=126877696
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
12/11/26 18:41:50 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 36.6957 seconds (0 bytes/sec)
12/11/26 18:41:51 INFO mapreduce.ImportJobBase: Retrieved 21 records.

hbase(main):005:0> scan '.META.'
ROW COLUMN+CELL
t1,,1353973273247.a173f168bb6ffabbcf78837cd3f5234b. column=info:regioninfo, timestamp=1353973273268, value={NAME => 't1,,1353973273247.a173f168bb6ffabbcf78837cd3f5234b.', STARTKEY => '', ENDKEY => '', ENCOD
ED => a173f168bb6ffabbcf78837cd3f5234b,}
t1,,1353973273247.a173f168bb6ffabbcf78837cd3f5234b. column=info:server, timestamp=1353973273287, value=ap00134-vip.osc.tac.net:56831
t1,,1353973273247.a173f168bb6ffabbcf78837cd3f5234b. column=info:serverstartcode, timestamp=1353973273287, value=1353715834683
1 row(s) in 0.0140 seconds

hbase(main):006:0> scan 't1'
ROW COLUMN+CELL
0 row(s) in 0.0160 seconds

hbase(main):007:0>
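A frequently reported cause of a "Retrieved N records / 0 bytes written" HBase import (an assumption here, not something the log above confirms) is that Sqoop silently skips any row whose `--hbase-row-key` column is NULL, and likewise drops NULL column values. One way to check the source side without a SQL client is `sqoop eval`, which runs an ad-hoc query over the same JDBC connection; the sketch below reuses the connect string, table, and column names from the import command above:

```shell
# Count rows whose ROWKEY column is NULL -- rows with a NULL row key
# cannot be written as HBase Puts and are silently skipped by Sqoop.
sqoop eval \
  --connect jdbc:oracle:thin:@//154.11.169.116:1521/bigdata \
  --username test \
  --query "SELECT COUNT(*) FROM BIGDATA_SMALL_RAW WHERE ROWKEY IS NULL"

# Also verify that the imported column is not entirely NULL,
# since NULL cell values are not written to HBase either.
sqoop eval \
  --connect jdbc:oracle:thin:@//154.11.169.116:1521/bigdata \
  --username test \
  --query "SELECT COUNT(*) FROM BIGDATA_SMALL_RAW WHERE DSERVER_COMPUTER IS NOT NULL"
```

If both counts look healthy, note also that Oracle stores unquoted identifiers in uppercase, so it can be worth retrying with an uppercase username (`--username TEST`) in case the lowercase form resolves to a different schema.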

Best Answer

I wrote the following, which worked for me:

sqoop import --connect "jdbc:sqlserver://(hostname or IP);database=dbname;username=sa;password=sqlserver" --table DimCustomer --hbase-create-table --hbase-table "HbasetableName" --column-family cf --hbase-row-key "primary key column of the table" -m 1
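After a run like the one above, it is worth confirming from the HBase shell that rows actually landed in the table; `count` plus a small `scan` is enough. This sketch uses the placeholder table name "HbasetableName" from the command above:

```shell
# Pipe a couple of commands into the HBase shell non-interactively.
hbase shell <<'EOF'
count 'HbasetableName'
scan 'HbasetableName', {LIMIT => 5}
EOF
```

A non-zero count confirms the Puts reached HBase, and the scan shows whether the expected column family and qualifiers were written.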

Regarding "hadoop - Sqoop import into HBase writes no records", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/13571976/
