
hadoop - Showing tables imported into HDFS with sqoop

Reposted — Author: 行者123 · Updated: 2023-12-02 20:12:43

I have set up a single-node Hadoop cluster and configured it to work with Apache Hive. Now, when I import a MySQL table with the following Sqoop command:

sqoop import --connect jdbc:mysql://localhost/dwhadoop --table orders --username root --password 123456 --hive-import



The import runs successfully, but when I then run

hive> show tables;

it does not list orders.

If I run the import command again, it fails with an error saying the orders directory already exists.

Please help me find a solution.

EDIT:

I did not create any table before the import. Do I have to create the orders table in Hive before running the import? If I import another table, Customers, I get the following exception:
[root@localhost root-647263876]# sqoop import --connect jdbc:mysql://localhost/dwhadoop  --table Customers --username root --password 123456 --hive-import
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.

12/08/05 07:30:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/08/05 07:30:25 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
12/08/05 07:30:25 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
12/08/05 07:30:26 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/08/05 07:30:26 INFO tool.CodeGenTool: Beginning code generation
12/08/05 07:30:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customers` AS t LIMIT 1
12/08/05 07:30:26 INFO orm.CompilationManager: HADOOP_HOME is /home/enigma/hadoop/libexec/..
Note: /tmp/sqoop-root/compile/e48d4803894ee63079f7194792d624ed/Customers.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/08/05 07:30:28 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/e48d4803894ee63079f7194792d624ed/Customers.jar
12/08/05 07:30:28 WARN manager.MySQLManager: It looks like you are importing from mysql.
12/08/05 07:30:28 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
12/08/05 07:30:28 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
12/08/05 07:30:28 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
12/08/05 07:30:28 INFO mapreduce.ImportJobBase: Beginning import of Customers
12/08/05 07:30:28 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/08/05 07:30:29 INFO mapred.JobClient: Running job: job_local_0001
12/08/05 07:30:29 INFO util.ProcessTree: setsid exited with exit code 0
12/08/05 07:30:29 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@11f41fd
12/08/05 07:30:29 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
12/08/05 07:30:30 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/08/05 07:30:30 INFO mapred.LocalJobRunner:
12/08/05 07:30:30 INFO mapred.Task: Task attempt_local_0001_m_000000_0 is allowed to commit now
12/08/05 07:30:30 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000000_0' to Customers
12/08/05 07:30:30 INFO mapred.JobClient: map 0% reduce 0%
12/08/05 07:30:32 INFO mapred.LocalJobRunner:
12/08/05 07:30:32 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/08/05 07:30:33 INFO mapred.JobClient: map 100% reduce 0%
12/08/05 07:30:33 INFO mapred.JobClient: Job complete: job_local_0001
12/08/05 07:30:33 INFO mapred.JobClient: Counters: 13
12/08/05 07:30:33 INFO mapred.JobClient: File Output Format Counters
12/08/05 07:30:33 INFO mapred.JobClient: Bytes Written=45
12/08/05 07:30:33 INFO mapred.JobClient: File Input Format Counters
12/08/05 07:30:33 INFO mapred.JobClient: Bytes Read=0
12/08/05 07:30:33 INFO mapred.JobClient: FileSystemCounters
12/08/05 07:30:33 INFO mapred.JobClient: FILE_BYTES_READ=3205
12/08/05 07:30:33 INFO mapred.JobClient: FILE_BYTES_WRITTEN=52579
12/08/05 07:30:33 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=45
12/08/05 07:30:33 INFO mapred.JobClient: Map-Reduce Framework
12/08/05 07:30:33 INFO mapred.JobClient: Map input records=3
12/08/05 07:30:33 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
12/08/05 07:30:33 INFO mapred.JobClient: Spilled Records=0
12/08/05 07:30:33 INFO mapred.JobClient: Total committed heap usage (bytes)=21643264
12/08/05 07:30:33 INFO mapred.JobClient: CPU time spent (ms)=0
12/08/05 07:30:33 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
12/08/05 07:30:33 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
12/08/05 07:30:33 INFO mapred.JobClient: Map output records=3
12/08/05 07:30:33 INFO mapreduce.ImportJobBase: Transferred 45 bytes in 5.359 seconds (8.3971 bytes/sec)
12/08/05 07:30:33 INFO mapreduce.ImportJobBase: Retrieved 3 records.
12/08/05 07:30:33 INFO hive.HiveImport: Loading uploaded data into Hive
12/08/05 07:30:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customers` AS t LIMIT 1
12/08/05 07:30:33 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Cannot run program "hive": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
at java.lang.Runtime.exec(Runtime.java:615)
at java.lang.Runtime.exec(Runtime.java:526)
at org.apache.sqoop.util.Executor.exec(Executor.java:76)
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:344)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:297)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:239)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:393)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:454)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1021)
... 15 more

But then, if I run the import again, it says:
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.

12/08/05 07:33:48 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/08/05 07:33:48 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
12/08/05 07:33:48 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
12/08/05 07:33:48 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/08/05 07:33:48 INFO tool.CodeGenTool: Beginning code generation
12/08/05 07:33:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customers` AS t LIMIT 1
12/08/05 07:33:49 INFO orm.CompilationManager: HADOOP_HOME is /home/enigma/hadoop/libexec/..
Note: /tmp/sqoop-root/compile/9855cf7de9cf54c59095fb4bfd65a369/Customers.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/08/05 07:33:50 ERROR orm.CompilationManager: Could not rename /tmp/sqoop-root/compile/9855cf7de9cf54c59095fb4bfd65a369/Customers.java to /app/hadoop/tmp/mapred/staging/root-647263876/./Customers.java
java.io.IOException: Destination '/app/hadoop/tmp/mapred/staging/root-647263876/./Customers.java' already exists
at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:1811)
at org.apache.sqoop.orm.CompilationManager.compile(CompilationManager.java:227)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:83)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:368)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:454)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
12/08/05 07:33:50 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/9855cf7de9cf54c59095fb4bfd65a369/Customers.jar
12/08/05 07:33:51 WARN manager.MySQLManager: It looks like you are importing from mysql.
12/08/05 07:33:51 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
12/08/05 07:33:51 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
12/08/05 07:33:51 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
12/08/05 07:33:51 INFO mapreduce.ImportJobBase: Beginning import of Customers
12/08/05 07:33:51 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/08/05 07:33:52 INFO mapred.JobClient: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/root-195281052/.staging/job_local_0001
12/08/05 07:33:52 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory Customers already exists
12/08/05 07:33:52 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory Customers already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:889)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:119)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:179)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:413)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:97)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:381)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:454)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)

Best Answer

The main thing to note is that your original import failed because Sqoop tried to invoke hive, but it is not on your PATH. Fix that before going any further.
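For example, making the hive launcher visible to Sqoop might look like the sketch below. The install location /home/enigma/hive is a hypothetical path chosen to match the user's home directory seen in the logs; substitute wherever Hive is actually installed.

```shell
# Hypothetical install location -- adjust to where Hive actually lives on your machine.
export HIVE_HOME=/home/enigma/hive

# Put Hive's bin directory at the front of PATH so Sqoop can spawn the "hive" process.
export PATH="$HIVE_HOME/bin:$PATH"
```

Adding these lines to ~/.bashrc (or the profile of the user running Sqoop) makes the change survive new shells; after that, re-run the sqoop import and the "Cannot run program \"hive\"" error should disappear.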

Then find and delete the Customers directory (it is a local directory, not one in HDFS — note the job ran with the local job runner) and retry.

From what I can see, errors of the form "Customers.java already exists" are not fatal.
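Concretely, the cleanup before retrying could look like this sketch. The staging path is taken from the log above; the guard around the hadoop call just makes the snippet safe on a machine where Hadoop is not on the PATH, and removing an HDFS copy is only needed if an earlier run actually wrote one there.

```shell
# The failed local-runner job left its output directory in the current working directory.
rm -rf Customers

# If an earlier run also wrote a copy into HDFS, clear that as well (ignore if absent).
if command -v hadoop >/dev/null 2>&1; then
    hadoop fs -rm -r Customers || true
fi

# Optional: remove the stale generated source behind the non-fatal rename error.
rm -f /app/hadoop/tmp/mapred/staging/root-647263876/Customers.java
```

After this cleanup (and with hive on the PATH), re-running the same sqoop import should complete the Hive load step.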

Regarding "hadoop - Showing tables imported into HDFS with sqoop", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11812138/
