I am exporting HDFS data from the departments_export file located in the HDFS directory /user/training/sqoop_import/departments_export. Below are the records in the file:
2,Fitness
3,Footwear
4,Apparel
5,Golf
6,Outdoors
7,Fan Shop
8,Development
1000,Admin
1001,Books
I want to export the data into a MySQL table named departments_export (department_id int, department_name varchar). The table already contains the following data:
mysql> select * from departments_export;
+---------------+-----------------+
| department_id | department_name |
+---------------+-----------------+
|             2 | Fitness         |
|             3 | Footwear        |
|             4 | Apparel         |
|             5 | Golf            |
|             6 | Outdoors        |
|             7 | Fan Shop        |
|             8 | Development     |
|          1000 | Admin           |
+---------------+-----------------+
I am running the sqoop export command below:
sqoop export \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username retail_dba \
--password cloudera \
--table departments_export \
--export-dir /user/training/sqoop_import/departments_export \
--batch \
-m 1 \
--update-key department_id \
--update-mode allowinsert
I see the following log at the command prompt:
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/06/10 10:42:39 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
18/06/10 10:42:40 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/06/10 10:42:41 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/06/10 10:42:43 INFO tool.CodeGenTool: Beginning code generation
18/06/10 10:42:43 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments_export` AS t LIMIT 1
18/06/10 10:42:44 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments_export` AS t LIMIT 1
18/06/10 10:42:44 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/995860db956fe955c309e42de79ab4f9/departments_export.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/06/10 10:42:51 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/995860db956fe955c309e42de79ab4f9/departments_export.jar
18/06/10 10:42:51 INFO mapreduce.ExportJobBase: Beginning export of departments_export
18/06/10 10:42:51 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
18/06/10 10:42:53 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/06/10 10:42:57 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
18/06/10 10:42:57 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
18/06/10 10:42:57 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/06/10 10:42:57 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/06/10 10:43:00 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1281)
at java.lang.Thread.join(Thread.java:1355)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:967)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:705)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:894)
[... the same hdfs.DFSClient "Caught exception java.lang.InterruptedException" warning, with an identical DFSOutputStream$DataStreamer stack trace, repeats another twelve times between 10:43:00 and 10:43:04 ...]
18/06/10 10:43:04 INFO input.FileInputFormat: Total input paths to process : 1
18/06/10 10:43:04 INFO input.FileInputFormat: Total input paths to process : 1
[... two more identical hdfs.DFSClient InterruptedException warnings at 10:43:04 ...]
18/06/10 10:43:04 INFO mapreduce.JobSubmitter: number of splits:1
18/06/10 10:43:04 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
18/06/10 10:43:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1527924460497_0039
18/06/10 10:43:06 INFO impl.YarnClientImpl: Submitted application application_1527924460497_0039
18/06/10 10:43:06 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1527924460497_0039/
18/06/10 10:43:06 INFO mapreduce.Job: Running job: job_1527924460497_0039
18/06/10 10:43:33 INFO mapreduce.Job: Job job_1527924460497_0039 running in uber mode : false
18/06/10 10:43:33 INFO mapreduce.Job: map 0% reduce 0%
18/06/10 10:43:59 INFO mapreduce.Job: map 100% reduce 0%
18/06/10 10:43:59 INFO mapreduce.Job: Job job_1527924460497_0039 failed with state FAILED due to: Task failed task_1527924460497_0039_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
18/06/10 10:43:59 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=1
Launched map tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=22735
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=22735
Total vcore-milliseconds taken by all map tasks=22735
Total megabyte-milliseconds taken by all map tasks=23280640
18/06/10 10:43:59 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/06/10 10:43:59 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 62.3015 seconds (0 bytes/sec)
18/06/10 10:43:59 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
18/06/10 10:43:59 INFO mapreduce.ExportJobBase: Exported 0 records.
18/06/10 10:43:59 ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:439)
at org.apache.sqoop.manager.SqlManager.updateTable(SqlManager.java:965)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:70)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Best answer
Does the departments_export table have a primary key? If not, create a primary key on department_id.
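A minimal sketch of that check and fix in the mysql client (assuming the same retail_db database shown above and that department_id is meant to be unique; the ALTER statement is only needed if the first query returns no rows):
-- check whether the table already has a primary key
SHOW KEYS FROM departments_export WHERE Key_name = 'PRIMARY';
-- if not, add one on department_id so --update-key can match existing rows
ALTER TABLE departments_export ADD PRIMARY KEY (department_id);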
Then try this:
sqoop export \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username retail_dba \
--password cloudera \
--table departments_export \
--update-key department_id \
--update-mode allowinsert \
--export-dir /user/training/sqoop_import/departments_export/departments.txt \
--fields-terminated-by=',' \
-m 1
I tried it and it works well.
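As a quick sanity check after a successful run (a sketch; the expected rows are simply what the input file above implies), the previously missing 1001 record should now be present:
-- run in the mysql client against retail_db
SELECT * FROM departments_export WHERE department_id IN (1000, 1001);
-- with --update-mode allowinsert, 1000/Admin is updated in place and 1001/Books is newly inserted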
Regarding mysql - job fails when exporting data from HDFS to MySQL using sqoop in Cloudera, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50786568/