
mysql - sqoop import query gives a duplicate name error

Reposted. Author: 行者123. Updated: 2023-11-29 18:11:21

I have three tables in MySQL, named movie, moviegenre, and genre. When I try to import them into HDFS with a Sqoop free-form query:

sqoop import \
--connect jdbc:mysql://localhost/movielens \
--username user \
--password *** \
--query 'select m.id as id, m.name as movie_name, m.year as year, g.name as genre_name from movie m join moviegenre mg on m.id = mg.movieid join genre g on g.id = mg.genreid WHERE $CONDITIONS' \
--split-by m.id \
--target-dir /user/sqoop/moviegenre

it throws this error:

Imported Failed: Duplicate Column identifier specified: 'name'.

When I run the same query in MySQL, it gives the output I want:

id  movie_name  year  genre_name
1   Toy Story   1995  Animation
2   Jumanji     1995  Adventure
..  .........   ....  .........

I found this link and followed its answer: Imported Failed: Duplicate Column identifier specified (sqoop), but that did not seem to help either.

The fields in the tables are as follows:

movie = id, name, year

genre = id, name

moviegenre = movieid, genreid

Please point out the error in my query.

Best Answer

There is nothing wrong with your Sqoop command. I just created the tables in a Cloudera QuickStart VM and ran the Sqoop import; it ran fine and produced the result. You may have run the command before adding the aliases. The only other difference is that I formatted the command.
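Why the aliases matter: Sqoop generates a Java record class from the query's result-set metadata, so every projected column must have a unique identifier. Without aliases, both movie.name and genre.name come back under the same identifier 'name', which is what the code generator rejects. A minimal illustration of the colliding identifiers, using Python's sqlite3 purely as a stand-in for MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table movie (id integer, name text, year integer);
create table genre (id integer, name text);
create table moviegenre (movieid integer, genreid integer);
""")

# Without aliases, both name columns surface under the identifier 'name' --
# the same duplicate that Sqoop's class generation trips over.
cur = conn.execute(
    "select m.id, m.name, g.name "
    "from movie m join moviegenre mg on m.id = mg.movieid "
    "join genre g on g.id = mg.genreid"
)
col_names = [d[0] for d in cur.description]
print(col_names)  # 'name' appears twice in the projection
```

With `m.name as movie_name` and `g.name as genre_name`, each column gets a distinct identifier and the generated class compiles.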

In case you need it, here is what I ran.

MySQL commands

mysql> use retail_db;
mysql> create table movie (id integer, name varchar(100), year integer);
mysql> insert into movie values (1, 'TEST', 2016);
mysql> create table genre (id integer, name varchar(100));
mysql> insert into genre values (1, 'TEST');
mysql> create table moviegenre (movieid integer, genreid integer);
mysql> insert into moviegenre values (1, 1);
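The same setup can be replayed outside the VM. This sketch rebuilds the tables with Python's sqlite3 (a stand-in for MySQL, chosen only for portability) and confirms the aliased join returns one row with four uniquely named columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table movie (id integer, name text, year integer);
insert into movie values (1, 'TEST', 2016);
create table genre (id integer, name text);
insert into genre values (1, 'TEST');
create table moviegenre (movieid integer, genreid integer);
insert into moviegenre values (1, 1);
""")

# The aliased projection gives every column a unique identifier.
cur = conn.execute(
    "select m.id as id, m.name as movie_name, m.year as year, g.name as genre_name "
    "from movie m join moviegenre mg on m.id = mg.movieid "
    "join genre g on g.id = mg.genreid"
)
cols = [d[0] for d in cur.description]
rows = cur.fetchall()
print(cols)  # four distinct column names
print(rows)  # the single joined row
```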

Sqoop command

sqoop import \
--connect jdbc:mysql://localhost/retail_db \
--username root \
--password cloudera \
--query 'select m.id as id,m.name as movie_name,m.year as year, g.name as genre_name from movie m join moviegenre mg on m.id = mg.movieid join genre g on g.id = mg.genreid WHERE $CONDITIONS' \
--split-by m.id \
--target-dir /user/cloudera/moviegenre

Sqoop stdout

[cloudera@quickstart ~]$ sqoop import \
> --connect jdbc:mysql://localhost/retail_db \
> --username root \
> --password cloudera \
> --query 'select m.id as id,m.name as movie_name,m.year as year, g.name as genre_name from movie m join moviegenre mg on m.id = mg.movieid join genre g on g.id = mg.genreid WHERE $CONDITIONS' --split-by m.id --target-dir /user/cloudera/moviegenre
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/11/19 13:08:01 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.10.0
17/11/19 13:08:01 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/11/19 13:08:01 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/11/19 13:08:01 INFO tool.CodeGenTool: Beginning code generation
17/11/19 13:08:02 INFO manager.SqlManager: Executing SQL statement: select m.id as id,m.name as movie_name,m.year as year, g.name as genre_name from movie m join moviegenre mg on m.id = mg.movieid join genre g on g.id = mg.genreid WHERE (1 = 0)
17/11/19 13:08:02 INFO manager.SqlManager: Executing SQL statement: select m.id as id,m.name as movie_name,m.year as year, g.name as genre_name from movie m join moviegenre mg on m.id = mg.movieid join genre g on g.id = mg.genreid WHERE (1 = 0)
17/11/19 13:08:02 INFO manager.SqlManager: Executing SQL statement: select m.id as id,m.name as movie_name,m.year as year, g.name as genre_name from movie m join moviegenre mg on m.id = mg.movieid join genre g on g.id = mg.genreid WHERE (1 = 0)
17/11/19 13:08:02 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/3b35f51458e53da94c6852dcfc0b904a/QueryResult.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/11/19 13:08:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/3b35f51458e53da94c6852dcfc0b904a/QueryResult.jar
17/11/19 13:08:04 INFO mapreduce.ImportJobBase: Beginning query import.
17/11/19 13:08:04 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
17/11/19 13:08:04 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/11/19 13:08:05 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/11/19 13:08:05 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/11/19 13:08:08 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1281)
    at java.lang.Thread.join(Thread.java:1355)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:951)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:689)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:878)
17/11/19 13:08:08 INFO db.DBInputFormat: Using read commited transaction isolation
17/11/19 13:08:08 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(t1.id), MAX(t1.id) FROM (select m.id as id,m.name as movie_name,m.year as year, g.name as genre_name from movie m join moviegenre mg on m.id = mg.movieid join genre g on g.id = mg.genreid WHERE (1 = 1) ) AS t1
17/11/19 13:08:08 INFO db.IntegerSplitter: Split size: 0; Num splits: 4 from: 1 to: 1
17/11/19 13:08:08 INFO mapreduce.JobSubmitter: number of splits:1
17/11/19 13:08:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1510865312807_0011
17/11/19 13:08:09 INFO impl.YarnClientImpl: Submitted application application_1510865312807_0011
17/11/19 13:08:09 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1510865312807_0011/
17/11/19 13:08:09 INFO mapreduce.Job: Running job: job_1510865312807_0011
17/11/19 13:08:33 INFO mapreduce.Job: Job job_1510865312807_0011 running in uber mode : false
17/11/19 13:08:33 INFO mapreduce.Job: map 0% reduce 0%
17/11/19 13:08:52 INFO mapreduce.Job: map 100% reduce 0%
17/11/19 13:08:52 INFO mapreduce.Job: Job job_1510865312807_0011 completed successfully
17/11/19 13:08:52 INFO mapreduce.Job: Counters: 30
File System Counters
    FILE: Number of bytes read=0
    FILE: Number of bytes written=148008
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=99
    HDFS: Number of bytes written=17
    HDFS: Number of read operations=4
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
Job Counters
    Launched map tasks=1
    Other local map tasks=1
    Total time spent by all maps in occupied slots (ms)=16886
    Total time spent by all reduces in occupied slots (ms)=0
    Total time spent by all map tasks (ms)=16886
    Total vcore-seconds taken by all map tasks=16886
    Total megabyte-seconds taken by all map tasks=17291264
Map-Reduce Framework
    Map input records=1
    Map output records=1
    Input split bytes=99
    Spilled Records=0
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=702
    CPU time spent (ms)=2330
    Physical memory (bytes) snapshot=231743488
    Virtual memory (bytes) snapshot=1567064064
    Total committed heap usage (bytes)=221249536
File Input Format Counters
    Bytes Read=0
File Output Format Counters
    Bytes Written=17
17/11/19 13:08:52 INFO mapreduce.ImportJobBase: Transferred 17 bytes in 46.7631 seconds (0.3635 bytes/sec)
17/11/19 13:08:52 INFO mapreduce.ImportJobBase: Retrieved 1 records.

On "mysql - sqoop import query gives a duplicate name error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47374696/
