
amazon-web-services - AWS EMR s3-dist-cp MapReduce job fails on CopyFilesReducer.cleanup


On a (learning) AWS EMR cluster, version emr-5.31.0, I tried to copy a file from S3 to HDFS by issuing the following command on the master node:

s3-dist-cp --src=s3://bigdata-xxxxxxxxx/emrdata/orders.tbl.gz --dest=hdfs:/emrdata/orders.tbl.gz

This actually runs a series of map/reduce jobs, and one of the reduce tasks failed:

20/10/20 17:46:29 INFO mapreduce.Job:  map 100% reduce 50%
20/10/20 17:46:31 INFO mapreduce.Job: Task Id : attempt_1603203512239_0014_r_000005_0, Status : FAILED
Error: java.lang.RuntimeException: Reducer task failed to copy 1 files: s3://bigdata-xxxxxxxxx/emrdata/orders.tbl.gz etc
at com.amazon.elasticmapreduce.s3distcp.CopyFilesReducer.cleanup(CopyFilesReducer.java:67)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:179)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:635)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
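
The one-line error in the job output does not say why the copy failed; the full reducer stack trace lives in the YARN container logs. A minimal sketch for pulling them, assuming log aggregation is enabled (the application ID is inferred from the attempt ID attempt_1603203512239_0014_r_000005_0 above):

# Fetch the aggregated container logs for the failed application and
# show some context around the s3-dist-cp exception
yarn logs -applicationId application_1603203512239_0014 | grep -B 2 -A 20 'RuntimeException'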
In case it helps, I have the full CLI output and the task syslog.
The file is a relatively small archive (400 MB).
I am still learning the AWS EMR environment, so I may be missing something that is taken for granted.
Cluster info:
Applications: Hive 2.3.7, Pig 0.17.0, Hue 4.7.1, Spark 2.4.6, Tez 0.9.2, Flink 1.11.0, ZooKeeper 3.4.14, Oozie 5.2.0
EC2 instance profile: EMR_EC2_DefaultRole
EMR role: EMR_DefaultRole
Auto Scaling role: EMR_AutoScaling_DefaultRole
I cannot pinpoint the root cause of the problem or a way around it.

Best Answer

I figured it out.
The correct way to use s3-dist-cp is to pass the bucket/directory as --src and select individual files with the --srcPattern parameter:

s3-dist-cp --src=s3://bigdata-xxxxxxxxx/emrdata/ --dest=hdfs:///emrdata/ --srcPattern='orders\.tbl\.gz'
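
The underlying issue seems to be that s3-dist-cp treats --src and --dest as directory locations, so pointing --src at a single object makes the reducer fail during cleanup; --srcPattern is the supported way to select individual files. Once the job succeeds, a quick sanity check (plain HDFS shell, nothing EMR-specific) is:

# List the destination directory to confirm the file landed in HDFS
hdfs dfs -ls hdfs:///emrdata/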

Regarding "amazon-web-services - AWS EMR s3-dist-cp MapReduce job fails on CopyFilesReducer.cleanup", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/64451158/
