apache - Hadoop 2.6 and 2.7 Apache TeraSort on 500GB or 1TB

While the job is running, the maps make progress and then, when the reducer starts, it jumps from 0 to 100 and fails, showing:

15/05/12 07:21:27 INFO terasort.TeraSort: starting
15/05/12 07:21:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 07:21:29 INFO input.FileInputFormat: Total input paths to process : 18000

Spent 1514ms computing base-splits.
Spent 109ms computing TeraScheduler splits.
Computing input splits took 1624ms
Sampling 10 splits of 18000
Making 1 from 100000 sampled records
Computing parititions took 315ms
Spent 1941ms computing partitions.
15/05/12 07:21:30 INFO client.RMProxy: Connecting to ResourceManager at n1/192.168.2.1:8032
15/05/12 07:21:31 INFO mapreduce.JobSubmitter: number of splits:18000
15/05/12 07:21:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1431389162125_0001
15/05/12 07:21:31 INFO impl.YarnClientImpl: Submitted application application_1431389162125_0001
15/05/12 07:21:31 INFO mapreduce.Job: The url to track the job: http://n1:8088/proxy/application_1431389162125_0001/
15/05/12 07:21:31 INFO mapreduce.Job: Running job: job_1431389162125_0001
15/05/12 07:21:37 INFO mapreduce.Job: Job job_1431389162125_0001 running in uber mode : false
15/05/12 07:21:37 INFO mapreduce.Job: map 0% reduce 0%
15/05/12 07:21:47 INFO mapreduce.Job: map 1% reduce 0%
15/05/12 07:22:01 INFO mapreduce.Job: map 2% reduce 0%
15/05/12 07:22:13 INFO mapreduce.Job: map 3% reduce 0%
15/05/12 07:22:25 INFO mapreduce.Job: map 4% reduce 0%
15/05/12 07:22:38 INFO mapreduce.Job: map 5% reduce 0%
15/05/12 07:22:50 INFO mapreduce.Job: map 6% reduce 0%
15/05/12 07:23:02 INFO mapreduce.Job: map 7% reduce 0%
15/05/12 07:23:15 INFO mapreduce.Job: map 8% reduce 0%
15/05/12 07:23:27 INFO mapreduce.Job: map 9% reduce 0%
15/05/12 07:23:40 INFO mapreduce.Job: map 10% reduce 0%
15/05/12 07:23:52 INFO mapreduce.Job: map 11% reduce 0%
15/05/12 07:24:02 INFO mapreduce.Job: map 100% reduce 100%
15/05/12 07:24:06 INFO mapreduce.Job: Job job_1431389162125_0001 failed with state FAILED due to: Task failed task_1431389162125_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

This is with the default configuration, and it fails every time.

I commented out all of the configuration I had added to the XML files in order to isolate the problem, but I still hit it: the job fails only when the reduce phase starts.

Best Answer

YARN handles resource management and supports both batch workloads (such as MapReduce) and real-time workloads.

Memory settings can be made at the YARN container level as well as at the mapper and reducer level. Memory is requested in increments of the YARN container size, and mapper and reducer tasks run inside containers.
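
As a minimal sketch (the sizes below are illustrative assumptions, not the cluster's actual settings), the container-level limits live in yarn-site.xml:

<!-- yarn-site.xml: container-level memory limits (example values only) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>   <!-- total memory a NodeManager can hand out -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>   <!-- smallest container the scheduler will allocate -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>   <!-- largest container the scheduler will allocate -->
</property>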

mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

The parameters above set the upper memory limit for a map or reduce task; if the memory subscribed by the task exceeds this limit, the corresponding container is killed.

These parameters determine the maximum amount of memory that can be assigned to mapper and reduce tasks respectively. As an example: a mapper is bounded by the upper memory limit defined in the configuration parameter mapreduce.map.memory.mb.

However, if the value of yarn.scheduler.minimum-allocation-mb is greater than the value of mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb wins and containers of that size are handed out.

These parameters need to be set carefully; if set incorrectly they can lead to poor performance or out-of-memory errors.
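
For illustration only (the sizes below are assumptions, not recommended values), the per-task limits are typically set in mapred-site.xml:

<!-- mapred-site.xml: per-task container sizes (example values only) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>   <!-- container size requested for each map task -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>   <!-- container size requested for each reduce task -->
</property>
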
mapreduce.reduce.java.opts and mapreduce.map.java.opts

The value of this property must be smaller than the upper limit for the map/reduce task defined in mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, since the JVM heap has to fit within the task's memory allocation.
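
A common rule of thumb (an assumption here, not something mandated by Hadoop) is to set the JVM heap to roughly 75-80% of the container size, leaving headroom for non-heap memory:

<!-- mapred-site.xml: JVM heap must fit inside the task container (example values only) -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>   <!-- ~80% of a 2048 MB mapreduce.map.memory.mb -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>   <!-- ~80% of a 4096 MB mapreduce.reduce.memory.mb -->
</property>

The same properties can also be overridden per job with -D on the terasort command line (the jar path and HDFS paths below are assumptions):

hadoop jar hadoop-mapreduce-examples-2.7.0.jar terasort \
  -Dmapreduce.reduce.memory.mb=4096 \
  -Dmapreduce.reduce.java.opts=-Xmx3276m \
  /terasort-input /terasort-output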

Regarding apache - Hadoop 2.6 and 2.7 Apache TeraSort on 500GB or 1TB, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/30194176/
