java - Enabling a Hadoop scheduler (Resource-aware Adaptive Scheduler)


I want to enable the AdaptiveScheduler in Hadoop 0.20.203.0. I have a jar file for this scheduler, and I am sure the jar itself works. I put the jar in HADOOP_HOME/lib, set HADOOP_CLASSPATH in hadoop-env.sh, and set the scheduler's required properties in mapred-site.xml (a sketch of this configuration follows the log below). When I start the cluster, everything comes up (the JobTracker, the DataNodes, and so on), but when I open the scheduler's UI at http://localhost:50030/scheduler I get a 404 error. The scheduler jar was built against the hadoop-0.20.203.0 core. What should I do to fix this? Please help me. My JobTracker log is:

2013-07-27 01:22:29,333 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG: host = master/192.168.0.112
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
2013-07-27 01:22:29,527 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-07-27 01:22:29,537 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-07-27 01:22:29,538 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-07-27 01:22:29,538 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started
2013-07-27 01:22:29,781 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-07-27 01:22:29,784 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-07-27 01:22:29,785 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2013-07-27 01:22:29,796 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2013-07-27 01:22:29,796 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2013-07-27 01:22:29,796 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2013-07-27 01:22:29,797 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-07-27 01:22:29,827 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as maedeh
2013-07-27 01:22:29,852 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.
2013-07-27 01:22:29,853 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.
2013-07-27 01:22:29,856 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-07-27 01:22:35,276 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-07-27 01:22:35,404 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-07-27 01:22:35,668 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
2013-07-27 01:22:35,669 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
2013-07-27 01:22:35,669 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2013-07-27 01:22:35,669 INFO org.mortbay.log: jetty-6.1.x
2013-07-27 01:22:36,225 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
2013-07-27 01:22:36,233 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-07-27 01:22:36,234 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
2013-07-27 01:22:36,234 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
2013-07-27 01:22:36,234 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2013-07-27 01:22:36,366 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2013-07-27 01:22:36,468 INFO org.apache.hadoop.mapred.JobHistory: Creating DONE folder at file:/home/maedeh/hadoop-0.20.203.0/logs/history/done
2013-07-27 01:22:36,478 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode
2013-07-27 01:22:36,481 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030
2013-07-27 01:22:36,481 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030
2013-07-27 01:22:36,484 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
2013-07-27 01:22:36,782 INFO org.apache.hadoop.mapred.AdaptiveScheduler: Successfully configured AdaptiveScheduler
2013-07-27 01:22:36,782 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
2013-07-27 01:22:36,791 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
2013-07-27 01:22:36,791 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
2013-07-27 01:22:36,791 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-07-27 01:22:36,791 INFO org.apache.hadoop.mapred.JobTracker: Decommissioning 0 nodes
2013-07-27 01:22:36,802 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-07-27 01:22:36,802 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2013-07-27 01:22:36,803 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
2013-07-27 01:22:36,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting
2013-07-27 01:22:36,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting
2013-07-27 01:22:36,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting
2013-07-27 01:22:36,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting
2013-07-27 01:22:36,805 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting
2013-07-27 01:22:36,805 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting
2013-07-27 01:22:36,805 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting
2013-07-27 01:22:36,805 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting
2013-07-27 01:22:36,806 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
2013-07-27 01:22:36,806 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting
2013-07-27 01:22:46,806 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/slave1
2013-07-27 01:22:46,808 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_slave1:localhost/127.0.0.1:58226 to host slave1
2013-07-27 01:22:47,856 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/slave2
2013-07-27 01:22:47,859 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_slave2:localhost/127.0.0.1:55061 to host slave2
2013-07-27 01:26:28,522 INFO org.apache.hadoop.mapred.JobInProgress: job_201307270122_0001: nMaps=3 nReduces=1 max=-1
2013-07-27 01:26:28,525 INFO org.apache.hadoop.mapred.JobTracker: Job job_201307270122_0001 added successfully for user 'maedeh' to queue 'default'
2013-07-27 01:26:28,538 INFO org.apache.hadoop.mapred.AuditLogger: USER=maedeh IP=192.168.0.112 OPERATION=SUBMIT_JOB TARGET=job_201307270122_0001 RESULT=SUCCESS
2013-07-27 01:26:28,560 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201307270122_0001
2013-07-27 01:26:28,560 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201307270122_0001
2013-07-27 01:26:29,359 INFO org.apache.hadoop.mapred.JobInProgress: jobToken generated and stored with users keys in /home/maedeh/tempdir/mapred/system/job_201307270122_0001/jobToken
2013-07-27 01:26:29,403 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201307270122_0001 = 3671523. Number of splits = 3
2013-07-27 01:26:29,404 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201307270122_0001_m_000000 has split on node:/default-rack/slave1
2013-07-27 01:26:29,404 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201307270122_0001_m_000000 has split on node:/default-rack/slave2
2013-07-27 01:26:29,404 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201307270122_0001_m_000001 has split on node:/default-rack/slave1
2013-07-27 01:26:29,405 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201307270122_0001_m_000001 has split on node:/default-rack/slave2
2013-07-27 01:26:29,405 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201307270122_0001_m_000002 has split on node:/default-rack/slave1
2013-07-27 01:26:29,405 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201307270122_0001_m_000002 has split on node:/default-rack/slave2
2013-07-27 01:26:29,405 INFO org.apache.hadoop.mapred.JobInProgress: job_201307270122_0001 LOCALITY_WAIT_FACTOR=1.0
2013-07-27 01:26:29,405 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201307270122_0001 initialized successfully with 3 map tasks and 1 reduce tasks.
2013-07-27 01:26:29,708 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201307270122_0001_m_000004_0' to tip task_201307270122_0001_m_000004, for tracker 'tracker_slave1:localhost/127.0.0.1:58226'
2013-07-27 01:26:39,051 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201307270122_0001_m_000004_0' has completed task_201307270122_0001_m_000004 successfully.
2013-07-27 01:26:39,069 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201307270122_0001_m_000000_0' to tip task_201307270122_0001_m_000000, for tracker 'tracker_slave1:localhost/127.0.0.1:58226'
2013-07-27 01:26:39,073 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201307270122_0001_m_000000
2013-07-27 01:26:40,326 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201307270122_0001_m_000001_0' to tip task_201307270122_0001_m_000001, for tracker 'tracker_slave2:localhost/127.0.0.1:55061'
2013-07-27 01:26:40,345 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201307270122_0001_m_000001
2013-07-27 01:26:42,214 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201307270122_0001_m_000002_0' to tip task_201307270122_0001_m_000002, for tracker 'tracker_slave1:localhost/127.0.0.1:58226'
2013-07-27 01:26:42,214 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201307270122_0001_m_000002
2013-07-27 01:27:00,452 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201307270122_0001_m_000000_0' has completed task_201307270122_0001_m_000000 successfully.
2013-07-27 01:27:01,759 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201307270122_0001_m_000001_0' has completed task_201307270122_0001_m_000001 successfully.
2013-07-27 01:27:06,476 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201307270122_0001_m_000002_0' has completed task_201307270122_0001_m_000002 successfully.
2013-07-27 01:27:09,536 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201307270122_0001_r_000000_0' to tip task_201307270122_0001_r_000000, for tracker 'tracker_slave1:localhost/127.0.0.1:58226'
2013-07-27 01:27:21,749 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201307270122_0001_r_000000_0' has completed task_201307270122_0001_r_000000 successfully.
2013-07-27 01:27:21,756 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_201307270122_0001_m_000003_0' to tip task_201307270122_0001_m_000003, for tracker 'tracker_slave1:localhost/127.0.0.1:58226'
2013-07-27 01:27:27,774 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201307270122_0001_m_000003_0' has completed task_201307270122_0001_m_000003 successfully.
2013-07-27 01:27:27,775 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201307270122_0001 has completed successfully.
2013-07-27 01:27:27,790 INFO org.apache.hadoop.mapred.JobInProgress$JobSummary: jobId=job_201307270122_0001,submitTime=1374913588466,launchTime=1374913589405,firstMapTaskLaunchTime=1374913599068,firstReduceTaskLaunchTime=1374913629494,firstJobSetupTaskLaunchTime=1374913589678,firstJobCleanupTaskLaunchTime=1374913641756,finishTime=1374913647775,numMaps=3,numSlotsPerMap=1,numReduces=1,numSlotsPerReduce=1,user=maedeh,queue=default,status=SUCCEEDED,mapSlotSeconds=67,reduceSlotsSeconds=10,clusterMapCapacity=4,clusterReduceCapacity=4
2013-07-27 01:27:28,328 INFO org.apache.hadoop.mapred.JobHistory: Creating DONE subfolder at file:/home/maedeh/hadoop-0.20.203.0/logs/history/done/version-1/master_1374913354885_/2013/07/27/000000
2013-07-27 01:27:28,330 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/home/maedeh/hadoop-0.20.203.0/logs/history/job_201307270122_0001_1374913588466_maedeh_word+count to file:/home/maedeh/hadoop-0.20.203.0/logs/history/done/version-1/master_1374913354885_/2013/07/27/000000
2013-07-27 01:27:28,336 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/home/maedeh/hadoop-0.20.203.0/logs/history/job_201307270122_0001_conf.xml to file:/home/maedeh/hadoop-0.20.203.0/logs/history/done/version-1/master_1374913354885_/2013/07/27/000000
2013-07-27 01:27:28,345 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201307270122_0001_m_000000_0'
2013-07-27 01:27:28,348 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201307270122_0001_m_000002_0'
2013-07-27 01:27:28,348 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201307270122_0001_m_000003_0'
2013-07-27 01:27:28,348 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201307270122_0001_m_000004_0'
2013-07-27 01:27:28,348 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201307270122_0001_r_000000_0'
2013-07-27 01:27:29,228 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201307270122_0001_m_000001_0'
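
For reference, a minimal sketch of the configuration described above, assuming the scheduler class name matches the "Successfully configured AdaptiveScheduler" line in the log (the jar file name is a placeholder, not the actual file):

<!-- mapred-site.xml: tell the JobTracker which TaskScheduler class to load.
     The class name is inferred from the AdaptiveScheduler log line above;
     verify it against the classes packaged in your jar. -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.AdaptiveScheduler</value>
</property>

# hadoop-env.sh: put the scheduler jar on the JobTracker classpath.
# "adaptive-scheduler.jar" is a hypothetical name for the jar placed in HADOOP_HOME/lib.
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/lib/adaptive-scheduler.jar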

WordCount example:

maedeh@master:~/hadoop-0.20.203.0$ bin/hadoop jar hadoop*examples*.jar wordcount /maedeh/gutenberg /maedeh/gutenberg-output
13/07/27 01:26:27 INFO input.FileInputFormat: Total input paths to process : 3
13/07/27 01:26:28 INFO mapred.JobClient: Running job: job_201307270122_0001
13/07/27 01:26:29 INFO mapred.JobClient: map 0% reduce 0%
13/07/27 01:27:01 INFO mapred.JobClient: map 33% reduce 0%
13/07/27 01:27:03 INFO mapred.JobClient: map 66% reduce 0%
13/07/27 01:27:07 INFO mapred.JobClient: map 100% reduce 0%
13/07/27 01:27:22 INFO mapred.JobClient: map 100% reduce 100%
13/07/27 01:27:28 INFO mapred.JobClient: Job complete: job_201307270122_0001
13/07/27 01:27:28 INFO mapred.JobClient: Counters: 25
13/07/27 01:27:28 INFO mapred.JobClient: Job Counters
13/07/27 01:27:28 INFO mapred.JobClient: Launched reduce tasks=1
13/07/27 01:27:28 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=67684
13/07/27 01:27:28 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/07/27 01:27:28 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/07/27 01:27:28 INFO mapred.JobClient: Launched map tasks=3
13/07/27 01:27:28 INFO mapred.JobClient: Data-local map tasks=3
13/07/27 01:27:28 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10249
13/07/27 01:27:28 INFO mapred.JobClient: File Output Format Counters
13/07/27 01:27:28 INFO mapred.JobClient: Bytes Written=880838
13/07/27 01:27:28 INFO mapred.JobClient: FileSystemCounters
13/07/27 01:27:28 INFO mapred.JobClient: FILE_BYTES_READ=2214875
13/07/27 01:27:28 INFO mapred.JobClient: HDFS_BYTES_READ=3671869
13/07/27 01:27:28 INFO mapred.JobClient: FILE_BYTES_WRITTEN=3775263
13/07/27 01:27:28 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=880838
13/07/27 01:27:28 INFO mapred.JobClient: File Input Format Counters
13/07/27 01:27:28 INFO mapred.JobClient: Bytes Read=3671523
13/07/27 01:27:28 INFO mapred.JobClient: Map-Reduce Framework
13/07/27 01:27:28 INFO mapred.JobClient: Reduce input groups=82335
13/07/27 01:27:28 INFO mapred.JobClient: Map output materialized bytes=1474367
13/07/27 01:27:28 INFO mapred.JobClient: Combine output records=102324
13/07/27 01:27:28 INFO mapred.JobClient: Map input records=77931
13/07/27 01:27:28 INFO mapred.JobClient: Reduce shuffle bytes=1474367
13/07/27 01:27:28 INFO mapred.JobClient: Reduce output records=82335
13/07/27 01:27:28 INFO mapred.JobClient: Spilled Records=255966
13/07/27 01:27:28 INFO mapred.JobClient: Map output bytes=6076101
13/07/27 01:27:28 INFO mapred.JobClient: Combine input records=629172
13/07/27 01:27:28 INFO mapred.JobClient: Map output records=629172
13/07/27 01:27:28 INFO mapred.JobClient: SPLIT_RAW_BYTES=346
13/07/27 01:27:28 INFO mapred.JobClient: Reduce input records=102324
maedeh@master:~/hadoop-0.20.203.0$

Best Answer

Go to the JobTracker page -> Job History -> the job file link on the right of a job -> click it. The job configuration shown there tells you which scheduler was configured, so you can verify which scheduler is actually running.
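
For example, the same check can be done from the shell by grepping the archived job configuration for the scheduler property (a sketch; the path is taken from the JobHistory "Moving file:" lines in the log above, and mapred.jobtracker.taskScheduler is the standard 0.20.x scheduler key):

# Print the TaskScheduler that was configured for the completed job
grep "mapred.jobtracker.taskScheduler" \
  /home/maedeh/hadoop-0.20.203.0/logs/history/done/version-1/master_1374913354885_/2013/07/27/000000/job_201307270122_0001_conf.xml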

This question about java - Enabling a Hadoop scheduler (Resource-aware Adaptive Scheduler) was originally asked on Stack Overflow: https://stackoverflow.com/questions/17894690/
